Dataset schema:
- id: string, 9 to 16 characters (arXiv identifier)
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- 18 boolean label columns, 2 classes each (true/false), in this order: cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k
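The positional boolean columns map back to category names by schema order. A minimal sketch of decoding one row, assuming rows are plain Python dicts keyed by column name (the sample row below is illustrative, mirroring the first record):

```python
# Label columns in the order they appear in the schema above.
LABELS = ["cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
          "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
          "cs.MA", "cs.NE", "cs.DB", "Other"]

def active_labels(row):
    """Return the arXiv categories whose boolean flag is set in a row."""
    return [name for name in LABELS if row.get(name)]

# Illustrative row: all flags false except cs.LG and cs.CV.
row = {"id": "2102.07615", **{name: False for name in LABELS},
       "cs.LG": True, "cs.CV": True}
print(active_labels(row))  # → ['cs.LG', 'cs.CV']
```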
id: 2102.07615
title: Learning image quality assessment by reinforcing task amenable data selection
In this paper, we consider a type of image quality assessment as a task-specific measurement, which can be used to select images that are more amenable to a given target task, such as image classification or segmentation. We propose to train simultaneously two neural networks for image selection and a target task using reinforcement learning. A controller network learns an image selection policy by maximising an accumulated reward based on the target task performance on the controller-selected validation set, whilst the target task predictor is optimised using the training set. The trained controller is therefore able to reject those images that lead to poor accuracy in the target task. In this work, we show that the controller-predicted image quality can be significantly different from the task-specific image quality labels that are manually defined by humans. Furthermore, we demonstrate that it is possible to learn effective image quality assessment without using a ``clean'' validation set, thereby avoiding the requirement for human labelling of images with respect to their amenability for the task. Using $6712$ labelled and segmented clinical ultrasound images from $259$ patients, experimental results on holdout data show that the proposed image quality assessment achieved a mean classification accuracy of $0.94\pm0.01$ and a mean segmentation Dice of $0.89\pm0.02$, by discarding $5\%$ and $15\%$ of the acquired images, respectively. Significantly improved performance was observed for both tested tasks, compared with the respective $0.90\pm0.01$ and $0.82\pm0.02$ from networks without considering task amenability. This enables image quality feedback during real-time ultrasound acquisition, among many other medical imaging applications.
labels: cs.LG, cs.CV
__index_level_0__: 220,158
id: 1705.07830
title: Ask the Right Questions: Active Question Reformulation with Reinforcement Learning
We frame Question Answering (QA) as a Reinforcement Learning task, an approach that we call Active Question Answering. We propose an agent that sits between the user and a black box QA system and learns to reformulate questions to elicit the best possible answers. The agent probes the system with, potentially many, natural language reformulations of an initial question and aggregates the returned evidence to yield the best answer. The reformulation system is trained end-to-end to maximize answer quality using policy gradient. We evaluate on SearchQA, a dataset of complex questions extracted from Jeopardy!. The agent outperforms a state-of-the-art base model, playing the role of the environment, and other benchmarks. We also analyze the language that the agent has learned while interacting with the question answering system. We find that successful question reformulations look quite different from natural language paraphrases. The agent is able to discover non-trivial reformulation strategies that resemble classic information retrieval techniques such as term re-weighting (tf-idf) and stemming.
labels: cs.AI, cs.CL
__index_level_0__: 73,902
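The abstract above notes that learned reformulations resemble classic IR techniques such as tf-idf term re-weighting. As background only (not the paper's implementation), a minimal sketch of tf-idf over tokenized documents with toy data:

```python
import math
from collections import Counter

def tf_idf(docs):
    """Per-document tf-idf weights: tf is the term's frequency within the
    document, idf is log(N / document frequency) across the corpus."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # count each term at most once per document
    return [{t: (c / len(doc)) * math.log(n / df[t])
             for t, c in Counter(doc).items()} for doc in docs]

docs = [["term", "weighting", "term"], ["stemming", "weighting"]]
weights = tf_idf(docs)
# "weighting" occurs in every document, so its idf (and hence weight) is 0.
```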
id: 1410.2168
title: Improved steady-state stability of power grids with a communication infrastructure
Efficient control of power systems is becoming increasingly difficult as they gain in complexity and size. We propose an automatic control strategy that regulates the mechanical power output of the generators in a power grid based on information obtained via a communication infrastructure. An algorithm that optimizes steady-state stability of a power grid by iteratively adding communication links is presented. The proposed control scheme is successfully applied to the IEEE New England and IEEE RTS 96 power systems, leading to a significant increase in the steady-state stability of the systems and an improvement in their overall robustness. The resulting communication network topology differs significantly from the transmission grid topology. This shows how complex steady-state control of power systems is: it is influenced by the generator configuration, the transmission network topology, and the manner in which control is executed.
labels: cs.SY
__index_level_0__: 36,597
id: 2103.07532
title: Comprehensive and Comprehensible Data Catalogs: The What, Who, Where, When, Why, and How of Metadata Management
Data management tasks require access to metadata, which is increasingly tracked by databases called data catalogs. Current catalogs are too dependent on users' understanding of data, leading to difficulties in large organizations of users with different skills: catalogs either make metadata easy for users to store and difficult to retrieve, or they make it easy to retrieve, but difficult to store. In this paper, we present 5W1H+R, a new catalog mental model that is comprehensive in the metadata it represents, and comprehensible in that it permits all users to locate metadata easily. We demonstrate these properties via a user study. We then discuss practical guidelines for implementing the new mental model. We conclude mental models are important to make data catalogs more useful and to boost metadata management efforts.
labels: cs.DB
__index_level_0__: 224,612
id: 2202.05655
title: Spatial Reuse in Dense Wireless Areas: A Cross-layer Optimization Approach via ADMM
This paper introduces an efficient method for communication resource use in dense wireless areas where all nodes must communicate with a common destination node. The proposed method groups nodes based on their distance from the destination and creates a structured multi-hop configuration in which each group can relay its neighbor's data. The large number of active radio nodes and the common direction of communication toward a single destination are exploited to reuse the limited spectrum resources in spatially separated groups. Spectrum allocation constraints among groups are then embedded in a joint routing and resource allocation framework to optimize the route and amount of resources allocated to each node. The solution to this problem uses coordination among the lower layers of the wireless-network protocol stack to outperform conventional approaches where these layers are decoupled. Furthermore, the structure of this problem is exploited to obtain a semi-distributed optimization algorithm based on the alternating direction method of multipliers (ADMM) where each node can optimize its resources independently based on local channel information.
labels: cs.SY
__index_level_0__: 279,938
id: 1001.1798
title: Fountain Codes with Varying Probability Distributions
Fountain codes are rateless erasure-correcting codes, i.e., an essentially infinite stream of encoded packets can be generated from a finite set of data packets. Several fountain codes have been proposed recently to minimize overhead, many of which involve modifications of the Luby transform (LT) code. These fountain codes, like the LT code, have the implicit assumption that the probability distribution is fixed throughout the encoding process. In this paper, we will use the theory of posets to show that this assumption is unnecessary, and by dropping it, we can achieve overhead reduction by as much as 64% lower than LT codes. We also present the fundamental theory of probability distribution designs for fountain codes with non-constant probability distributions that minimize overhead.
labels: cs.IT
__index_level_0__: 5,323
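For context on the fixed degree distribution that the abstract above says LT codes assume, here is a toy sketch of a single LT encoding step using the ideal soliton distribution (illustrative only; the paper's varying, non-constant distributions are precisely what relaxes this):

```python
import random
from functools import reduce

def ideal_soliton(k):
    """Ideal soliton distribution: rho(1) = 1/k, rho(d) = 1/(d(d-1)) for d >= 2."""
    return [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]

def lt_encode_one(packets, rng):
    """Emit one encoded packet: draw a degree d from the (fixed) distribution,
    then XOR d distinct source packets chosen uniformly at random."""
    k = len(packets)
    d = rng.choices(range(1, k + 1), weights=ideal_soliton(k))[0]
    chosen = rng.sample(range(k), d)
    payload = reduce(lambda a, b: a ^ b, (packets[i] for i in chosen))
    return chosen, payload

rng = random.Random(7)
packets = [0x11, 0x22, 0x44, 0x88]  # toy source packets as small integers
chosen, payload = lt_encode_one(packets, rng)
```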
id: 1611.06730
title: On the convergence of gradient-like flows with noisy gradient input
In view of solving convex optimization problems with noisy gradient input, we analyze the asymptotic behavior of gradient-like flows under stochastic disturbances. Specifically, we focus on the widely studied class of mirror descent schemes for convex programs with compact feasible regions, and we examine the dynamics' convergence and concentration properties in the presence of noise. In the vanishing noise limit, we show that the dynamics converge to the solution set of the underlying problem (a.s.). Otherwise, when the noise is persistent, we show that the dynamics are concentrated around interior solutions in the long run, and they converge to boundary solutions that are sufficiently "sharp". Finally, we show that a suitably rectified variant of the method converges irrespective of the magnitude of the noise (or the structure of the underlying convex program), and we derive an explicit estimate for its rate of convergence.
labels: cs.LG
__index_level_0__: 64,250
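As a concrete illustration of the setting above (not the paper's analysis), a minimal sketch of entropic mirror descent on the probability simplex with persistent Gaussian gradient noise, using a vanishing 1/sqrt(t) step size so the noise averages out:

```python
import math
import random

def noisy_mirror_descent(grad, x0, steps=2000, sigma=0.1, seed=0):
    """Entropic mirror descent on the simplex with zero-mean Gaussian
    gradient noise added at every step."""
    rng = random.Random(seed)
    x = list(x0)
    for t in range(1, steps + 1):
        g = [gi + sigma * rng.gauss(0.0, 1.0) for gi in grad(x)]
        eta = 1.0 / math.sqrt(t)            # vanishing step size
        x = [xi * math.exp(-eta * gi) for xi, gi in zip(x, g)]
        s = sum(x)
        x = [xi / s for xi in x]            # project back onto the simplex
    return x

# Minimise <c, x> over the simplex; mass concentrates on argmin(c).
c = [1.0, 0.2, 0.7]
x = noisy_mirror_descent(lambda _: c, [1 / 3] * 3)
```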
id: 2209.01635
title: Towards Adaptive Storage Views in Virtual Memory
Traditionally, DBMSs separate their storage layer from their indexing layer. While the storage layer physically materializes the database and provides low-level access methods to it, the indexing layer on top enables faster locating of searched-for entries. While this clearly separates concerns, it also adds a level of indirection to the already complex execution path. In this work, we propose an alternative design: Instead of conservatively separating both layers, we naturally fuse them by integrating an adaptive coarse-granular indexing scheme directly into the storage layer. We do so by utilizing tools of the virtual memory management subsystem provided by the OS: On the lowest level, we materialize the database content in the form of physical main memory. On top of that, we allow the creation of arbitrarily many virtual memory storage views that map to subsets of the database having certain properties of interest. This creation happens fully adaptively as a side-product of query processing. To speed up query answering, we route each query automatically to the most fitting virtual view(s). By this, we naturally index the storage layer in its core and gradually improve the provided scan performance.
labels: cs.DB, Other
__index_level_0__: 315,962
id: 2208.07444
title: Entity Anchored ICD Coding
Medical coding is a complex task, requiring assignment of a subset of over 72,000 ICD codes to a patient's notes. Modern natural language processing approaches to these tasks have been challenged by the length of the input and size of the output space. We limit our model inputs to a small window around medical entities found in our documents. From those local contexts, we build contextualized representations of both ICD codes and entities, and aggregate over these representations to form document-level predictions. In contrast to existing methods which use a representation fixed either in size or by codes seen in training, we represent ICD codes by encoding the code description with local context. We discuss metrics appropriate to deploying coding systems in practice. We show that our approach is superior to existing methods in both standard and deployable measures, including performance on rare and unseen codes.
labels: cs.LG, cs.CL
__index_level_0__: 313,046
id: 2309.01093
title: CoTDet: Affordance Knowledge Prompting for Task Driven Object Detection
Task driven object detection aims to detect object instances suitable for affording a task in an image. Its challenge lies in object categories available for the task being too diverse to be limited to a closed set of object vocabulary for traditional object detection. Simply mapping categories and visual features of common objects to the task cannot address the challenge. In this paper, we propose to explore fundamental affordances rather than object categories, i.e., common attributes that enable different objects to accomplish the same task. Moreover, we propose a novel multi-level chain-of-thought prompting (MLCoT) to extract the affordance knowledge from large language models, which contains multi-level reasoning steps from task to object examples to essential visual attributes with rationales. Furthermore, to fully exploit knowledge to benefit object recognition and localization, we propose a knowledge-conditional detection framework, namely CoTDet. It conditions the detector from the knowledge to generate object queries and regress boxes. Experimental results demonstrate that our CoTDet outperforms state-of-the-art methods consistently and significantly (+15.6 box AP and +14.8 mask AP) and can generate rationales for why objects are detected to afford the task.
labels: cs.CV
__index_level_0__: 389,541
id: 2110.06972
title: Block Contextual MDPs for Continual Learning
In reinforcement learning (RL), when defining a Markov Decision Process (MDP), the environment dynamics is implicitly assumed to be stationary. This assumption of stationarity, while simplifying, can be unrealistic in many scenarios. In the continual reinforcement learning scenario, the sequence of tasks is another source of nonstationarity. In this work, we propose to examine this continual reinforcement learning setting through the block contextual MDP (BC-MDP) framework, which enables us to relax the assumption of stationarity. This framework challenges RL algorithms to handle both nonstationarity and rich observation settings and, by additionally leveraging smoothness properties, enables us to study generalization bounds for this setting. Finally, we take inspiration from adaptive control to propose a novel algorithm that addresses the challenges introduced by this more realistic BC-MDP setting, allows for zero-shot adaptation at evaluation time, and achieves strong performance on several nonstationary environments.
labels: cs.AI, cs.LG
__index_level_0__: 260,803
id: 1910.10075
title: Automatic Generation of Multi-precision Multi-arithmetic CNN Accelerators for FPGAs
Modern deep Convolutional Neural Networks (CNNs) are computationally demanding, yet real applications often require high throughput and low latency. To help tackle these problems, we propose Tomato, a framework designed to automate the process of generating efficient CNN accelerators. The generated design is pipelined and each convolution layer uses different arithmetics at various precisions. Using Tomato, we showcase state-of-the-art multi-precision multi-arithmetic networks, including MobileNet-V1, running on FPGAs. To our knowledge, this is the first multi-precision multi-arithmetic auto-generation framework for CNNs. In software, Tomato fine-tunes pretrained networks to use a mixture of short powers-of-2 and fixed-point weights with a minimal loss in classification accuracy. The fine-tuned parameters are combined with the templated hardware designs to automatically produce efficient inference circuits in FPGAs. We demonstrate how our approach significantly reduces model sizes and computation complexities, and permits us to pack a complete ImageNet network onto a single FPGA without accessing off-chip memories for the first time. Furthermore, we show how Tomato produces implementations of networks with various sizes running on single or multiple FPGAs. To the best of our knowledge, our automatically generated accelerators outperform the closest FPGA-based competitors by at least 2-4x in latency and throughput; the generated accelerator runs ImageNet classification at a rate of more than 3000 frames per second.
labels: cs.LG, Other
__index_level_0__: 150,386
id: 2110.01304
title: Synthetic Velocity Mapping Cardiac MRI Coupled with Automated Left Ventricle Segmentation
Temporal patterns of cardiac motion provide important information for cardiac disease diagnosis. This pattern could be obtained by three-directional CINE multi-slice left ventricular myocardial velocity mapping (3Dir MVM), which is a cardiac MR technique providing magnitude and phase information of the myocardial motion simultaneously. However, long acquisition time limits the usage of this technique by causing breathing artifacts, while shortening the time causes low temporal resolution and may provide an inaccurate assessment of cardiac motion. In this study, we proposed a frame synthesis algorithm to increase the temporal resolution of 3Dir MVM data. Our algorithm features 1) three attention-based encoders which accept magnitude images, phase images, and myocardium segmentation masks respectively as inputs; 2) three decoders that output the interpolated frames and corresponding myocardium segmentation results; and 3) loss functions highlighting myocardium pixels. Our algorithm not only increases the temporal resolution of 3Dir MVMs, but also generates the myocardium segmentation results at the same time.
labels: cs.CV
__index_level_0__: 258,721
id: 1604.00975
title: Combining Vision, Machine Learning and Automatic Control to Play the Labyrinth Game
The labyrinth game is a simple yet challenging platform, not only for humans but also for control algorithms and systems. The game is easy to understand but still very hard to master. From a system point of view, the ball behaviour is in general easy to model but close to the obstacles there are severe non-linearities. Additionally, the far from flat surface on which the ball rolls provides for changing dynamics depending on the ball position. The general dynamics of the system can easily be handled by traditional automatic control methods. Taking the obstacles and uneven surface into account would require very detailed models of the system. A simple deterministic control algorithm is combined with a learning control method. The simple control method provides initial training data. As the learning method is trained, the system can learn from the results of its own actions and the performance improves well beyond the performance of the initial controller. A vision system and image analysis is used to estimate the ball position while a combination of a PID controller and a learning controller based on LWPR is used to learn to navigate the ball through the maze.
labels: cs.RO, cs.SY
__index_level_0__: 54,132
id: 2001.06032
title: Compactly Supported Quasi-tight Multiframelets with High Balancing Orders and Compact Framelet Transforms
Framelets (a.k.a. wavelet frames) are of interest in both theory and applications. Quite often, tight or dual framelets with high vanishing moments are constructed through the popular oblique extension principle (OEP). Though OEP can increase vanishing moments for improved sparsity, it has a serious shortcoming for scalar framelets: the associated discrete framelet transform is often not compact and deconvolution is unavoidable. Here we say that a framelet transform is compact if it can be implemented by convolution using only finitely supported filters. On the other hand, in sharp contrast to the extensively studied scalar framelets, multiframelets (a.k.a. vector framelets) derived through OEP from refinable vector functions are much less studied and are far from well understood. Also, most constructed multiframelets often lack balancing property which reduces sparsity. In this paper, we are particularly interested in quasi-tight multiframelets, which are special dual multiframelets but behave almost identically as tight multiframelets. From any compactly supported \emph{refinable vector function having at least two entries}, we prove that we can always construct through OEP a compactly supported quasi-tight multiframelet such that (1) its associated discrete framelet transform is compact and has the highest possible balancing order; (2) all compactly supported framelet generators have the highest possible order of vanishing moments, matching the approximation/accuracy order of its underlying refinable vector function. This result demonstrates great advantages of OEP for multiframelets (retaining all the desired properties) over scalar framelets.
labels: cs.IT
__index_level_0__: 160,713
id: 1711.02597
title: Towards an Economic Analysis of Routing in Payment Channel Networks
Payment channel networks are supposed to overcome technical scalability limitations of blockchain infrastructure by employing a special overlay network with fast payment confirmation and only sporadic settlement of netted transactions on the blockchain. However, they introduce economic routing constraints that limit decentralized scalability and are currently not well understood. In this paper, we model the economic incentives for participants in payment channel networks. We provide the first formal model of payment channel economics and analyze how the cheapest path can be found. Additionally, our simulation assesses the long-term evolution of a payment channel network. We find that even for small routing fees, sometimes it is cheaper to settle the transaction directly on the blockchain.
labels: cs.CE, cs.CR
__index_level_0__: 84,084
id: 2212.00742
title: New Probabilistic-Dynamic Multi-Method Ensembles for Optimization based on the CRO-SL
In this paper we propose new probabilistic and dynamic (adaptive) strategies to create multi-method ensembles based on the Coral Reefs Optimization with Substrate Layers (CRO-SL) algorithm. The CRO-SL is an evolutionary-based ensemble approach, able to combine different search procedures within a single population. In this work we discuss two different probabilistic strategies to improve the algorithm. First, we defined the Probabilistic CRO-SL (PCRO-SL), which substitutes the substrates in the CRO-SL population by {\em tags} associated with each individual. Each tag represents a different operator which will modify the individual in the reproduction phase. In each generation of the algorithm, the tags are randomly assigned to the individuals with a similar probability, obtaining this way an ensemble with a more intense change in the application of different operators to a given individual than the original CRO-SL. The second strategy discussed in this paper is the Dynamical Probabilistic CRO-SL (DPCRO-SL), in which the probability of tag assignment is modified during the evolution of the algorithm, depending on the quality of the solutions generated in each substrate. Thus, the best substrates in the search process will be assigned a higher probability than those which showed a worse performance during the search. We test the performance of the proposed probabilistic and dynamic ensembles in different optimization problems, including benchmark functions and a real application of wind turbine layout optimization, comparing the results obtained with those of existing algorithms in the literature.
labels: cs.AI, cs.NE
__index_level_0__: 334,173
id: 2205.14846
title: Precise Learning Curves and Higher-Order Scaling Limits for Dot Product Kernel Regression
As modern machine learning models continue to advance the computational frontier, it has become increasingly important to develop precise estimates for expected performance improvements under different model and data scaling regimes. Currently, theoretical understanding of the learning curves that characterize how the prediction error depends on the number of samples is restricted to either large-sample asymptotics ($m\to\infty$) or, for certain simple data distributions, to the high-dimensional asymptotics in which the number of samples scales linearly with the dimension ($m\propto d$). There is a wide gulf between these two regimes, including all higher-order scaling relations $m\propto d^r$, which are the subject of the present paper. We focus on the problem of kernel ridge regression for dot-product kernels and present precise formulas for the mean of the test error, bias, and variance, for data drawn uniformly from the sphere with isotropic random labels in the $r$th-order asymptotic scaling regime $m\to\infty$ with $m/d^r$ held constant. We observe a peak in the learning curve whenever $m \approx d^r/r!$ for any integer $r$, leading to multiple sample-wise descent and nontrivial behavior at multiple scales.
labels: cs.LG
__index_level_0__: 299,512
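The learning-curve peak locations m ≈ d^r / r! stated in the abstract above are easy to tabulate. A small illustrative helper (hypothetical, not from the paper):

```python
from math import factorial

def peak_sample_sizes(d, r_max):
    """Sample sizes m ≈ d**r / r! at which the dot-product-kernel learning
    curve peaks, for integer r = 1 .. r_max (floored for readability)."""
    return [d**r // factorial(r) for r in range(1, r_max + 1)]

# For d = 100 input dimensions, peaks near m = 100, 5000, and ~166k samples.
print(peak_sample_sizes(100, 3))  # → [100, 5000, 166666]
```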
id: 2103.07302
title: Siamese Infrared and Visible Light Fusion Network for RGB-T Tracking
Due to the different photosensitive properties of infrared and visible light, the registered RGB-T image pairs shot in the same scene exhibit quite different characteristics. This paper proposes a siamese infrared and visible light fusion network (SiamIVFN) for RGB-T image-based tracking. SiamIVFN contains two main subnetworks: a complementary-feature-fusion network (CFFN) and a contribution-aggregation network (CAN). CFFN utilizes a two-stream multilayer convolutional structure whose filters for each layer are partially coupled to fuse the features extracted from infrared images and visible light images. CFFN is a feature-level fusion network, which can cope with the misalignment of the RGB-T image pairs. Through adaptively calculating the contributions of infrared and visible light features obtained from CFFN, CAN makes the tracker robust under various light conditions. Experiments on two RGB-T tracking benchmark datasets demonstrate that the proposed SiamIVFN has achieved state-of-the-art performance. SiamIVFN runs at 147.6 FPS, making it the fastest RGB-T fusion tracker to date.
labels: cs.CV
__index_level_0__: 224,562
id: 2010.14881
title: Medical Deep Learning -- A Systematic Meta-Review
Deep learning (DL) has remarkably impacted several different scientific disciplines over the last few years. E.g., in image processing and analysis, DL algorithms were able to outperform other cutting-edge methods. Additionally, DL has delivered state-of-the-art results in tasks like autonomous driving, outclassing previous attempts. There are even instances where DL outperformed humans, for example with object recognition and gaming. DL is also showing vast potential in the medical domain. With the collection of large quantities of patient records and data, and a trend towards personalized treatments, there is a great need for automated and reliable processing and analysis of health information. Patient data is not only collected in clinical centers, like hospitals and private practices, but also by mobile healthcare apps or online websites. The abundance of collected patient data and the recent growth in the DL field has resulted in a large increase in research efforts. In Q2/2020, the search engine PubMed returned already over 11,000 results for the search term 'deep learning', and around 90% of these publications are from the last three years. However, even though PubMed represents the largest search engine in the medical field, it does not cover all medical-related publications. Hence, a complete overview of the field of 'medical deep learning' is almost impossible to obtain and acquiring a full overview of medical sub-fields is becoming increasingly more difficult. Nevertheless, several review and survey articles about medical DL have been published within the last few years. They focus, in general, on specific medical scenarios, like the analysis of medical images containing specific pathologies. With these surveys as a foundation, the aim of this article is to provide the first high-level, systematic meta-review of medical DL surveys.
labels: cs.LG, cs.CV
__index_level_0__: 203,607
id: 2206.15274
title: Augment like there's no tomorrow: Consistently performing neural networks for medical imaging
Deep neural networks have achieved impressive performance in a wide variety of medical imaging tasks. However, these models often fail on data not used during training, such as data originating from a different medical centre. How to recognize models suffering from this fragility, and how to design robust models are the main obstacles to clinical adoption. Here, we present general methods to identify causes for model generalisation failures and how to circumvent them. First, we use $\textit{distribution-shifted datasets}$ to show that models trained with current state-of-the-art methods are highly fragile to variability encountered in clinical practice, and then develop a $\textit{strong augmentation}$ strategy to address this fragility. Distribution-shifted datasets allow us to discover this fragility, which can otherwise remain undetected after validation against multiple external datasets. Strong augmentation allows us to train robust models achieving consistent performance under shifts from the training data distribution. Importantly, we demonstrate that strong augmentation yields biomedical imaging models which retain high performance when applied to real-world clinical data. Our results pave the way for the development and evaluation of reliable and robust neural networks in clinical practice.
labels: cs.CV
__index_level_0__: 305,540
id: 2102.08866
title: IoTDevID: A Behavior-Based Device Identification Method for the IoT
Device identification is one way to secure a network of IoT devices, whereby devices identified as suspicious can subsequently be isolated from a network. In this study, we present a machine learning-based method, IoTDevID, that recognizes devices through characteristics of their network packets. As a result of using a rigorous feature analysis and selection process, our study offers a generalizable and realistic approach to modelling device behavior, achieving high predictive accuracy across two public datasets. The model's underlying feature set is shown to be more predictive than existing feature sets used for device identification, and is shown to generalize to data unseen during the feature selection process. Unlike most existing approaches to IoT device identification, IoTDevID is able to detect devices using non-IP and low-energy protocols.
labels: cs.AI, cs.CR, Other
__index_level_0__: 220,594
id: 1612.01459
title: Approximate Support Recovery of Atomic Line Spectral Estimation: A Tale of Resolution and Precision
This work investigates the parameter estimation performance of super-resolution line spectral estimation using atomic norm minimization. The focus is on analyzing the algorithm's accuracy of inferring the frequencies and complex magnitudes from noisy observations. When the Signal-to-Noise Ratio is reasonably high and the true frequencies are separated by $O(\frac{1}{n})$, the atomic norm estimator is shown to localize the correct number of frequencies, each within a neighborhood of size $O(\sqrt{{\log n}/{n^3}} \sigma)$ of one of the true frequencies. Here $n$ is half the number of temporal samples and $\sigma^2$ is the Gaussian noise variance. The analysis is based on a primal-dual witness construction procedure. The obtained error bound matches the Cram\'er-Rao lower bound up to a logarithmic factor. The relationship between resolution (separation of frequencies) and precision or accuracy of the estimator is highlighted. Our analysis also reveals that the atomic norm minimization can be viewed as a convex way to solve a $\ell_1$-norm regularized, nonlinear and nonconvex least-squares problem to global optimality.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
65,090
1807.07784
Model Agnostic Saliency for Weakly Supervised Lesion Detection from Breast DCE-MRI
There is a heated debate on how to interpret the decisions provided by deep learning models (DLM), where the main approaches rely on the visualization of salient regions to interpret the DLM classification process. However, these approaches generally fail to satisfy three conditions for the problem of lesion detection from medical images: 1) for images with lesions, all salient regions should represent lesions, 2) for images containing no lesions, no salient region should be produced, and 3) lesions are generally small with relatively smooth borders. We propose a new model-agnostic paradigm to interpret DLM classification decisions supported by a novel definition of saliency that incorporates the conditions above. Our model-agnostic 1-class saliency detector (MASD) is tested on weakly supervised breast lesion detection from DCE-MRI, achieving state-of-the-art detection accuracy when compared to current visualization methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
103,383
1501.00437
Computational Feasibility of Clustering under Clusterability Assumptions
It is well known that most of the common clustering objectives are NP-hard to optimize. In practice, however, clustering is being routinely carried out. One approach for providing theoretical understanding of this seeming discrepancy is to come up with notions of clusterability that distinguish realistically interesting input data from worst-case data sets. The hope is that there will be clustering algorithms that are provably efficient on such 'clusterable' instances. In other words, the hope is that "Clustering is difficult only when it does not matter" (the CDNM thesis, for short). We believe that to some extent this may indeed be the case. This paper provides a survey of recent papers along this line of research and a critical evaluation of their results. Our bottom-line conclusion is that the CDNM thesis is still far from being formally substantiated. We start by discussing which requirements should be met in order to provide formal support for the validity of the CDNM thesis. In particular, we list some implied requirements for notions of clusterability. We then examine existing results in view of those requirements and outline some research challenges and open questions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
38,994
1902.00236
Natural and Adversarial Error Detection using Invariance to Image Transformations
We propose an approach to distinguish between correct and incorrect image classifications. Our approach can detect misclassifications which either occur $\it{unintentionally}$ ("natural errors"), or due to $\it{intentional~adversarial~attacks}$ ("adversarial errors"), both in a single $\it{unified~framework}$. Our approach is based on the observation that correctly classified images tend to exhibit robust and consistent classifications under certain image transformations (e.g., horizontal flip, small image translation, etc.). In contrast, incorrectly classified images (whether due to adversarial errors or natural errors) tend to exhibit large variations in classification results under such transformations. Our approach does not require any modifications or retraining of the classifier, hence can be applied to any pre-trained classifier. We further use state of the art targeted adversarial attacks to demonstrate that even when the adversary has full knowledge of our method, the adversarial distortion needed for bypassing our detector is $\it{no~longer~imperceptible~to~the~human~eye}$. Our approach obtains state-of-the-art results compared to previous adversarial detection methods, surpassing them by a large margin.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
120,361
1712.07019
PSO-Optimized Hopfield Neural Network-Based Multipath Routing for Mobile Ad-hoc Networks
A mobile ad-hoc network (MANET) is a dynamic collection of mobile computers without the need for any existing infrastructure. Nodes in a MANET act as hosts and routers. Designing robust routing algorithms for MANETs is a challenging task. Disjoint multipath routing protocols address this problem and increase the reliability, security and lifetime of the network. However, selecting an optimal multipath is an NP-complete problem. In this paper, a Hopfield neural network (HNN), whose parameters are optimized by the particle swarm optimization (PSO) algorithm, is proposed as the multipath routing algorithm. The link expiration time (LET) between each pair of nodes is used as the link reliability estimation metric. This approach can find either node-disjoint or link-disjoint paths in a single-phase route discovery. Simulation results confirm that the PSO-HNN routing algorithm has better performance than the backup path set selection algorithm (BPSA) in terms of path set reliability and the number of paths in the set.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
86,973
2010.14848
Flexible retrieval with NMSLIB and FlexNeuART
Our objective is to introduce to the NLP community an existing k-NN search library NMSLIB, a new retrieval toolkit FlexNeuART, as well as their integration capabilities. NMSLIB, while being one of the fastest k-NN search libraries, is quite generic and supports a variety of distance/similarity functions. Because the library relies on distance-based, structure-agnostic algorithms, it can be further extended by adding new distances. FlexNeuART is a modular, extendible and flexible toolkit for candidate generation in IR and QA applications, which supports mixing of classic and neural ranking signals. FlexNeuART can efficiently retrieve mixed dense and sparse representations (with weights learned from training data), which is achieved by extending NMSLIB. In contrast, other retrieval systems work with purely sparse representations (e.g., Lucene), purely dense representations (e.g., FAISS and Annoy), or only perform mixing at the re-ranking stage.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
203,594
1605.06820
Automated Resolution Selection for Image Segmentation
It is well-known in image processing that computational cost increases rapidly with the number and dimensions of the images to be processed. Several fields, such as medical imaging, routinely use numerous very large images, which might also be 3D and/or captured at several frequency bands, all adding to the computational expense. Multiresolution analysis is a method of increasing the efficiency of the segmentation process. One multiresolution approach is the coarse-to-fine segmentation strategy, whereby the segmentation starts at a coarse resolution and is then fine-tuned during subsequent steps. The starting resolution for segmentation is generally selected arbitrarily with no clear selection criteria. The research reported in this paper showed that starting from different resolutions for image segmentation results in different accuracies and computational times, even for images of the same category (depicting similar scenes or objects). An automated method for resolution selection for an input image would thus be beneficial. This paper introduces a framework for the automated selection of the best resolution for image segmentation. We propose a measure for defining the best resolution based on user/system criteria, offering a trade-off between accuracy and computation time. A learning approach is then introduced for the selection of the resolution, whereby extracted image features are mapped to the previously determined best resolution. In the learning process, class (i.e., resolution) distribution is generally imbalanced, making effective learning from the data difficult. Experiments conducted with three datasets using two different segmentation algorithms show that the resolutions selected through learning enable much faster segmentation than the original ones, while retaining at least the original accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
56,193
2108.08191
Masked Face Recognition Challenge: The InsightFace Track Report
During the COVID-19 coronavirus epidemic, almost everyone wears a facial mask, which poses a huge challenge to deep face recognition. In this workshop, we organize the Masked Face Recognition (MFR) challenge and focus on benchmarking deep face recognition methods under the existence of facial masks. In the MFR challenge, there are two main tracks: the InsightFace track and the WebFace260M track. For the InsightFace track, we manually collect a large-scale masked face test set with 7K identities. In addition, we also collect a children test set including 14K identities and a multi-racial test set containing 242K identities. By using these three test sets, we build up an online model testing system, which can give a comprehensive evaluation of face recognition models. To avoid data privacy problems, no test image is released to the public. As the challenge is still ongoing, we will keep updating the top-ranked solutions as well as this report on arXiv.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
251,169
2410.06442
MaD-Scientist: AI-based Scientist solving Convection-Diffusion-Reaction Equations Using Massive PINN-Based Prior Data
Large language models (LLMs), like ChatGPT, have shown that even when trained with noisy prior data, they can generalize effectively to new tasks through in-context learning (ICL) and pre-training techniques. Motivated by this, we explore whether a similar approach can be applied to scientific foundation models (SFMs). Our methodology is structured as follows: (i) we collect low-cost physics-informed neural network (PINN)-based approximated prior data in the form of solutions to partial differential equations (PDEs) constructed through an arbitrary linear combination of mathematical dictionaries; (ii) we utilize Transformer architectures with self- and cross-attention mechanisms to predict PDE solutions without knowledge of the governing equations in a zero-shot setting; (iii) we provide experimental evidence on the one-dimensional convection-diffusion-reaction equation, which demonstrates that pre-training remains robust even with approximated prior data, with only marginal impacts on test accuracy. Notably, this finding opens the path to pre-training SFMs with realistic, low-cost data instead of (or in conjunction with) numerical high-cost data. These results support the conjecture that SFMs can improve in a manner similar to LLMs, where fully cleaning the vast set of sentences crawled from the Internet is nearly impossible.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
496,202
1603.03682
Ultra Dense Small Cell Networks: Turning Density into Energy Efficiency
In this paper, a novel approach for joint power control and user scheduling is proposed for optimizing energy efficiency (EE), in terms of bits per unit energy, in ultra dense small cell networks (UDNs). Due to the severe coupling in interference, this problem is formulated as a dynamic stochastic game (DSG) between small cell base stations (SBSs). This game makes it possible to capture the dynamics of both the queues and the channel states of the system. To solve this game, assuming a large homogeneous UDN deployment, the problem is cast as a mean-field game (MFG) in which the MFG equilibrium is analyzed with the aid of low-complexity tractable partial differential equations. Exploiting the stochastic nature of the problem, user scheduling is formulated as a stochastic optimization problem and solved using the drift plus penalty (DPP) approach in the framework of Lyapunov optimization. Remarkably, it is shown that by weaving notions from Lyapunov optimization and mean-field theory, the proposed solution yields an equilibrium control policy per SBS which maximizes the network utility while ensuring users' quality-of-service. Simulation results show that the proposed approach achieves up to 70.7% gains in EE and 99.5% reductions in the network's outage probabilities compared to a baseline model which focuses on improving EE while attempting to satisfy the users' instantaneous quality-of-service requirements.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
53,138
2004.10053
Load on the Typical Poisson Voronoi Cell with Clustered User Distribution
In this letter, we characterize the distribution of the number of users associated with the typical base station (BS), termed the typical cell load, in a cellular network where the BSs are distributed as a homogeneous Poisson point process (PPP) and the users are distributed as an independent Poisson cluster process (PCP). In this setting, we derive the exact expressions for the first two moments of the typical cell load. Given the computational complexity of evaluating the higher moments, we derive easy-to-use approximations for the probability generating function (PGF) of the typical cell load, which can be inverted to obtain the probability mass function (PMF).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
173,529
2208.09686
YOLOV: Making Still Image Object Detectors Great at Video Object Detection
Video object detection (VID) is challenging because of the high variation of object appearance as well as the diverse deterioration in some frames. On the positive side, the detection in a certain frame of a video, compared with that in a still image, can draw support from other frames. Hence, how to aggregate features across different frames is pivotal to the VID problem. Most existing aggregation algorithms are customized for two-stage detectors. However, these detectors are usually computationally expensive due to their two-stage nature. This work proposes a simple yet effective strategy to address the above concerns, which costs marginal overheads with significant gains in accuracy. Concretely, different from the traditional two-stage pipeline, we select important regions after the one-stage detection to avoid processing massive low-quality candidates. Besides, we evaluate the relationship between a target frame and reference frames to guide the aggregation. We conduct extensive experiments and ablation studies to verify the efficacy of our design, and reveal its superiority over other state-of-the-art VID approaches in both effectiveness and efficiency. Our YOLOX-based model can achieve promising performance (\emph{e.g.}, 87.5\% AP50 at over 30 FPS on the ImageNet VID dataset on a single 2080Ti GPU), making it attractive for large-scale or real-time applications. The implementation is simple, and we have made the demo codes and models available at \url{https://github.com/YuHengsss/YOLOV}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
313,797
2010.00732
Extreme-SAX: Extreme Points Based Symbolic Representation for Time Series Classification
Time series classification is an important problem in data mining with several applications in different domains. Because time series data are usually high dimensional, dimensionality reduction techniques have been proposed as an efficient approach to lower their dimensionality. One of the most popular dimensionality reduction techniques of time series data is the Symbolic Aggregate Approximation (SAX), which is inspired by algorithms from text mining and bioinformatics. SAX is simple and efficient because it uses precomputed distances. The disadvantage of SAX is its inability to accurately represent important points in the time series. In this paper we present Extreme-SAX (E-SAX), which uses only the extreme points of each segment to represent the time series. E-SAX has exactly the same simplicity and efficiency of the original SAX, yet it gives better results in time series classification than the original SAX, as we show in extensive experiments on a variety of time series datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
198,374
1207.3809
Image Labeling on a Network: Using Social-Network Metadata for Image Classification
Large-scale image retrieval benchmarks invariably consist of images from the Web. Many of these benchmarks are derived from online photo sharing networks, like Flickr, which in addition to hosting images also provide a highly interactive social community. Such communities generate rich metadata that can naturally be harnessed for image classification and retrieval. Here we study four popular benchmark datasets, extending them with social-network metadata, such as the groups to which each image belongs, the comment thread associated with the image, who uploaded it, their location, and their network of friends. Since these types of data are inherently relational, we propose a model that explicitly accounts for the interdependencies between images sharing common properties. We model the task as a binary labeling problem on a network, and use structured learning techniques to learn model parameters. We find that social-network metadata are useful in a variety of classification tasks, in many cases outperforming methods based on image content.
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
17,506
2409.16765
MaViLS, a Benchmark Dataset for Video-to-Slide Alignment, Assessing Baseline Accuracy with a Multimodal Alignment Algorithm Leveraging Speech, OCR, and Visual Features
This paper presents a benchmark dataset for aligning lecture videos with corresponding slides and introduces a novel multimodal algorithm leveraging features from speech, text, and images. It achieves an average accuracy of 0.82 in comparison to SIFT (0.56) while being approximately 11 times faster. Using dynamic programming the algorithm tries to determine the optimal slide sequence. The results show that penalizing slide transitions increases accuracy. Features obtained via optical character recognition (OCR) contribute the most to a high matching accuracy, followed by image features. The findings highlight that audio transcripts alone provide valuable information for alignment and are beneficial if OCR data is lacking. Variations in matching accuracy across different lectures highlight the challenges associated with video quality and lecture style. The novel multimodal algorithm demonstrates robustness to some of these challenges, underscoring the potential of the approach.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
491,498
2403.05550
Teranga Go!: Carpooling Collaborative Consumption Community with multi-criteria hesitant fuzzy linguistic term set opinions to build confidence and trust
Classic Delphi and Fuzzy Delphi methods are used to test the content validity of data collection tools such as questionnaires. Fuzzy Delphi takes the opinions issued by judges from a linguistic perspective, reducing ambiguity in opinions by using fuzzy numbers. We propose an extension named the 2-Tuple Fuzzy Linguistic Delphi method to deal with scenarios in which judges show different expertise degrees, by using fuzzy multigranular semantics of the linguistic terms and obtaining intermediate and final results expressed by 2-tuple linguistic values. The key idea of our proposal is to validate the full questionnaire by means of the evaluation of its parts, defining the validity of each item as a Decision Making problem. Taking the opinion of experts, we measure the degree of consensus, the degree of consistency, and the linguistic score of each item, in order to detect those items that affect, positively or negatively, the quality of the instrument. Considering the real need to evaluate a b-learning educational experience with a consensual questionnaire, we present a Decision Making model for questionnaire validation that solves it. Additionally, we contribute to this consensus reaching problem by developing an online tool under the GPL v3 license. The software visualizes the collective valuations for each iteration and helps determine which parts of the questionnaire should be modified to reach a consensual solution.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
436,051
2302.14225
Weighted Sampling for Masked Language Modeling
Masked Language Modeling (MLM) is widely used to pretrain language models. The standard random masking strategy in MLM causes the pre-trained language models (PLMs) to be biased toward high-frequency tokens. Representation learning of rare tokens is poor and PLMs have limited performance on downstream tasks. To alleviate this frequency bias issue, we propose two simple and effective Weighted Sampling strategies for masking tokens based on the token frequency and training loss. We apply these two strategies to BERT and obtain Weighted-Sampled BERT (WSBERT). Experiments on the Semantic Textual Similarity benchmark (STS) show that WSBERT significantly improves sentence embeddings over BERT. Combining WSBERT with calibration methods and prompt learning further improves sentence embeddings. We also investigate fine-tuning WSBERT on the GLUE benchmark and show that Weighted Sampling also improves the transfer learning capability of the backbone PLM. We further analyze and provide insights into how WSBERT improves token embeddings.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
348,198
1802.00624
When can $l_p$-norm objective functions be minimized via graph cuts?
Techniques based on minimal graph cuts have become a standard tool for solving combinatorial optimization problems arising in image processing and computer vision applications. These techniques can be used to minimize objective functions written as the sum of a set of unary and pairwise terms, provided that the objective function is submodular. This can be interpreted as minimizing the $l_1$-norm of the vector containing all pairwise and unary terms. By raising each term to a power $p$, the same technique can also be used to minimize the $l_p$-norm of the vector. Unfortunately, the submodularity of an $l_1$-norm objective function does not guarantee the submodularity of the corresponding $l_p$-norm objective function. The contribution of this paper is to provide useful conditions under which an $l_p$-norm objective function is submodular for all $p\geq 1$, thereby identifying a large class of $l_p$-norm objective functions that can be minimized via minimal graph cuts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
89,446
2502.12842
Towards Adaptive Feedback with AI: Comparing the Feedback Quality of LLMs and Teachers on Experimentation Protocols
Effective feedback is essential for fostering students' success in scientific inquiry. With advancements in artificial intelligence, large language models (LLMs) offer new possibilities for delivering instant and adaptive feedback. However, this feedback often lacks the pedagogical validation provided by real-world practitioners. To address this limitation, our study evaluates and compares the feedback quality of LLM agents with that of human teachers and science education experts on student-written experimentation protocols. Four blinded raters, all professionals in scientific inquiry and science education, evaluated the feedback texts generated by 1) the LLM agent, 2) the teachers and 3) the science education experts using a five-point Likert scale based on six criteria of effective feedback: Feed Up, Feed Back, Feed Forward, Constructive Tone, Linguistic Clarity, and Technical Terminology. Our results indicate that LLM-generated feedback shows no significant difference to that of teachers and experts in overall quality. However, the LLM agent's performance lags in the Feed Back dimension, which involves identifying and explaining errors within the student's work context. Qualitative analysis highlighted the LLM agent's limitations in contextual understanding and in the clear communication of specific errors. Our findings suggest that combining LLM-generated feedback with human expertise can enhance educational practices by leveraging the efficiency of LLMs and the nuanced understanding of educators.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
535,073
2204.04390
Deep neural network goes lighter: A case study of deep compression techniques on automatic RF modulation recognition for Beyond 5G networks
Automatic RF modulation recognition is a primary signal intelligence (SIGINT) technique that serves as a physical layer authentication enabler and automated signal processing scheme for the beyond 5G and military networks. Most existing works rely on adopting deep neural network architectures to enable RF modulation recognition. The application of deep compression for the wireless domain, especially automatic RF modulation classification, is still in its infancy. Lightweight neural networks are key to sustain edge computation capability on resource-constrained platforms. In this letter, we provide an in-depth view of the state-of-the-art deep compression and acceleration techniques with an emphasis on edge deployment for beyond 5G networks. Finally, we present an extensive analysis of the representative acceleration approaches as a case study on automatic radar modulation classification and evaluate them in terms of the computational metrics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
290,638
2302.07856
Dictionary-based Phrase-level Prompting of Large Language Models for Machine Translation
Large language models (LLMs) demonstrate remarkable machine translation (MT) abilities via prompting, even though they were not explicitly trained for this task. However, even given the incredible quantities of data they are trained on, LLMs can struggle to translate inputs with rare words, which are common in low resource or domain transfer scenarios. We show that LLM prompting can provide an effective solution for rare words as well, by using prior knowledge from bilingual dictionaries to provide control hints in the prompts. We propose a novel method, DiPMT, that provides a set of possible translations for a subset of the input words, thereby enabling fine-grained phrase-level prompted control of the LLM. Extensive experiments show that DiPMT outperforms the baseline both in low-resource MT, as well as for out-of-domain MT. We further provide a qualitative analysis of the benefits and limitations of this approach, including the overall level of controllability that is achieved.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
345,850
1809.05495
Auto-tuning Distributed Stream Processing Systems using Reinforcement Learning
Fine-tuning distributed systems is considered to be a craft, relying on intuition and experience. This becomes even more challenging when the systems need to react in near real time, as streaming engines have to do to maintain pre-agreed service quality metrics. In this article, we present an automated approach that builds on a combination of supervised and reinforcement learning methods to recommend the most appropriate lever configurations based on previous load. With this, streaming engines can be automatically tuned without requiring a human to determine the right way and proper time to deploy them. This opens the door to new configurations that are not being applied today since the complexity of managing these systems has surpassed the abilities of human experts. We show how reinforcement learning systems can find substantially better configurations in less time than their human counterparts and adapt to changing workloads.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
true
107,802
2405.19283
Programmable Motion Generation for Open-Set Motion Control Tasks
Character animation in real-world scenarios necessitates a variety of constraints, such as trajectories, key-frames, interactions, etc. Existing methodologies typically treat a single constraint or a finite set of constraints as separate control tasks. They are often specialized, and the tasks they address are rarely extendable or customizable. We categorize these as solutions to the close-set motion control problem. In response to the complexity of practical motion control, we propose and attempt to solve the open-set motion control problem. This problem is characterized by an open and fully customizable set of motion control tasks. To address this, we introduce a new paradigm, programmable motion generation. In this paradigm, any given motion control task is broken down into a combination of atomic constraints. These constraints are then programmed into an error function that quantifies the degree to which a motion sequence adheres to them. We utilize a pre-trained motion generation model and optimize its latent code to minimize the error function of the generated motion. Consequently, the generated motion not only inherits the prior of the generative model but also satisfies the required constraints. Experiments show that we can generate high-quality motions when addressing a wide range of unseen tasks. These tasks encompass motion control by motion dynamics, geometric constraints, physical laws, interactions with scenes, objects or the character's own body parts, etc. All of these are achieved in a unified approach, without the need for ad-hoc paired training data collection or specialized network designs. During the programming of novel tasks, we observed the emergence of new skills beyond those of the prior model. With the assistance of large language models, we also achieved automatic programming. We hope that this work will pave the way for the motion control of general AI agents.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
458,823
1901.00326
Plugin Networks for Inference under Partial Evidence
In this paper, we propose a novel method to incorporate partial evidence in the inference of deep convolutional neural networks. Contrary to the existing, top performing methods, which either iteratively modify the input of the network or exploit external label taxonomy to take the partial evidence into account, we add separate network modules ("Plugin Networks") to the intermediate layers of a pre-trained convolutional network. The goal of these modules is to incorporate additional signal, i.e., information about known labels, into the inference procedure and adjust the predicted output accordingly. Since the attached plugins have a simple structure, consisting of only fully connected layers, the computational cost of training and inference is drastically reduced. At the same time, the proposed architecture allows information about known labels to be propagated directly to the intermediate layers to improve the final representation. Extensive evaluation of the proposed method confirms that our Plugin Networks outperform the state-of-the-art in a variety of tasks, including scene categorization, multi-label image annotation, and semantic segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
117,748
2408.09919
Long-Tail Temporal Action Segmentation with Group-wise Temporal Logit Adjustment
Procedural activity videos often exhibit a long-tailed action distribution due to varying action frequencies and durations. However, state-of-the-art temporal action segmentation methods overlook the long tail and fail to recognize tail actions. Existing long-tail methods make class-independent assumptions and struggle to identify tail classes when applied to temporal segmentation frameworks. This work proposes a novel group-wise temporal logit adjustment (G-TLA) framework that combines a group-wise softmax formulation with activity information and action ordering for logit adjustment. The proposed framework significantly improves tail-action segmentation without any performance loss on head actions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
481,646
2306.01697
MutateNN: Mutation Testing of Image Recognition Models Deployed on Hardware Accelerators
The increased utilization of Artificial Intelligence (AI) solutions brings with it inherent risks, such as misclassification and sub-optimal execution time performance, due to errors introduced in their deployment infrastructure because of problematic configuration and software faults. On top of that, AI methods such as Deep Neural Networks (DNNs) are utilized to perform demanding, resource-intensive and even safety-critical tasks, and in order to effectively increase the performance of the DNN models deployed, a variety of Machine Learning (ML) compilers have been developed, enabling compatibility of DNNs with a variety of hardware acceleration devices, such as GPUs and TPUs. Furthermore, the correctness of the compilation process should be verified. In order to allow developers and researchers to explore the robustness of DNN models deployed on different hardware accelerators via ML compilers, in this paper we propose MutateNN, a tool that provides mutation testing and model analysis features in the context of deployment on different hardware accelerators. To demonstrate the capabilities of MutateNN, we focus on the image recognition domain by applying mutation testing to 7 well-established models utilized for image classification. We introduce 21 mutations across 6 different categories, and deploy our mutants on 4 different hardware acceleration devices of varying capabilities. Our results indicate that the models are robust to changes related to layer modifications and arithmetic operators, while presenting discrepancies of up to 90.3% in mutants related to conditional operators. We also observed unexpectedly severe performance degradation on mutations related to arithmetic types of variables, leading the mutants to produce the same classifications for all dataset inputs.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
true
370,550
2001.10741
Extreme Algorithm Selection With Dyadic Feature Representation
Algorithm selection (AS) deals with selecting an algorithm from a fixed set of candidate algorithms most suitable for a specific instance of an algorithmic problem, e.g., choosing solvers for SAT problems. Benchmark suites for AS usually comprise candidate sets consisting of at most tens of algorithms, whereas in combined algorithm selection and hyperparameter optimization problems the number of candidates becomes intractable, impeding the learning of effective meta-models and thus requiring costly online performance evaluations. Therefore, here we propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms, facilitating meta learning. We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation in which both problem instances and algorithms are described. We find the latter to improve significantly over the current state of the art in various metrics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,896
2202.12916
Incremental Inference on Higher-Order Probabilistic Graphical Models Applied to Constraint Satisfaction Problems
Probabilistic graphical models (PGMs) are tools for reasoning about complex probabilistic relationships. However, suboptimal PGM structures are primarily used in practice. This dissertation presents three contributions to the PGM literature. The first is a comparison between factor graphs and cluster graphs on graph colouring problems such as Sudokus, indicating a significant advantage for preferring cluster graphs. The second is an application of cluster graphs to a practical problem in cartography: land cover classification boosting. The third is a PGM formulation for constraint satisfaction problems and an algorithm called purge-and-merge to solve problems too complex for traditional PGMs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
282,402
1105.1697
Vine copulas as a mean for the construction of high dimensional probability distribution associated to a Markov Network
Building higher-dimensional copulas is generally recognized as a difficult problem. Regular-vines using bivariate copulas provide a flexible class of high-dimensional dependency models. In large dimensions, the drawback of the model is the exponentially increasing complexity. Recognizing some of the conditional independences is a possibility for reducing the number of levels of the pair-copula decomposition, and hence to simplify its construction (Aas et al., 2009). The idea of using conditional independences was already pursued under elliptical copula assumptions (Hanea, Kurowicka and Cooke, 2006; Kurowicka and Cooke, 2002) and in the case of DAGs in a recent work (Bauer, Czado and Klein, 2011). We provide a method which uses some of the conditional independences encoded by the Markov network underlying the variables. We give a theorem which, under some graph conditions, makes it possible to derive a pair-copula decomposition of the probability density function associated with a Markov network. As the underlying Markov network is usually unknown, we first have to discover it from the sample data. Using our results published in Szantai and Kovacs (2008) and Kovacs and Szantai (2010a), we will show how to derive a multidimensional copula model exploiting the information on conditional independences hidden in the sample data.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,296
0910.2039
Higher coordination with less control - A result of information maximization in the sensorimotor loop
This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which has as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, non-communicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to a higher coordinated behavior of the physically connected robots compared to a maximization per robot. Another focus of this paper is the analysis of the effect of the robot chain length on the overall behavior of the robots. It will be shown that longer chains with less capable controllers outperform those of shorter length and more complex controllers. The reason is found and discussed in the information-geometric interpretation of the learning process.
false
false
false
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
4,710
cs/0310009
On Interference of Signals and Generalization in Feedforward Neural Networks
This paper studies how the generalization ability of neurons can be affected by mutual processing of different signals. This study is done on the basis of a feedforward artificial neural network. The mutual processing of signals can possibly be a good model of patterns in a set generalized by a neural network and in effect may improve generalization. In this paper it is discussed that the interference may also cause a highly random generalization. Adaptive activation functions are discussed as a way of reducing that type of generalization. A test of a feedforward neural network is performed that shows the discussed random generalization.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
537,998
1810.01008
Learning Hash Codes via Hamming Distance Targets
We present a powerful new loss function and training scheme for learning binary hash codes with any differentiable model and similarity function. Our loss function improves over prior methods by using log likelihood loss on top of an accurate approximation for the probability that two inputs fall within a Hamming distance target. Our novel training scheme obtains a good estimate of the true gradient by better sampling inputs and evaluating loss terms between all pairs of inputs in each minibatch. To fully leverage the resulting hashes, we use multi-indexing. We demonstrate that these techniques provide large improvements on similarity search tasks. We report the best results to date on competitive information retrieval tasks for ImageNet and SIFT 1M, improving MAP from 73% to 84% and reducing query cost by a factor of 2-8, respectively.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
109,305
2406.05688
Peer Review as A Multi-Turn and Long-Context Dialogue with Role-Based Interactions
Large Language Models (LLMs) have demonstrated wide-ranging applications across various fields and have shown significant potential in the academic peer-review process. However, existing applications are primarily limited to static review generation based on submitted papers, which fail to capture the dynamic and iterative nature of real-world peer reviews. In this paper, we reformulate the peer-review process as a multi-turn, long-context dialogue, incorporating distinct roles for authors, reviewers, and decision makers. We construct a comprehensive dataset containing over 26,841 papers with 92,017 reviews collected from multiple sources, including top-tier conferences and prestigious journals. This dataset is meticulously designed to facilitate the applications of LLMs for multi-turn dialogues, effectively simulating the complete peer-review process. Furthermore, we propose a series of metrics to evaluate the performance of LLMs for each role under this reformulated peer-review setting, ensuring fair and comprehensive evaluations. We believe this work provides a promising perspective on enhancing the LLM-driven peer-review process by incorporating dynamic, role-based interactions. It aligns closely with the iterative and interactive nature of real-world academic peer review, offering a robust foundation for future research and development in this area. We open-source the dataset at https://github.com/chengtan9907/ReviewMT.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
462,249
2003.07711
$F$, $B$, Alpha Matting
Cutting out an object and estimating its opacity mask, known as image matting, is a key task in many image editing applications. Deep learning approaches have made significant progress by adapting the encoder-decoder architecture of segmentation networks. However, most of the existing networks only predict the alpha matte, and post-processing methods must then be used to recover the original foreground and background colours in the transparent regions. Recently, two methods have shown improved results by also estimating the foreground colours, but at a significant computational and memory cost. In this paper, we propose a low-cost modification to alpha matting networks to also predict the foreground and background colours. We study variations of the training regime and explore a wide range of existing and novel loss functions for the joint prediction. Our method achieves state-of-the-art performance on the Adobe Composition-1k dataset for alpha matte and composite colour quality. It is also the current best-performing method on the alphamatting.com online evaluation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
168,516
1706.02141
How Important is Syntactic Parsing Accuracy? An Empirical Evaluation on Rule-Based Sentiment Analysis
Syntactic parsing, the process of obtaining the internal structure of sentences in natural languages, is a crucial task for artificial intelligence applications that need to extract meaning from natural language text or speech. Sentiment analysis is one example of application for which parsing has recently proven useful. In recent years, there have been significant advances in the accuracy of parsing algorithms. In this article, we perform an empirical, task-oriented evaluation to determine how parsing accuracy influences the performance of a state-of-the-art rule-based sentiment analysis system that determines the polarity of sentences from their parse trees. In particular, we evaluate the system using four well-known dependency parsers, including both current models with state-of-the-art accuracy and less accurate models which, however, require fewer computational resources. The experiments show that all of the parsers produce similarly good results in the sentiment analysis task, without their accuracy having any relevant influence on the results. Since parsing is currently a task with a relatively high computational cost that varies strongly between algorithms, this suggests that sentiment analysis researchers and users should prioritize speed over accuracy when choosing a parser; and parsing researchers should investigate models that improve speed further, even at some cost to accuracy.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
74,921
2310.13566
Retrieval-Augmented Neural Response Generation Using Logical Reasoning and Relevance Scoring
Constructing responses in task-oriented dialogue systems typically relies on information sources such as the current dialogue state or external databases. This paper presents a novel approach to knowledge-grounded response generation that combines retrieval-augmented language models with logical reasoning. The approach revolves around a knowledge graph representing the current dialogue state and background information, and proceeds in three steps. The knowledge graph is first enriched with logically derived facts inferred using probabilistic logical programming. A neural model is then employed at each turn to score the conversational relevance of each node and edge of this extended graph. Finally, the elements with the highest relevance scores are converted to a natural language form, and are integrated into the prompt for the neural conversational model employed to generate the system response. We investigate the benefits of the proposed approach on two datasets (KVRET and GraphWOZ) along with a human evaluation. Experimental results show that the combination of (probabilistic) logical reasoning with conversational relevance scoring does increase both the factuality and fluency of the responses.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
401,493
2009.09822
TODS: An Automated Time Series Outlier Detection System
We present TODS, an automated Time Series Outlier Detection System for research and industrial applications. TODS is a highly modular system that supports easy pipeline construction. The basic building block of TODS is the primitive, which is an implementation of a function with hyperparameters. TODS currently supports 70 primitives, including data processing, time series processing, feature analysis, detection algorithms, and a reinforcement module. Users can freely construct a pipeline using these primitives and perform end-to-end outlier detection with the constructed pipeline. TODS provides a Graphical User Interface (GUI), where users can flexibly design a pipeline with drag-and-drop. Moreover, a data-driven searcher is provided to automatically discover the most suitable pipelines given a dataset. TODS is released under Apache 2.0 license at https://github.com/datamllab/tods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
196,702
2201.13215
The Curvature Effect in Gaussian Random Fields
Random field models are mathematical structures used in the study of stochastic complex systems. In this paper, we compute the shape operator of Gaussian random field manifolds using the first and second fundamental forms (Fisher information matrices). Using Markov Chain Monte Carlo techniques, we simulate the dynamics of these random fields and compute the Gaussian curvature of the parametric space, analyzing how this quantity changes along phase transitions. During the simulation, we have observed an unexpected phenomenon that we called the \emph{curvature effect}, which indicates that a highly asymmetric geometric deformation happens in the underlying parametric space when there is a significant increase or decrease in the system's entropy. This asymmetric pattern relates to the emergence of hysteresis, leading to an intrinsic arrow of time along the dynamics.
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
277,913
2004.11598
Deep 3D Portrait from a Single Image
In this paper, we present a learning-based approach for recovering the 3D geometry of human head from a single portrait image. Our method is learned in an unsupervised manner without any ground-truth 3D data. We represent the head geometry with a parametric 3D face model together with a depth map for other head regions including hair and ear. A two-step geometry learning scheme is proposed to learn 3D head reconstruction from in-the-wild face images, where we first learn face shape on single images using self-reconstruction and then learn hair and ear geometry using pairs of images in a stereo-matching fashion. The second step is based on the output of the first to not only improve the accuracy but also ensure the consistency of overall head geometry. We evaluate the accuracy of our method both in 3D and with pose manipulation tasks on 2D images. We alter pose based on the recovered geometry and apply a refinement network trained with adversarial learning to ameliorate the reprojected images and translate them to the real image domain. Extensive evaluations and comparison with previous methods show that our new method can produce high-fidelity 3D head geometry and head pose manipulation results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
173,955
1906.11890
DVDnet: A Fast Network for Deep Video Denoising
In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Previous neural network based approaches to video denoising have been unsuccessful as their performance cannot compete with the performance of patch-based methods. However, our approach outperforms other patch-based competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as a small memory footprint, and the ability to handle a wide range of noise levels with a single network model. The combination between its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications. We compare our method with different state-of-the-art algorithms, both visually and with respect to objective quality metrics. The experiments show that our algorithm compares favorably to other state-of-the-art methods. Video examples, code and models are publicly available at \url{https://github.com/m-tassano/dvdnet}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
136,780
2301.09809
Low-Resource Compositional Semantic Parsing with Concept Pretraining
Semantic parsing plays a key role in digital voice assistants such as Alexa, Siri, and Google Assistant by mapping natural language to structured meaning representations. When we want to improve the capabilities of a voice assistant by adding a new domain, the underlying semantic parsing model needs to be retrained using thousands of annotated examples from the new domain, which is time-consuming and expensive. In this work, we present an architecture to perform such domain adaptation automatically, with only a small amount of metadata about the new domain and without any new training data (zero-shot) or with very few examples (few-shot). We use a base seq2seq (sequence-to-sequence) architecture and augment it with a concept encoder that encodes intent and slot tags from the new domain. We also introduce a novel decoder-focused approach to pretrain seq2seq models to be concept aware using Wikidata and use it to help our model learn important concepts and perform well in low-resource settings. We report few-shot and zero-shot results for compositional semantic parsing on the TOPv2 dataset and show that our model outperforms prior approaches in few-shot settings for the TOPv2 and SNIPS datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
341,612
2403.15316
Ultrasound Imaging based on the Variance of a Diffusion Restoration Model
Despite today's prevalence of ultrasound imaging in medicine, ultrasound signal-to-noise ratio is still affected by several sources of noise and artefacts. Moreover, enhancing ultrasound image quality involves balancing concurrent factors like contrast, resolution, and speckle preservation. Recently, there has been progress in both model-based and learning-based approaches addressing the problem of ultrasound image reconstruction. Bringing the best from both worlds, we propose a hybrid reconstruction method combining an ultrasound linear direct model with a learning-based prior coming from a generative Denoising Diffusion model. More specifically, we rely on the unsupervised fine-tuning of a pre-trained Denoising Diffusion Restoration Model (DDRM). Given the nature of multiplicative noise inherent to ultrasound, this paper proposes an empirical model to characterize the stochasticity of diffusion reconstruction of ultrasound images, and shows the interest of its variance as an echogenicity map estimator. We conduct experiments on synthetic, in-vitro, and in-vivo data, demonstrating the efficacy of our variance imaging approach in achieving high-quality image reconstructions from single plane-wave acquisitions and in comparison to state-of-the-art methods. The code is available at: https://github.com/Yuxin-Zhang-Jasmine/DRUSvar
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
440,492
1305.0297
The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits
Wiring diagrams, as seen in digital circuits, can be nested hierarchically and thus have an aspect of self-similarity. We show that wiring diagrams form the morphisms of an operad $\mathcal{T}$, capturing this self-similarity. We discuss the algebra $\mathrm{Rel}$ of mathematical relations on $\mathcal{T}$, and in so doing use wiring diagrams as a graphical language with which to structure queries on relational databases. We give the example of circuit diagrams as a special case. We move on to show how plug-and-play devices and also recursion can be formulated in the operadic framework as well. Throughout we include many examples and figures.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
24,338
1106.0831
Optimal Real-time Spectrum Sharing between Cooperative Relay and Ad-hoc Networks
Optimization based spectrum sharing strategies have been widely studied. However, these strategies usually require a great amount of real-time computation and significant signaling delay, and thus are hard to fulfil in practical scenarios. This paper investigates optimal real-time spectrum sharing between a cooperative relay network (CRN) and a nearby ad-hoc network. Specifically, we optimize the spectrum access and resource allocation strategies of the CRN so that the average traffic collision time between the two networks can be minimized while maintaining a required throughput for the CRN. The development is first for a frame-level setting, and then is extended to an ergodic setting. For the latter setting, we propose an appealing optimal real-time spectrum sharing strategy via Lagrangian dual optimization. The proposed method only involves a small amount of real-time computation and negligible control delay, and thus is suitable for practical implementations. Simulation results are presented to demonstrate the efficiency of the proposed strategies.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
10,723
2403.10746
Vector search with small radiuses
In recent years, the dominant accuracy metric for vector search is the recall of a result list of fixed size (top-k retrieval), considering as ground truth the exact vector retrieval results. Although convenient to compute, this metric is distantly related to the end-to-end accuracy of a full system that integrates vector search. In this paper we focus on the common case where a hard decision needs to be taken depending on the vector retrieval results, for example, deciding whether a query image matches a database image or not. We solve this as a range search task, where all vectors within a certain radius from the query are returned. We show that the value of a range search result can be modeled rigorously based on the query-to-vector distance. This yields a metric for range search, RSM, that is both principled and easy to compute without running an end-to-end evaluation. We apply this metric to the case of image retrieval. We show that indexing methods that are adapted for top-k retrieval do not necessarily maximize the RSM. In particular, for inverted file based indexes, we show that visiting a limited set of clusters and encoding vectors compactly yields near optimal results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
438,331
2401.14081
Accelerating Fractional PINNs using Operational Matrices of Derivative
This paper presents a novel operational matrix method to accelerate the training of fractional Physics-Informed Neural Networks (fPINNs). Our approach involves a non-uniform discretization of the fractional Caputo operator, facilitating swift computation of fractional derivatives within Caputo-type fractional differential problems with $0<\alpha<1$. In this methodology, the operational matrix is precomputed, and during the training phase, automatic differentiation is replaced with a matrix-vector product. While our methodology is compatible with any network, we particularly highlight its successful implementation in PINNs, emphasizing the enhanced accuracy achieved when utilizing the Legendre Neural Block (LNB) architecture. LNB incorporates Legendre polynomials into the PINN structure, providing a significant boost in accuracy. The effectiveness of our proposed method is validated across diverse differential equations, including Delay Differential Equations (DDEs) and Systems of Differential Algebraic Equations (DAEs). To demonstrate its versatility, we extend the application of the method to systems of differential equations, specifically addressing nonlinear Pantograph fractional-order DDEs/DAEs. The results are supported by a comprehensive analysis of numerical outcomes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
423,965
2312.10300
Shot2Story: A New Benchmark for Comprehensive Understanding of Multi-shot Videos
A short clip of video may contain the progression of multiple events and an interesting story line. A human needs to capture both the event in every shot and associate the events to understand the story behind them. In this work, we present a new multi-shot video understanding benchmark Shot2Story with detailed shot-level captions, comprehensive video summaries and question-answering pairs. To facilitate better semantic understanding of videos, we provide captions for both visual signals and human narrations. We design several distinct tasks including single-shot video captioning, multi-shot video summarization, and multi-shot video question answering. Preliminary experiments show that generating a long and comprehensive video summary for multi-shot videos remains challenging. Nevertheless, the generated imperfect summaries can already achieve competitive performance on existing video understanding tasks such as video question-answering, promoting an under-explored setting of video understanding with detailed summaries.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
416,096
2101.06586
Auto4D: Learning to Label 4D Objects from Sequential Point Clouds
In the past few years we have seen great advances in object perception (particularly in 4D space-time dimensions) thanks to deep learning methods. However, they typically rely on large amounts of high-quality labels to achieve good performance, which often require time-consuming and expensive work by human annotators. To address this we propose an automatic annotation pipeline that generates accurate object trajectories in 3D space (i.e., 4D labels) from LiDAR point clouds. The key idea is to decompose the 4D object label into two parts: the object size in 3D that's fixed through time for rigid objects, and the motion path describing the evolution of the object's pose through time. Instead of generating a series of labels in one shot, we adopt an iterative refinement process where online generated object detections are tracked through time as the initialization. Given the cheap but noisy input, our model produces higher quality 4D labels by re-estimating the object size and smoothing the motion path, where the improvement is achieved by exploiting aggregated observations and motion cues over the entire trajectory. We validate the proposed method on a large-scale driving dataset and show a 25% reduction of human annotation efforts. We also showcase the benefits of our approach in the annotator-in-the-loop setting.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
215,775
1909.08021
Strategic Formation and Reliability of Supply Chain Networks
Supply chains are the backbone of the global economy. Disruptions to them can be costly. Centrally managed supply chains invest in ensuring their resilience. Decentralized supply chains, however, must rely upon the self-interest of their individual components to maintain the resilience of the entire chain. We examine the incentives that independent self-interested agents have in forming a resilient supply chain network in the face of production disruptions and competition. In our model, competing suppliers are subject to yield uncertainty (they deliver less than ordered) and congestion (lead-time uncertainty, or "soft" supply caps). Competing retailers must decide which suppliers to link to based on both price and reliability. In the presence of yield uncertainty only, the resulting supply chain networks are sparse. Retailers concentrate their links on a single supplier, counter to the idea that they should mitigate yield uncertainty by diversifying their supply base. This happens because retailers benefit from supply variance. It suggests that competition will amplify output uncertainty. When congestion is included as well, the resulting networks are denser and resemble the bipartite expander graphs that have been proposed in the supply chain literature, thereby providing the first example of endogenous formation of resilient supply chain networks, without resilience being explicitly encoded in payoffs. Finally, we show that a supplier's investments in improved yield can make it worse off. This happens because high production output saturates the market, which, in turn, lowers prices and profits for participants.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
145,840
1705.06950
The Kinetics Human Action Video Dataset
We describe the DeepMind Kinetics human action video dataset. The dataset contains 400 human action classes, with at least 400 video clips for each action. Each clip lasts around 10s and is taken from a different YouTube video. The actions are human focussed and cover a broad range of classes including human-object interactions such as playing instruments, as well as human-human interactions such as shaking hands. We describe the statistics of the dataset, how it was collected, and give some baseline performance figures for neural network architectures trained and tested for human action classification on this dataset. We also carry out a preliminary analysis of whether imbalance in the dataset leads to bias in the classifiers.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
73,712
2501.02735
Sequence Complementor: Complementing Transformers For Time Series Forecasting with Learnable Sequences
Since its introduction, the transformer has shifted the development trajectory of time series forecasting away from traditional models (e.g., RNN, MLP), which is attributed to its ability to capture global dependencies within temporal tokens. Follow-up studies have largely involved altering the tokenization and self-attention modules to better adapt Transformers to special challenges like non-stationarity, channel-wise dependency, and variable correlation in time series. However, after investigating several representative methods, we found that the expressive capability of the sequence representation is a key factor influencing Transformer performance in time series forecasting: there is an almost linear relationship between sequence representation entropy and mean squared error, with more diverse representations performing better. In this paper, we propose a novel attention mechanism with Sequence Complementors and prove its feasibility from an information-theoretic perspective, where these learnable sequences provide complementary information beyond the current input to feed attention. We further enhance the Sequence Complementors via a theoretically grounded diversification loss. Empirical evaluation on both long-term and short-term forecasting confirms their superiority over recent state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
522,605
2301.12178
MVKT-ECG: Efficient Single-lead ECG Classification on Multi-Label Arrhythmia by Multi-View Knowledge Transferring
The widespread emergence of smart devices for ECG has sparked demand for intelligent single-lead ECG-based diagnostic systems. However, it is challenging to develop a single-lead-based ECG interpretation model for multi-disease diagnosis due to the lack of some key disease information. In this work, we propose inter-lead Multi-View Knowledge Transferring of ECG (MVKT-ECG) to boost single-lead ECG's ability for multi-label disease diagnosis. This training strategy transfers superior disease knowledge from multiple different views of ECG (e.g., 12-lead ECG) to a single-lead-based ECG interpretation model, mining details in single-lead ECG signals that are easily overlooked by neural networks. MVKT-ECG uses this lead variety as a supervision signal within a teacher-student paradigm, where a teacher who observes multi-lead ECG educates a student who observes only single-lead ECG. Since the mutual disease information between the single-lead ECG and multi-lead ECG plays a key role in knowledge transferring, we present a new disease-aware Contrastive Lead-information Transferring (CLT) to improve the mutual disease information between the single-lead ECG and multi-lead ECG. Moreover, we modify traditional Knowledge Distillation into multi-label disease Knowledge Distillation (MKD) to make it applicable to multi-label disease diagnosis. Comprehensive experiments verify that MVKT-ECG has excellent performance in improving the diagnostic effect of single-lead ECG.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
342,431
2403.07746
Unleashing HyDRa: Hybrid Fusion, Depth Consistency and Radar for Unified 3D Perception
Low-cost, vision-centric 3D perception systems for autonomous driving have made significant progress in recent years, narrowing the gap to expensive LiDAR-based methods. The primary challenge in becoming a fully reliable alternative lies in robust depth prediction capabilities, as camera-based systems struggle with long detection ranges and adverse lighting and weather conditions. In this work, we introduce HyDRa, a novel camera-radar fusion architecture for diverse 3D perception tasks. Building upon the principles of dense BEV (Bird's Eye View)-based architectures, HyDRa introduces a hybrid fusion approach to combine the strengths of complementary camera and radar features in two distinct representation spaces. Our Height Association Transformer module leverages radar features already in the perspective view to produce more robust and accurate depth predictions. In the BEV, we refine the initial sparse representation by a Radar-weighted Depth Consistency. HyDRa achieves a new state-of-the-art for camera-radar fusion of 64.2 NDS (+1.8) and 58.4 AMOTA (+1.5) on the public nuScenes dataset. Moreover, our new semantically rich and spatially accurate BEV features can be directly converted into a powerful occupancy representation, beating all previous camera-based methods on the Occ3D benchmark by an impressive 3.7 mIoU. Code and models are available at https://github.com/phi-wol/hydra.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
437,011
1611.05490
Semantic Regularisation for Recurrent Image Annotation
The "CNN-RNN" design pattern is increasingly widely applied in a variety of image annotation tasks, including multi-label classification and captioning. Existing models use the weakly semantic CNN hidden layer or its transform as the image embedding that provides the interface between the CNN and RNN. This leaves the RNN overstretched with two jobs: predicting the visual concepts and modelling their correlations for generating structured annotation output. Importantly, this makes the end-to-end training of the CNN and RNN slow and ineffective due to the difficulty of back-propagating gradients through the RNN to train the CNN. We propose a simple modification to the design pattern that makes learning more effective and efficient. Specifically, we propose to use a semantically regularised embedding layer as the interface between the CNN and RNN. Regularising the interface can partially or completely decouple the learning problems, allowing each to be trained more effectively and making joint training much more efficient. Extensive experiments show that state-of-the-art performance is achieved on multi-label classification as well as image captioning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
64,028
2407.11734
Generating Multi-Modal and Multi-Attribute Single-Cell Counts with CFGen
Generative modeling of single-cell RNA-seq data has shown invaluable potential in community-driven tasks such as trajectory inference, batch effect removal and gene expression generation. However, most recent deep models generating synthetic single cells from noise operate on pre-processed continuous gene expression approximations, ignoring the inherently discrete and over-dispersed nature of single-cell data, which limits downstream applications and hinders the incorporation of robust noise models. Moreover, crucial aspects of deep-learning-based synthetic single-cell generation remain underexplored, such as controllable multi-modal and multi-label generation and its role in the performance enhancement of downstream tasks. This work presents Cell Flow for Generation (CFGen), a flow-based conditional generative model for multi-modal single-cell counts, which explicitly accounts for the discrete nature of the data. Our results suggest improved recovery of crucial biological data characteristics while accounting for novel generative tasks such as conditioning on multiple attributes and boosting rare cell type classification via data augmentation. By showcasing CFGen on a diverse set of biological datasets and settings, we provide evidence of its value to the fields of computational biology and deep generative models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
473,593
2406.10354
SigDiffusions: Score-Based Diffusion Models for Long Time Series via Log-Signature Embeddings
Score-based diffusion models have recently emerged as state-of-the-art generative models for a variety of data modalities. Nonetheless, it remains unclear how to adapt these models to generate long multivariate time series. Viewing a time series as the discretization of an underlying continuous process, we introduce SigDiffusion, a novel diffusion model operating on log-signature embeddings of the data. The forward and backward processes gradually perturb and denoise log-signatures preserving their algebraic structure. To recover a signal from its log-signature, we provide new closed-form inversion formulae expressing the coefficients obtained by expanding the signal in a given basis (e.g. Fourier or orthogonal polynomials) as explicit polynomial functions of the log-signature. Finally, we show that combining SigDiffusion with these inversion formulae results in highly realistic time series generation, competitive with the current state-of-the-art on various datasets of synthetic and real-world examples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
464,370
2302.09634
Magnitude Matters: Fixing SIGNSGD Through Magnitude-Aware Sparsification in the Presence of Data Heterogeneity
Communication overhead has become one of the major bottlenecks in the distributed training of deep neural networks. To alleviate the concern, various gradient compression methods have been proposed, and sign-based algorithms are of surging interest. However, SIGNSGD fails to converge in the presence of data heterogeneity, which is commonly observed in the emerging federated learning (FL) paradigm. Error feedback has been proposed to address the non-convergence issue. Nonetheless, it requires the workers to locally keep track of the compression errors, which renders it not suitable for FL since the workers may not participate in the training throughout the learning process. In this paper, we propose a magnitude-driven sparsification scheme, which addresses the non-convergence issue of SIGNSGD while further improving communication efficiency. Moreover, the local update scheme is further incorporated to improve the learning performance, and the convergence of the proposed method is established. The effectiveness of the proposed scheme is validated through experiments on Fashion-MNIST, CIFAR-10, and CIFAR-100 datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
346,503
2008.04126
Reasoning about Cardinal Directions between 3-Dimensional Extended Objects using Answer Set Programming
We propose a novel formal framework (called 3D-nCDC-ASP) to represent and reason about cardinal directions between extended objects in 3-dimensional (3D) space, using Answer Set Programming (ASP). 3D-nCDC-ASP extends Cardinal Directional Calculus (CDC) with a new type of default constraints, and nCDC-ASP to 3D. 3D-nCDC-ASP provides a flexible platform offering different types of reasoning: Nonmonotonic reasoning with defaults, checking consistency of a set of constraints on 3D cardinal directions between objects, explaining inconsistencies, and inferring missing CDC relations. We prove the soundness of 3D-nCDC-ASP, and illustrate its usefulness with applications. This paper is under consideration for acceptance in TPLP.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
191,143
2110.03774
Data-driven Tissue Mechanics with Polyconvex Neural Ordinary Differential Equations
Data-driven methods are becoming an essential part of computational mechanics due to their unique advantages over traditional material modeling. Deep neural networks are able to learn complex material response without the constraints of closed-form approximations. However, imposing the physics-based mathematical requirements that any material model must comply with is not straightforward for data-driven approaches. In this study, we use a novel class of neural networks, known as neural ordinary differential equations (N-ODEs), to develop data-driven material models that automatically satisfy polyconvexity of the strain energy function with respect to the deformation gradient, a condition needed for the existence of minimizers for boundary value problems in elasticity. We take advantage of the properties of ordinary differential equations to create monotonic functions that approximate the derivatives of the strain energy function with respect to the invariants of the right Cauchy-Green deformation tensor. The monotonicity of the derivatives guarantees the convexity of the energy. The N-ODE material model is able to capture synthetic data generated from closed-form material models, and it outperforms conventional models when tested against experimental data on skin, a highly nonlinear and anisotropic material. We also showcase the use of the N-ODE material model in finite element simulations. The framework is general and can be used to model a large class of materials. Here we focus on hyperelasticity, but polyconvex strain energies are a core building block for other problems in elasticity such as viscous and plastic deformations. We therefore expect our methodology to further enable data-driven methods in computational mechanics.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
259,625
2406.14973
LU2Net: A Lightweight Network for Real-time Underwater Image Enhancement
Computer vision techniques have empowered underwater robots to effectively undertake a multitude of tasks, including object tracking and path planning. However, underwater optical factors like light refraction and absorption present challenges to underwater vision, causing degradation of underwater images. A variety of underwater image enhancement methods have been proposed to improve the effectiveness of underwater vision perception. Nevertheless, for real-time vision tasks on underwater robots, it is necessary to overcome the challenges associated with algorithmic efficiency and real-time capabilities. In this paper, we introduce Lightweight Underwater Unet (LU2Net), a novel U-shape network designed specifically for real-time enhancement of underwater images. The proposed model incorporates axial depthwise convolution and the channel attention module, enabling it to significantly reduce computational demands and model parameters, thereby improving processing speed. Extensive experiments conducted on the dataset and real-world underwater robots demonstrate the exceptional performance and speed of the proposed model. It is capable of providing well-enhanced underwater images at a speed 8 times faster than the current state-of-the-art underwater image enhancement method. Moreover, LU2Net is able to handle real-time underwater video enhancement.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
466,563
1902.09023
Automatic ISP image quality tuning using non-linear optimization
An Image Signal Processor (ISP) comprises various blocks that reconstruct image sensor raw data into the final image consumed by the human visual system or computer vision applications. Each block typically has many tuning parameters due to the complexity of the operation. These need to be hand-tuned by Image Quality (IQ) experts, which takes a considerable amount of time. In this paper, we present automatic IQ tuning using non-linear optimization and automatic reference generation algorithms. The proposed method can produce high-quality IQ in minutes, compared with weeks of hand-tuned results by IQ experts. In addition, the proposed method can work with any algorithm without being aware of its specific implementation. It was found successful on multiple different processing blocks such as noise reduction, demosaic, and sharpening.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
122,320
2207.03697
End-to-End Binaural Speech Synthesis
In this work, we present an end-to-end binaural speech synthesis system that combines a low-bitrate audio codec with a powerful binaural decoder that is capable of accurate speech binauralization while faithfully reconstructing environmental factors like ambient noise or reverb. The network is a modified vector-quantized variational autoencoder, trained with several carefully designed objectives, including an adversarial loss. We evaluate the proposed system on an internal binaural dataset with objective metrics and a perceptual study. Results show that the proposed approach matches the ground truth data more closely than previous methods. In particular, we demonstrate the capability of the adversarial loss in capturing environment effects needed to create an authentic auditory scene.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
306,951
2012.10138
Resource-efficient DNNs for Keyword Spotting using Neural Architecture Search and Quantization
This paper introduces neural architecture search (NAS) for the automatic discovery of small models for keyword spotting (KWS) in limited resource environments. We employ a differentiable NAS approach to optimize the structure of convolutional neural networks (CNNs) to maximize the classification accuracy while minimizing the number of operations per inference. Using NAS only, we were able to obtain a highly efficient model with 95.4% accuracy on the Google speech commands dataset with 494.8 kB of memory usage and 19.6 million operations. Additionally, weight quantization is used to reduce the memory consumption even further. We show that weight quantization to low bit-widths (e.g. 1 bit) can be used without substantial loss in accuracy. By increasing the number of input features from 10 MFCC to 20 MFCC we were able to increase the accuracy to 96.3% at 340.1 kB of memory usage and 27.1 million operations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
212,266
2201.07725
Data-to-Value: An Evaluation-First Methodology for Natural Language Projects
Big data, i.e. collecting, storing and processing of data at scale, has recently become possible due to the arrival of clusters of commodity computers powered by application-level distributed parallel operating systems like HDFS/Hadoop/Spark, and such infrastructures have revolutionized data mining at scale. To make data mining projects succeed more consistently, several methodologies were developed (e.g. CRISP-DM, SEMMA, KDD), but these do not account for (1) very large scales of processing, (2) dealing with textual (unstructured) data, i.e. Natural Language Processing (NLP, "text analytics"), and (3) non-technical considerations (e.g. legal, ethical, project managerial aspects). To address these shortcomings, a new methodology, called "Data to Value" (D2V), is introduced, which is guided by a detailed catalog of questions in order to prevent a big data text analytics project team from becoming disconnected from the topic when facing the rather abstract box-and-arrow diagrams commonly associated with methodologies.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
276,115
2303.07717
HALOS: Hallucination-free Organ Segmentation after Organ Resection Surgery
The wide range of research in deep learning-based medical image segmentation pushed the boundaries in a multitude of applications. A clinically relevant problem that received less attention is the handling of scans with irregular anatomy, e.g., after organ resection. State-of-the-art segmentation models often lead to organ hallucinations, i.e., false-positive predictions of organs, which cannot be alleviated by oversampling or post-processing. Motivated by the increasing need to develop robust deep learning models, we propose HALOS for abdominal organ segmentation in MR images that handles cases after organ resection surgery. To this end, we combine missing organ classification and multi-organ segmentation tasks into a multi-task model, yielding a classification-assisted segmentation pipeline. The segmentation network learns to incorporate knowledge about organ existence via feature fusion modules. Extensive experiments on a small labeled test set and large-scale UK Biobank data demonstrate the effectiveness of our approach in terms of higher segmentation Dice scores and near-to-zero false positive prediction rate.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
351,359
1809.01557
Dynamically Context-Sensitive Time-Decay Attention for Dialogue Modeling
Spoken language understanding (SLU) is an essential component in conversational systems. Considering that contexts provide informative cues for better understanding, history can be leveraged for contextual SLU. However, most prior work only paid attention to the related content in history utterances and ignored the temporal information. In dialogues, it is intuitive that the most recent utterances are more important than the least recent ones, and time-aware attention should be in a decaying manner. Therefore, this paper allows the model to automatically learn a time-decay attention function where the attentional weights can be dynamically decided based on the content of each role's contexts, which effectively integrates both content-aware and time-aware perspectives and demonstrates remarkable flexibility to complex dialogue contexts. The experiments on the benchmark Dialogue State Tracking Challenge (DSTC4) dataset show that the proposed dynamically context-sensitive time-decay attention mechanisms significantly improve the state-of-the-art model for contextual understanding performance.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
106,832
2110.06933
Style-based quantum generative adversarial networks for Monte Carlo events
We propose and assess an alternative quantum generator architecture in the context of generative adversarial learning for Monte Carlo event generation, used to simulate particle physics processes at the Large Hadron Collider (LHC). We validate this methodology by implementing the quantum network on artificial data generated from known underlying distributions. The network is then applied to Monte Carlo-generated datasets of specific LHC scattering processes. The new quantum generator architecture leads to a generalization of the state-of-the-art implementations, achieving smaller Kullback-Leibler divergences even with shallow-depth networks. Moreover, the quantum generator successfully learns the underlying distribution functions even if trained with small training sample sets; this is particularly interesting for data augmentation applications. We deploy this novel methodology on two different quantum hardware architectures, trapped-ion and superconducting technologies, to test its hardware-independent viability.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
260,797
cmp-lg/9504033
Corpus Statistics Meet the Noun Compound: Some Empirical Results
A variety of statistical methods for noun compound analysis are implemented and compared. The results support two main conclusions. First, the use of conceptual association not only enables a broad coverage, but also improves the accuracy. Second, an analysis model based on dependency grammar is substantially more accurate than one based on deepest constituents, even though the latter is more prevalent in the literature.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,358
2408.03746
Flexible Bayesian Last Layer Models Using Implicit Priors and Diffusion Posterior Sampling
Bayesian Last Layer (BLL) models focus solely on uncertainty in the output layer of neural networks, demonstrating comparable performance to more complex Bayesian models. However, the use of Gaussian priors for last layer weights in BLL models limits their expressive capacity when faced with non-Gaussian, outlier-rich, or high-dimensional datasets. To address this shortfall, we introduce a novel approach that combines diffusion techniques and implicit priors for variational learning of Bayesian last layer weights. This method leverages implicit distributions for modeling weight priors in BLL, coupled with diffusion samplers for approximating true posterior predictions, thereby establishing a comprehensive Bayesian prior and posterior estimation strategy. By delivering an explicit and computationally efficient variational lower bound, our method aims to augment the expressive abilities of BLL models, enhancing model accuracy, calibration, and out-of-distribution detection proficiency. Through detailed exploration and experimental validation, we showcase the method's potential for improving predictive accuracy and uncertainty quantification while ensuring computational efficiency.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
479,137
2011.04364
SuperDeConFuse: A Supervised Deep Convolutional Transform based Fusion Framework for Financial Trading Systems
This work proposes a supervised multi-channel time-series learning framework for financial stock trading. Although many deep learning models have recently been proposed in this domain, most of them treat the stock trading time-series data as 2-D image data, whereas its true nature is 1-D time-series data. Since stock trading systems produce multi-channel data, the many existing techniques that treat them as 1-D time-series data do not suggest any way to effectively fuse the information carried by the multiple channels. To address both of these shortcomings, we propose an end-to-end supervised learning framework inspired by the previously established (unsupervised) convolution transform learning framework. Our approach consists of processing the data channels through separate 1-D convolution layers, then fusing the outputs with a series of fully-connected layers, and finally applying a softmax classification layer. The peculiarity of our framework, SuperDeConFuse (SDCF), is that we remove the nonlinear activation located between the multi-channel convolution layers and the fully-connected layers, as well as the one located between the latter and the output layer. We compensate for this removal by introducing a suitable regularization on the aforementioned layer outputs and filters during the training phase. Specifically, we apply a logarithm determinant regularization on the layer filters to break symmetry and force diversity in the learnt transforms, whereas we enforce the non-negativity constraint on the layer outputs to mitigate the issue of dead neurons. This results in the effective learning of a richer set of features and filters with respect to a standard convolutional neural network. Numerical experiments confirm that the proposed model yields considerably better results than state-of-the-art deep learning techniques for the real-world problem of stock trading.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
205,562
2404.04877
A Bird-Eye view on DNA Storage Simulators
In the current world, due to the huge demand for storage, DNA-based storage solutions sound quite promising because of their longevity, low power consumption, and high capacity. However, in real life, storing data in the form of DNA is quite expensive and challenging. Therefore researchers and developers build software that helps simulate real-life DNA storage without worrying about the cost. This paper aims to review some of the software that performs DNA storage simulations in different domains. The paper also explains core concepts such as synthesis, sequencing, clustering, reconstruction, the GC window, the k-mer window, etc., and gives an overview of existing algorithms. Further, we present three different software tools on the basis of domain, implementation techniques, and customer/commercial usability.
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
true
444,834
2103.16832
Online Learning of a Probabilistic and Adaptive Scene Representation
Constructing and maintaining a consistent scene model on-the-fly is the core task for online spatial perception, interpretation, and action. In this paper, we represent the scene with a Bayesian nonparametric mixture model, seamlessly describing per-point occupancy status with a continuous probability density function. Instead of following the conventional data fusion paradigm, we address the problem of learning online the process by which sequential point cloud data are generated from the scene geometry. An incremental and parallel inference is performed to update the parameter space in real-time. We experimentally show that the proposed representation achieves state-of-the-art accuracy with promising efficiency. The consistent probabilistic formulation assures a generative model that is adaptive to different sensor characteristics, and the model complexity can be dynamically adjusted on-the-fly according to different data scales.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
227,722
1505.00484
Limited Feedback in Multiple-Antenna Systems with One-Bit Quantization
Communication systems with low-resolution analog-to-digital-converters (ADCs) can exploit channel state information at the transmitter (CSIT) and receiver. This paper presents initial results on codebook design and performance analysis for limited feedback systems with one-bit ADCs. Different from the high-resolution case, the absolute phase at the receiver is important to align the phase of the received signals when the received signal is sliced by one-bit ADCs. A new codebook design for the beamforming case is proposed that separately quantizes the channel direction and the residual phase.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
42,733
1505.03229
APAC: Augmented PAttern Classification with Neural Networks
Deep neural networks have been exhibiting splendid accuracies on many visual pattern classification problems. Many of the state-of-the-art methods employ a technique known as data augmentation at the training stage. This paper addresses the issue of the decision rule for classifiers trained with augmented data. Our method is named APAC: Augmented PAttern Classification, a way of classification using the optimal decision rule for augmented data learning. Discussion of methods of data augmentation is not our primary focus. We show clear evidence that APAC gives far better generalization performance than the traditional way of class prediction in several experiments. Our convolutional neural network model with APAC achieved a state-of-the-art accuracy on the MNIST dataset among non-ensemble classifiers. Even our multilayer perceptron model beats some of the convolutional models with recently invented stochastic regularization techniques on the CIFAR-10 dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
43,055
1701.07675
Sparse Ternary Codes for similarity search have higher coding gain than dense binary codes
This paper addresses the problem of Approximate Nearest Neighbor (ANN) search in pattern recognition, where feature vectors in a database are encoded as compact codes in order to speed up similarity search in large-scale databases. Considering the ANN problem from an information-theoretic perspective, we interpret it as an encoding which maps the original feature vectors to a less entropic sparse representation while requiring them to be as informative as possible. We then define the coding gain for ANN search using information-theoretic measures. We next show that the classical approach to this problem, which consists of binarization of the projected vectors, is sub-optimal. Instead, a properly designed ternary encoding achieves higher coding gains and lower complexity.
false
false
false
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
67,332
2203.03546
LMN at SemEval-2022 Task 11: A Transformer-based System for English Named Entity Recognition
Processing complex and ambiguous named entities is a challenging research problem, but it has not received sufficient attention from the natural language processing community. In this short paper, we present our participation in the English track of SemEval-2022 Task 11: Multilingual Complex Named Entity Recognition. Inspired by the recent advances in pretrained Transformer language models, we propose a simple yet effective Transformer-based baseline for the task. Despite its simplicity, our proposed approach shows competitive results in the leaderboard, as we ranked 12th out of 30 teams. Our system achieved a macro F1 score of 72.50% on the held-out test set. We have also explored a data augmentation approach using entity linking. While the approach does not improve the final performance, we discuss it in this paper as well.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
284,124
2412.10582
WHAT-IF: Exploring Branching Narratives by Meta-Prompting Large Language Models
WHAT-IF -- Writing a Hero's Alternate Timeline through Interactive Fiction -- is a system that uses zero-shot meta-prompting to create branching narratives from a prewritten story. Played as an interactive fiction (IF) game, WHAT-IF lets the player choose between decisions that the large language model (LLM) GPT-4 generates as possible branches in the story. Starting with an existing linear plot as input, a branch is created at each key decision taken by the main character. By meta-prompting the LLM to consider the major plot points from the story, the system produces coherent and well-structured alternate storylines. WHAT-IF stores the branching plot tree in a graph which helps it to both keep track of the story for prompting and maintain the structure for the final IF system. A video demo of our system can be found here: https://youtu.be/8vBqjqtupcc.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
517,003
2206.08509
Neural Architecture Adaptation for Object Detection by Searching Channel Dimensions and Mapping Pre-trained Parameters
Most object detection frameworks use backbone architectures originally designed for image classification, conventionally with pre-trained parameters on ImageNet. However, image classification and object detection are essentially different tasks and there is no guarantee that the optimal backbone for classification is also optimal for object detection. Recent neural architecture search (NAS) research has demonstrated that automatically designing a backbone specifically for object detection helps improve the overall accuracy. In this paper, we introduce a neural architecture adaptation method that can optimize the given backbone for detection purposes, while still allowing the use of pre-trained parameters. We propose to adapt both the micro- and macro-architecture by searching for specific operations and the number of layers, in addition to the output channel dimensions of each block. It is important to find the optimal channel depth, as it greatly affects the feature representation capability and computation cost. We conduct experiments with our searched backbone for object detection and demonstrate that our backbone outperforms both manually designed and searched state-of-the-art backbones on the COCO dataset.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
303,171