Dataset columns:
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0–541k)
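The columns above describe a multi-label scheme: each row carries one boolean flag per cs.* category, so a row may belong to several categories at once. A minimal pandas sketch of how such rows might be filtered and counted (the two-row frame below is a hypothetical stand-in for illustration, not the real data):

```python
import pandas as pd

# Hypothetical stand-in rows mirroring the schema above: id, title,
# a subset of the boolean category columns, and the original index.
df = pd.DataFrame(
    {
        "id": ["1811.04860", "2311.18071"],
        "title": ["Bio-YODIE: ...", "Turn Down the Noise: ..."],
        "cs.AI": [True, False],
        "cs.CV": [False, True],
        "cs.IR": [True, False],
        "__index_level_0__": [113191, 411543],
    }
)

category_cols = ["cs.AI", "cs.CV", "cs.IR"]

# Select rows flagged with a given category via boolean indexing.
ai_rows = df[df["cs.AI"]]

# Per-row label count; booleans sum as 0/1, and a row may have
# several True flags because the labels are not mutually exclusive.
df["n_labels"] = df[category_cols].sum(axis=1)

print(ai_rows["id"].tolist())      # ['1811.04860']
print(df["n_labels"].tolist())     # [2, 1]
```

With the full dataset the same pattern applies; only `category_cols` would list all 18 flag columns.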
1811.04860
Bio-YODIE: A Named Entity Linking System for Biomedical Text
Ever-expanding volumes of biomedical text require automated semantic annotation techniques if they are to be curated and put to best use. An established field of research seeks to link mentions in text to knowledge bases such as those included in the UMLS (Unified Medical Language System), in order to enable a more sophisticated understanding. This work has yielded good results for tasks such as curating literature, but increasingly, annotation systems are more broadly applied. Medical vocabularies are expanding in size, and with them the extent of term ambiguity. Document collections are increasing in size and complexity, creating a greater need for speed and robustness. Furthermore, as the technologies are turned to new tasks, requirements change; for example, greater coverage of expressions may be required in order to annotate patient records, and greater accuracy may be needed for applications that affect patients. This places new demands on the approaches currently in use. In this work, we present a new system, Bio-YODIE, and compare it to two other popular systems in order to give guidance about suitable approaches in different scenarios and how systems might be designed to accommodate future needs.
Categories: cs.AI, cs.IR, cs.CL
__index_level_0__: 113,191
2311.18071
Turn Down the Noise: Leveraging Diffusion Models for Test-time Adaptation via Pseudo-label Ensembling
The goal of test-time adaptation is to adapt a source-pretrained model to a continuously changing target domain without relying on any source data. Typically, this is either done by updating the parameters of the model (model adaptation) using inputs from the target domain or by modifying the inputs themselves (input adaptation). However, methods that modify the model suffer from the issue of compounding noisy updates whereas methods that modify the input need to adapt to every new data point from scratch while also struggling with certain domain shifts. We introduce an approach that leverages a pre-trained diffusion model to project the target domain images closer to the source domain and iteratively updates the model via pseudo-label ensembling. Our method combines the advantages of model and input adaptations while mitigating their shortcomings. Our experiments on CIFAR-10C demonstrate the superiority of our approach, outperforming the strongest baseline by an average of 1.7% across 15 diverse corruptions and surpassing the strongest input adaptation baseline by an average of 18%.
Categories: cs.CV
__index_level_0__: 411,543
1812.07695
Index-based, High-dimensional, Cosine Threshold Querying with Optimality Guarantees
Given a database of vectors, a cosine threshold query returns all vectors in the database having cosine similarity to a query vector above a given threshold {\theta}. These queries arise naturally in many applications, such as document retrieval, image search, and mass spectrometry. The present paper considers the efficient evaluation of such queries, providing novel optimality guarantees and exhibiting good performance on real datasets. We take as a starting point Fagin's well-known Threshold Algorithm (TA), which can be used to answer cosine threshold queries as follows: an inverted index is first built from the database vectors during pre-processing; at query time, the algorithm traverses the index partially to gather a set of candidate vectors to be later verified for {\theta}-similarity. However, directly applying TA in its raw form misses significant optimization opportunities. Indeed, we first show that one can take advantage of the fact that the vectors can be assumed to be normalized, to obtain an improved, tight stopping condition for index traversal and to efficiently compute it incrementally. Then we show that one can take advantage of data skewness to obtain better traversal strategies. In particular, we show a novel traversal strategy that exploits a common data skewness condition which holds in multiple domains including mass spectrometry, documents, and image databases. We show that under the skewness assumption, the new traversal strategy has a strong, near-optimal performance guarantee. The techniques developed in the paper are quite general since they can be applied to a large class of similarity functions beyond cosine.
Categories: cs.DB
__index_level_0__: 116,860
1506.03289
Characterizing the intrinsic correlations of scale-free networks
Very often, when studying topological or dynamical properties of random scale-free networks, it is tacitly assumed that degree-degree correlations are not present. However, simple constraints, such as the absence of multiple edges and self-loops, can give rise to intrinsic correlations in these structures. In the same way that Fermionic correlations in thermodynamic systems are relevant only in the limit of low temperature, the intrinsic correlations in scale-free networks are relevant only when the extreme values for the degrees grow faster than the square root of the network size. In this situation, these correlations can significantly affect the dependence of the average degree of the nearest neighbors of a given vertex on this vertex's degree. Here, we introduce an analytical approach that is capable of predicting the functional form of this property. Moreover, our results indicate that random scale-free network models are not self-averaging, that is, the second moment of their degree distribution may vary by orders of magnitude among different realizations. Finally, we argue that the intrinsic correlations investigated here may have a profound impact on the critical properties of random scale-free networks.
Categories: cs.SI
__index_level_0__: 44,030
2312.04063
An unsupervised approach towards promptable defect segmentation in laser-based additive manufacturing by Segment Anything
Foundation models are currently driving a paradigm shift in computer vision tasks for various fields including biology, astronomy, and robotics among others, leveraging user-generated prompts to enhance their performance. In the Laser Additive Manufacturing (LAM) domain, accurate image-based defect segmentation is imperative to ensure product quality and facilitate real-time process control. However, such tasks are often characterized by multiple challenges including the absence of labels and the requirement for low latency inference among others. Porosity is a very common defect in LAM due to lack of fusion, entrapped gas, and keyholes, directly affecting mechanical properties like tensile strength, stiffness, and hardness, thereby compromising the quality of the final product. To address these issues, we construct a framework for image segmentation using a state-of-the-art Vision Transformer (ViT) based Foundation model (Segment Anything Model) with a novel multi-point prompt generation scheme using unsupervised clustering. Utilizing our framework we perform porosity segmentation in a case study of laser-based powder bed fusion (L-PBF) and obtain high accuracy without using any labeled data to guide the prompt tuning process. By capitalizing on lightweight foundation model inference combined with unsupervised prompt generation, we envision constructing a real-time anomaly detection pipeline that could revolutionize current laser additive manufacturing processes, thereby facilitating the shift towards Industry 4.0 and promoting defect-free production along with operational efficiency.
Categories: cs.CV
__index_level_0__: 413,534
1802.05591
Towards End-to-End Lane Detection: an Instance Segmentation Approach
Modern cars are incorporating an increasing number of driver assist features, among which is automatic lane keeping. The latter allows the car to properly position itself within the road lanes, which is also crucial for any subsequent lane departure or trajectory planning decision in fully autonomous cars. Traditional lane detection methods rely on a combination of highly specialized, hand-crafted features and heuristics, usually followed by post-processing techniques, that are computationally expensive and prone to scalability issues due to road scene variations. More recent approaches leverage deep learning models trained for pixel-wise lane segmentation, which, thanks to their large receptive field, can detect lanes even when no markings are present in the image. Despite their advantages, these methods are limited to detecting a pre-defined, fixed number of lanes, e.g. ego-lanes, and cannot cope with lane changes. In this paper, we go beyond the aforementioned limitations and propose to cast the lane detection problem as an instance segmentation problem - in which each lane forms its own instance - that can be trained end-to-end. To parametrize the segmented lane instances before fitting the lane, we further propose to apply a learned perspective transformation, conditioned on the image, in contrast to a fixed "bird's-eye view" transformation. By doing so, we ensure a lane fitting which is robust against road plane changes, unlike existing approaches that rely on a fixed, pre-defined transformation. In summary, we propose a fast lane detection algorithm, running at 50 fps, which can handle a variable number of lanes and cope with lane changes. We verify our method on the tuSimple dataset and achieve competitive results.
Categories: cs.CV
__index_level_0__: 90,465
0810.2598
New avenue to the Parton Distribution Functions: Self-Organizing Maps
Neural network algorithms have been recently applied to construct Parton Distribution Function (PDF) parametrizations which provide an alternative to standard global fitting procedures. We propose a technique based on an interactive neural network algorithm using Self-Organizing Maps (SOMs). SOMs are a class of clustering algorithms based on competitive learning among spatially-ordered neurons. Our SOMs are trained on selections of stochastically generated PDF samples. The selection criterion for every optimization iteration is based on the features of the clustered PDFs. Our main goal is to provide a fitting procedure that, at variance with the standard neural network approaches, allows for an increased control of the systematic bias by enabling user interaction in the various stages of the process.
Categories: cs.CE
__index_level_0__: 2,503
2408.12416
Unlearning Trojans in Large Language Models: A Comparison Between Natural Language and Source Code
This work investigates the application of Machine Unlearning (MU) for mitigating the impact of trojans embedded in conventional large language models of natural language (Text-LLMs) and large language models of code (Code-LLMs). We propose a novel unlearning approach, LYA, that leverages both gradient ascent and elastic weight consolidation, a Fisher Information Matrix (FIM)-based regularization technique, to unlearn trojans from poisoned models. We compare the effectiveness of LYA against conventional techniques like fine-tuning, retraining, and vanilla gradient ascent. The subject models we investigate are BERT and CodeBERT, for sentiment analysis and code defect detection tasks, respectively. Our findings demonstrate that the combination of gradient ascent and FIM-based regularization, as done in LYA, outperforms existing methods in removing the trojan's influence from the poisoned model while preserving its original functionality. To the best of our knowledge, this is the first work that compares and contrasts MU of trojans in LLMs in the natural language and code domains.
Categories: cs.LG, Other
__index_level_0__: 482,723
2312.12641
Robust Point Matching with Distance Profiles
We show the outlier robustness and noise stability of practical matching procedures based on distance profiles. Although the idea of matching points based on invariants like distance profiles has a long history in the literature, there has been little understanding of the theoretical properties of such procedures, especially in the presence of outliers and noise. We provide a theoretical analysis showing that under certain probabilistic settings, the proposed matching procedure is successful with high probability even in the presence of outliers and noise. We demonstrate the performance of the proposed method using a real data example and provide simulation studies to complement the theoretical findings. Lastly, we extend the concept of distance profiles to the abstract setting and connect the proposed matching procedure to the Gromov-Wasserstein distance and its lower bound, with a new sample complexity result derived based on the properties of distance profiles. This paper contributes to the literature by providing theoretical underpinnings of the matching procedures based on invariants like distance profiles, which have been widely used in practice but have rarely been analyzed theoretically.
Categories: cs.LG
__index_level_0__: 417,034
2005.01020
Investigating the Effects of Robot Engagement Communication on Learning from Demonstration
Robot Learning from Demonstration (RLfD) is a technique for robots to derive policies from instructors' examples. Although the reciprocal effects of student engagement on teacher behavior are widely recognized in the educational community, it is unclear whether the same phenomenon holds true for RLfD. To fill this gap, we first design three types of robot engagement behavior (attention, imitation, and a hybrid of the two) based on the learning literature. We then conduct, in a simulation environment, a within-subject user study to investigate the impact of different robot engagement cues on humans compared to a "without-engagement" condition. Results suggest that engagement communication significantly changes the human's estimation of the robot's capability and significantly raises their expectations of the learning outcomes, even though we do not run actual learning algorithms in the experiments. Moreover, imitation behavior affects humans more than attention does in all metrics, while their combination has the most profound influence on humans. We also find that communicating engagement via imitation or the combined behavior significantly improves humans' perception of the quality of demonstrations, even if all demonstrations are of the same quality.
Categories: cs.HC, cs.RO
__index_level_0__: 175,471
1612.02605
Towards Information-Seeking Agents
We develop a general problem setting for training and testing the ability of agents to gather information efficiently. Specifically, we present a collection of tasks in which success requires searching through a partially observed environment for fragments of information which can be pieced together to accomplish various goals. We combine deep architectures with techniques from reinforcement learning to develop agents that solve our tasks. We shape the behavior of these agents by combining extrinsic and intrinsic rewards. We empirically demonstrate that these agents learn to search actively and intelligently for new information to reduce their uncertainty, and to exploit information they have already acquired.
Categories: cs.LG
__index_level_0__: 65,256
2209.10385
Long-Lived Accurate Keypoints in Event Streams
We present a novel end-to-end approach to keypoint detection and tracking in an event stream that provides better precision and much longer keypoint tracks than previous methods. This is made possible by two contributions working together. First, we propose a simple procedure to generate stable keypoint labels, which we use to train a recurrent architecture. This training data results in detections that are very consistent over time. Moreover, we observe that previous methods for keypoint detection work on a representation (such as the time surface) that integrates events over a period of time. Since this integration is required, we claim it is better to predict the keypoints' trajectories for the time period rather than single locations, as done in previous approaches. We predict these trajectories in the form of a series of heatmaps for the integration time period. This improves the keypoint localization. Our architecture can also be kept very simple, which results in very fast inference times. We demonstrate our approach on the HVGA ATIS Corner dataset as well as "The Event-Camera Dataset and Simulator" dataset, and show it results in keypoint tracks that are three times longer and nearly twice as accurate as the best previous state-of-the-art methods. We believe our approach can be generalized to other event-based camera problems, and we release our source code to encourage other authors to explore it.
Categories: cs.CV
__index_level_0__: 318,845
2207.03546
BibleTTS: a large, high-fidelity, multilingual, and uniquely African speech corpus
BibleTTS is a large, high-quality, open speech dataset for ten languages spoken in Sub-Saharan Africa. The corpus contains up to 86 hours of aligned, studio quality 48kHz single speaker recordings per language, enabling the development of high-quality text-to-speech models. The ten languages represented are: Akuapem Twi, Asante Twi, Chichewa, Ewe, Hausa, Kikuyu, Lingala, Luganda, Luo, and Yoruba. This corpus is a derivative work of Bible recordings made and released by the Open.Bible project from Biblica. We have aligned, cleaned, and filtered the original recordings, and additionally hand-checked a subset of the alignments for each language. We present results for text-to-speech models with Coqui TTS. The data is released under a commercial-friendly CC-BY-SA license.
Categories: cs.SD, cs.CL
__index_level_0__: 306,879
2307.11465
A Deep Learning Approach for Overall Survival Prediction in Lung Cancer with Missing Values
In the field of lung cancer research, particularly in the analysis of overall survival (OS), artificial intelligence (AI) serves crucial roles with specific aims. Given the prevalent issue of missing data in the medical domain, our primary objective is to develop an AI model capable of dynamically handling this missing data. Additionally, we aim to leverage all accessible data, effectively analyzing both uncensored patients who have experienced the event of interest and censored patients who have not, by embedding a specialized technique within our AI model, not commonly utilized in other AI tasks. Through the realization of these objectives, our model aims to provide precise OS predictions for non-small cell lung cancer (NSCLC) patients, thus overcoming these significant challenges. We present a novel approach to survival analysis with missing values in the context of NSCLC, which exploits the strengths of the transformer architecture to account only for available features without requiring any imputation strategy. More specifically, this model tailors the transformer architecture to tabular data by adapting its feature embedding and masked self-attention to mask missing data and fully exploit the available ones. By making use of ad-hoc designed losses for OS, it is able to account for both censored and uncensored patients, as well as changes in risks over time. We compared our method with state-of-the-art models for survival analysis coupled with different imputation strategies. We evaluated the results obtained over a period of 6 years using different time granularities obtaining a Ct-index, a time-dependent variant of the C-index, of 71.97, 77.58 and 80.72 for time units of 1 month, 1 year and 2 years, respectively, outperforming all state-of-the-art methods regardless of the imputation method used.
Categories: cs.LG
__index_level_0__: 380,916
1906.06492
A formal approach for customization of schema.org based on SHACL
Schema.org is a widely adopted vocabulary for semantic annotation of content and data. However, its generic nature makes it complicated for data publishers to pick the right types and properties for a specific domain and task. In this paper we propose a formal approach, a domain specification process that generates domain-specific patterns by applying operators implemented in SHACL to the schema.org vocabulary. These patterns can support knowledge generation and assessment processes for specific domains and tasks. We demonstrate our approach with use cases in the tourism domain.
Categories: cs.IR, cs.DB
__index_level_0__: 135,315
2004.06632
A Survey on Activation Functions and their relation with Xavier and He Normal Initialization
In artificial neural networks, the activation function and the weight initialization method play important roles in the training and performance of a network. The question that arises is what properties of a function are important or necessary for it to be a well-performing activation function. Also, the most widely used weight initialization methods - Xavier and He normal initialization - have a fundamental connection with the activation function. This survey discusses the important and necessary properties of activation functions and the most widely used activation functions (sigmoid, tanh, ReLU, LReLU and PReLU). This survey also explores the relationship between these activation functions and the two weight initialization methods - Xavier and He normal initialization.
Categories: cs.LG, cs.NE
__index_level_0__: 172,560
2302.05073
Digital Twin-Aided Learning for Managing Reconfigurable Intelligent Surface-Assisted, Uplink, User-Centric Cell-Free Systems
This paper puts forth a new, reconfigurable intelligent surface (RIS)-assisted, uplink, user-centric cell-free (UCCF) system managed with the assistance of a digital twin (DT). Specifically, we propose a novel learning framework that maximizes the sum-rate by jointly optimizing the access point and user association (AUA), power control, and RIS beamforming. This problem is challenging and has never been addressed due to its prohibitively large and complex solution space. Our framework decouples the AUA from the power control and RIS beamforming (PCRB) based on the different natures of their variables, hence reducing the solution space. A new position-adaptive binary particle swarm optimization (PABPSO) method is designed for the AUA. Two twin-delayed deep deterministic policy gradient (TD3) models with new and refined state pre-processing layers are developed for the PCRB. Another important aspect is that a DT is leveraged to train the learning framework with its replay of channel estimates stored. The AUA, power control, and RIS beamforming are only tested in the physical environment at the end of selected epochs. Simulations show that using RISs contributes to considerable increases in the sum-rate of UCCF systems, and the DT dramatically reduces overhead with marginal performance loss. The proposed framework is superior to its alternatives in terms of sum-rate and convergence stability.
Categories: cs.AI
__index_level_0__: 344,920
1903.00984
Tight Robot Packing in the Real World: A Complete Manipulation Pipeline with Robust Primitives
Many order fulfillment applications in logistics, such as packing, involve picking objects from unstructured piles before tightly arranging them in bins or shipping containers. Desirable robotic solutions in this space need to be low-cost, robust, easily deployable and simple to control. The current work proposes a complete pipeline for solving packing tasks for cuboid objects, given access only to RGB-D data and a single robot arm with a vacuum-based end-effector, which is also used as a pushing or dragging finger. The pipeline integrates perception for detecting the objects and planning so as to properly pick and place objects. The key challenges correspond to sensing noise and failures in execution, which appear at multiple steps of the process. To achieve robustness, three uncertainty-reducing manipulation primitives are proposed, which take advantage of the end-effector's and the workspace's compliance, to successfully and tightly pack multiple cuboid objects. The overall solution is demonstrated to be robust to execution and perception errors. The impact of each manipulation primitive is evaluated in extensive real-world experiments by considering different versions of the pipeline. Furthermore, an open-source simulation framework is provided for modeling such packing operations. Ablation studies are performed within this simulation environment to evaluate features of the proposed primitives.
Categories: cs.RO
__index_level_0__: 123,156
1508.07097
A Dynamical Model of Twitter Activity Profiles
The advent of the era of Big Data has allowed many researchers to dig into various socio-technical systems, including social media platforms. In particular, these systems have provided them with certain verifiable means to look into certain aspects of human behavior. In this work, we are specifically interested in the behavior of individuals on social media platforms---how they handle the information they get, and how they share it. We look into Twitter to understand the dynamics behind the users' posting activities---tweets and retweets---zooming in on topics that peaked in popularity. Three mechanisms are considered: endogenous stimuli, exogenous stimuli, and a mechanism that dictates the decay of interest of the population in a topic. We propose a model involving two parameters $\eta^\star$ and $\lambda$ describing the tweeting behaviour of users, which allow us to reconstruct the findings of Lehmann et al. (2012) on the temporal profiles of popular Twitter hashtags. With this model, we are able to accurately reproduce the temporal profile of user engagements on Twitter. Furthermore, we introduce an alternative in classifying the collective activities on the socio-technical system based on the model.
Categories: cs.HC, cs.SI, cs.CY
__index_level_0__: 46,378
2410.02258
Physics-Constrained Taylor Neural Networks for Learning and Control of Dynamical Systems
Data-driven approaches are increasingly popular for identifying dynamical systems due to improved accuracy and availability of sensor data. However, relying solely on data for identification does not guarantee that the identified systems will maintain their physical properties or that the predicted models will generalize well. In this paper, we propose a novel method for system identification by integrating a neural network as the first-order derivative of a Taylor series expansion instead of learning a dynamical function directly. This approach, called Monotonic Taylor Neural Networks (MTNN), aims to ensure monotonic properties of dynamical systems by constraining the conditions for the output of the neural networks model to be either always non-positive or non-negative. These conditions are constructed in two ways: by designing a new neural network architecture or by regularizing the loss function for training. The proposed method demonstrates better performance compared to methods without constraints on the monotonic properties of the systems when tested with experimental data from two real-world systems, including HVAC and TCLab. Furthermore, MTNN shows good performance in an actual control application when using a model predictive controller for a nonlinear MIMO system, illustrating the practical applications of this method.
Categories: cs.SY
__index_level_0__: 494,196
2006.04340
The Strength of Nesterov's Extrapolation in the Individual Convergence of Nonsmooth Optimization
The extrapolation strategy raised by Nesterov, which can accelerate the convergence rate of gradient descent methods by orders of magnitude when dealing with smooth convex objective, has led to tremendous success in training machine learning tasks. In this article, the convergence of individual iterates of projected subgradient (PSG) methods for nonsmooth convex optimization problems is theoretically studied based on Nesterov's extrapolation, which we name individual convergence. We prove that Nesterov's extrapolation has the strength to make the individual convergence of PSG optimal for nonsmooth problems. In light of this consideration, a direct modification of the subgradient evaluation suffices to achieve optimal individual convergence for strongly convex problems, which can be regarded as making an interesting step toward the open question about stochastic gradient descent (SGD) posed by Shamir. Furthermore, we give an extension of the derived algorithms to solve regularized learning tasks with nonsmooth losses in stochastic settings. Compared with other state-of-the-art nonsmooth methods, the derived algorithms can serve as an alternative to the basic SGD especially in coping with machine learning problems, where an individual output is needed to guarantee the regularization structure while keeping an optimal rate of convergence. Typically, our method is applicable as an efficient tool for solving large-scale $l$1-regularized hinge-loss learning problems. Several comparison experiments demonstrate that our individual output not only achieves an optimal convergence rate but also guarantees better sparsity than the averaged solution.
Categories: cs.LG
__index_level_0__: 180,656
2103.08133
$\omega-$nonblocking supervisory control of discrete-event systems with infinite behavior
In the supervisory control framework of discrete-event systems (DES) with infinite behavior initiated by Thistle and Wonham, a supervisor satisfying the minimal acceptable specification and the maximal legal specification is synthesized. However, this supervisor may incur livelocks, as it cannot ensure that the infinite behavior under supervision will always visit some marker states. To tackle this problem, we propose the definition of markability, requiring that all infinite cycles include at least one marker state. We then formulate the problem of $\omega-$nonblocking supervisory control of DES with infinite behavior, which is to synthesize an $\omega-$nonblocking (i.e. nonblocking, deadlock-free and livelock-free) supervisor. An algorithm is proposed to achieve $\omega-$nonblockingness by computing the supremal $*-$controllable, $*-$closed, $\omega-$controllable and markable sublanguage. A robot serves as a running example.
Categories: cs.SY
__index_level_0__: 224,806
1503.06900
New Classes of Partial Geometries and Their Associated LDPC Codes
The use of partial geometries to construct parity-check matrices for LDPC codes has resulted in the design of successful codes with a probability of error close to the Shannon capacity at bit error rates down to $10^{-15}$. Such considerations have motivated this further investigation. A new and simple construction of a type of partial geometries with quasi-cyclic structure is given and their properties are investigated. The trapping sets of the partial geometry codes were considered previously using the geometric aspects of the underlying structure to derive information on the size of allowable trapping sets. This topic is further considered here. Finally, there is a natural relationship between partial geometries and strongly regular graphs. The eigenvalues of the adjacency matrices of such graphs are well known and it is of interest to determine if any of the Tanner graphs derived from the partial geometries are good expanders for certain parameter sets, since it can be argued that codes with good geometric and expansion properties might perform well under message-passing decoding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
41,413
2012.06158
Reduced-Order Nonlinear Observers via Contraction Analysis and Convex Optimization
In this paper, we propose a new approach to design globally convergent reduced-order observers for nonlinear control systems via contraction analysis and convex optimization. Despite the fact that contraction is a concept naturally suitable for state estimation, the existing solutions are either local or relatively conservative when applied to physical systems. To address this, we show that this problem can be translated into an off-line search for a coordinate transformation after which the dynamics is (transversely) contracting. The obtained sufficient condition consists of some easily verifiable differential inequalities, which, on one hand, identify a very general class of "detectable" nonlinear systems, and on the other hand, can be expressed as computationally efficient convex optimization, making the design procedure more systematic. Connections with some well-established approaches and concepts are also clarified in the paper. Finally, we illustrate the proposed method with several numerical and physical examples, including polynomial, mechanical, electromechanical and biochemical systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
211,019
2410.03885
Collaborative Safety-Critical Formation Control with Obstacle Avoidance
This work explores a collaborative method for ensuring safety in multi-agent formation control problems. We formulate a control barrier function (CBF) based safety filter control law for a generic distributed formation controller and extend our previously developed collaborative safety framework to an obstacle avoidance problem for agents with acceleration control inputs. We then incorporate multi-obstacle collision avoidance into the collaborative safety framework. This framework includes a method for computing the maximum capability of agents to satisfy their individual safety requirements. We analyze the convergence rate of our collaborative safety algorithm, and prove the linear-time convergence of cooperating agents to a jointly feasible safe action for all agents under the special case of a tree-structured communication network with a single obstacle for each agent. We illustrate the analytical results via simulation on a mass-spring kinematics-based formation controller and demonstrate the finite-time convergence of the collaborative safety algorithm in the simple proven case, the more general case of a fully-connected system with multiple static obstacles, and with dynamic obstacles.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
495,024
1401.7474
The phenotypic expansion and its boundaries
The development of sport performances in the future is a subject of myth and disagreement among experts. While arguments for and against such forecasting have been discussed, other publications have empirically shown that the past development of performances followed a nonlinear trend. Other works, while deeply exploring the conditions leading to world records, highlighted that performance is tied to the economic and geopolitical context. Here we investigated the following human boundaries: the development of performances over time in Olympic and non-Olympic events, and the development of sport performances with aging in humans and other species (greyhounds, thoroughbreds, mice). The development of performances from a broader point of view (demography & lifespan) in a specific sub-system centered on primary energy was also investigated. We show that the physiological developments are limited with time. Three major and direct determinants of sport performance are age, technology and climatic conditions (temperature). However, all observed developments are related to the international context, including the efficient use of primary energies. This last parameter is a major indirect propeller of performance development. We show that when physiological and societal performance indicators such as lifespan and population density depend on primary energies, the energy source, competition and mobility are key parameters for achieving long-term sustainable trajectories. Otherwise, the vast majority (98.7%) of the studied trajectories reaches 0 before 15 generations, due to the consumption of fossil energy and a low mobility rate. This led us to consider that in the present turbulent economic context and given the upcoming energy crisis, societal and physical performances are not expected to grow continuously.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
30,459
2110.07498
End-to-end Keyword Spotting using Xception-1d
The field of conversational agents is growing fast and there is an increasing need for algorithms that enhance natural interaction. In this work we show how we achieved state-of-the-art results in the Keyword Spotting field by adapting and tweaking the Xception algorithm, which achieved outstanding results in several computer vision tasks. We obtained about 96% accuracy when classifying audio clips belonging to 35 different categories, beating human annotation on the most complex tasks proposed.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
261,012
2310.08743
Development and Validation of a Deep Learning-Based Microsatellite Instability Predictor from Prostate Cancer Whole-Slide Images
Microsatellite instability-high (MSI-H) is a tumor agnostic biomarker for immune checkpoint inhibitor therapy. However, MSI status is not routinely tested in prostate cancer, in part due to low prevalence and assay cost. As such, prediction of MSI status from hematoxylin and eosin (H&E) stained whole-slide images (WSIs) could identify prostate cancer patients most likely to benefit from confirmatory testing and becoming eligible for immunotherapy. Prostate biopsies and surgical resections from de-identified records of consecutive prostate cancer patients referred to our institution were analyzed. Their MSI status was determined by next generation sequencing. Patients before a cutoff date were split into an algorithm development set (n=4015, MSI-H 1.8%) and a paired validation set (n=173, MSI-H 19.7%) that consisted of two serial sections from each sample, one stained and scanned internally and the other at an external site. Patients after the cutoff date formed the temporal validation set (n=1350, MSI-H 2.3%). Attention-based multiple instance learning models were trained to predict MSI-H from H&E WSIs. The MSI-H predictor achieved area under the receiver operating characteristic curve values of 0.78 (95% CI [0.69-0.86]), 0.72 (95% CI [0.63-0.81]), and 0.72 (95% CI [0.62-0.82]) on the internally prepared, externally prepared, and temporal validation sets, respectively. While MSI-H status is significantly correlated with Gleason score, the model remained predictive within each Gleason score subgroup. In summary, we developed and validated an AI-based MSI-H diagnostic model on a large real-world cohort of routine H&E slides, which effectively generalized to externally stained and scanned samples and a temporally independent validation cohort. This algorithm has the potential to direct prostate cancer patients toward immunotherapy and to identify MSI-H cases secondary to Lynch syndrome.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
399,506
1703.10926
EMULATOR vs REAL PHONE: Android Malware Detection Using Machine Learning
The Android operating system has become the most popular operating system for smartphones and tablets, leading to a rapid rise in malware. Sophisticated Android malware employ detection avoidance techniques in order to hide their malicious activities from analysis tools. These include a wide range of anti-emulator techniques, where the malware programs attempt to hide their malicious activities by detecting the emulator. For this reason, countermeasures against anti-emulation are becoming increasingly important in Android malware detection. Analysis and detection based on real devices can alleviate the problems of anti-emulation as well as improve the effectiveness of dynamic analysis. Hence, in this paper we present an investigation of machine learning based malware detection using dynamic analysis on real devices. A tool is implemented to automatically extract dynamic features from Android phones and, through several experiments, a comparative analysis of emulator-based vs. device-based detection by means of several machine learning algorithms is undertaken. Our study shows that several features could be extracted more effectively from the on-device dynamic analysis compared to emulators. It was also found that approximately 24% more apps were successfully analysed on the phone. Furthermore, all of the studied machine learning based detection performed better when applied to features extracted from the on-device dynamic analysis.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
70,991
2208.11574
Improving on the Markov-Switching Regression Model by the Use of an Adaptive Moving Average
Regime detection is vital for the effective operation of trading and investment strategies. However, the most popular means of doing this, the two-state Markov-switching regression model (MSR), is not an optimal solution, as two volatility states do not fully capture the complexity of the market. Past attempts to extend this model to a multi-state MSR have proved unstable, potentially expensive in terms of trading costs, and can only divide the market into states with varying levels of volatility, which is not the only aspect of market dynamics relevant to trading. We demonstrate it is possible and valuable to instead segment the market into more than two states not on the basis of volatility alone, but on a combined basis of volatility and trend, by combining the two-state MSR with an adaptive moving average. A realistic trading framework is used to demonstrate that using two selected states from the four thus generated leads to better trading performance than traditional benchmarks, including the two-state MSR. In addition, the proposed model could serve as a label generator for machine learning tasks used in predicting financial regimes ex ante.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
314,479
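The four-regime construction described in the abstract above can be sketched as follows. Here a rolling-standard-deviation median split stands in for the fitted two-state Markov-switching regression, and a plain exponential moving average stands in for the adaptive moving average, so every window, threshold, and smoothing constant is a hypothetical choice.

```python
# Minimal sketch (hypothetical stand-ins, not the paper's fitted model):
# cross a two-state volatility label with a moving-average trend signal
# to obtain four market regimes.

def rolling_std(xs, w):
    out = []
    for i in range(len(xs)):
        win = xs[max(0, i - w + 1):i + 1]
        m = sum(win) / len(win)
        out.append((sum((v - m) ** 2 for v in win) / len(win)) ** 0.5)
    return out

def ema(xs, alpha=0.2):
    out, s = [], xs[0]
    for v in xs:
        s = alpha * v + (1 - alpha) * s
        out.append(s)
    return out

def four_state_labels(prices, w=5):
    rets = [b - a for a, b in zip(prices, prices[1:])]
    vol = rolling_std(rets, w)
    thresh = sorted(vol)[len(vol) // 2]      # median split: low vs. high vol
    ma = ema(prices)
    labels = []
    for i in range(len(rets)):
        v = "high" if vol[i] > thresh else "low"
        t = "up" if prices[i + 1] > ma[i + 1] else "down"
        labels.append(f"{v}-vol/{t}")
    return labels

prices = [100, 101, 102, 101, 108, 95, 104, 93, 106, 92]
labels = four_state_labels(prices)   # one of four regime labels per return
```

As in the abstract, such labels could then feed a trading rule that only acts in selected regimes, or serve as targets for a supervised learner.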
2407.16073
KaPQA: Knowledge-Augmented Product Question-Answering
Question-answering for domain-specific applications has recently attracted much interest due to the latest advancements in large language models (LLMs). However, accurately assessing the performance of these applications remains a challenge, mainly due to the lack of suitable benchmarks that effectively simulate real-world scenarios. To address this challenge, we introduce two product question-answering (QA) datasets focused on Adobe Acrobat and Photoshop products to help evaluate the performance of existing models on domain-specific product QA tasks. Additionally, we propose a novel knowledge-driven RAG-QA framework to enhance the performance of the models in the product QA task. Our experiments demonstrated that inducing domain knowledge through query reformulation allowed for increased retrieval and generative performance when compared to standard RAG-QA methods. This improvement, however, is slight, and thus illustrates the challenge posed by the datasets introduced.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
475,441
1408.0985
Speech earthquakes: scaling and universality in human voice
Speech is a distinctive complex feature of human capabilities. In order to understand the physics underlying speech production, in this work we empirically analyse the statistics of large human speech datasets spanning several languages. We first show that during speech the energy is unevenly released and power-law distributed, reporting a universal robust Gutenberg-Richter-like law in speech. We further show that such earthquakes in speech show temporal correlations, as the interevent statistics are again power-law distributed. Since this feature takes place in the intra-phoneme range, we conjecture that the mechanism responsible for this complex phenomenon is not cognitive but resides in the physiological speech production apparatus. Moreover, we show that these waiting time distributions are scale invariant under a renormalisation group transformation, suggesting that the process of speech generation is indeed operating close to a critical point. These results are put in contrast with current paradigms in speech processing, which point towards low-dimensional deterministic chaos as the origin of nonlinear traits in speech fluctuations. As these latter fluctuations are indeed the aspects that humanize synthetic speech, these findings may have an impact on future speech synthesis technologies. Results are robust and independent of the communication language or the number of speakers, pointing towards a universal pattern and yet another hint of complexity in human speech.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
35,125
2211.17204
Semisoft Task Clustering for Multi-Task Learning
Multi-task learning (MTL) aims to improve the performance of multiple related prediction tasks by leveraging useful information from them. Due to their flexibility and ability to reduce unknown coefficients substantially, the task-clustering-based MTL approaches have attracted considerable attention. Motivated by the idea of semisoft clustering of data, we propose a semisoft task clustering approach, which can simultaneously reveal the task cluster structure for both pure and mixed tasks as well as select the relevant features. The main assumption behind our approach is that each cluster has some pure tasks, and each mixed task can be represented by a linear combination of pure tasks in different clusters. To solve the resulting non-convex constrained optimization problem, we design an efficient three-step algorithm. The experimental results based on synthetic and real-world datasets validate the effectiveness and efficiency of the proposed approach. Finally, we extend the proposed approach to a robust task clustering problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
333,899
2012.02097
Recursive Tree Grammar Autoencoders
Machine learning on trees has been mostly focused on trees as input to algorithms. Much less research has investigated trees as output, which has many applications, such as molecule optimization for drug discovery, or hint generation for intelligent tutoring systems. In this work, we propose a novel autoencoder approach, called recursive tree grammar autoencoder (RTG-AE), which encodes trees via a bottom-up parser and decodes trees via a tree grammar, both learned via recursive neural networks that minimize the variational autoencoder loss. The resulting encoder and decoder can then be utilized in subsequent tasks, such as optimization and time series prediction. RTG-AEs are the first model to combine variational autoencoders, grammatical knowledge, and recursive processing. Our key message is that this unique combination of all three elements outperforms models which combine any two of the three. In particular, we perform an ablation study to show that our proposed method improves the autoencoding error, training time, and optimization score on synthetic as well as real datasets compared to four baselines. We further prove that RTG-AEs parse and generate trees in linear time and are expressive enough to handle all regular tree grammars.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
209,644
2003.12801
Probability error bounds for approximation of functions in reproducing kernel Hilbert spaces
We find probability error bounds for approximations of functions $f$ in a separable reproducing kernel Hilbert space $\mathcal{H}$ with reproducing kernel $K$ on a base space $X$, firstly in terms of finite linear combinations of functions of type $K_{x_i}$ and then in terms of the projection $\pi^n_x$ on $\mathrm{Span}\{K_{x_i}\}^n_{i=1}$, for random sequences of points $x=(x_i)_i$ in $X$. Given a probability measure $P$, letting $P_K$ be the measure defined by $\mathrm{d} P_K(x)=K(x,x)\mathrm{d} P(x)$, $x\in X$, our approach is based on the nonexpansive operator \[L^2(X;P_K)\ni\lambda\mapsto L_{P,K}\lambda:=\int_X \lambda(x)K_x\mathrm{d} P(x)\in \mathcal{H},\] where the integral exists in the Bochner sense. Using this operator, we then define a new reproducing kernel Hilbert space, denoted by $\mathcal{H}_P$, that is the operator range of $L_{P,K}$. Our main result establishes bounds, in terms of the operator $L_{P,K}$, on the probability that the Hilbert space distance between an arbitrary function $f\in\mathcal{H}$ and linear combinations of functions of type $K_{x_i}$, for $(x_i)_i$ sampled independently from $P$, falls below a given threshold. For sequences of points $(x_i)_{i=1}^\infty$ constituting a so-called uniqueness set, the orthogonal projections $\pi^n_x$ to $\mathrm{Span}\{K_{x_i}\}^n_{i=1}$ converge in the strong operator topology to the identity operator. We prove that, under the assumption that $\mathcal{H}_P$ is dense in $\mathcal{H}$, any sequence of iid samples from $P$ yields a uniqueness set with probability $1$. This result improves on previous error bounds in weaker norms, such as uniform or $L^p$ norms, which yield only convergence in probability and not a.c. convergence. Two examples that show the applicability of this result to a uniform distribution on a compact interval and to the Hardy space $H^2(\mathbb{D})$ are presented as well.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
170,017
1506.05790
Scalable Semi-Supervised Aggregation of Classifiers
We present and empirically evaluate an efficient algorithm that learns to aggregate the predictions of an ensemble of binary classifiers. The algorithm uses the structure of the ensemble predictions on unlabeled data to yield significant performance improvements. It does this without making assumptions on the structure or origin of the ensemble, without parameters, and as scalably as linear learning. We empirically demonstrate these performance gains with random forests.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
44,335
2001.00281
ZeroQ: A Novel Zero Shot Quantization Framework
Quantization is a promising approach for reducing the inference time and memory footprint of neural networks. However, most existing quantization methods require access to the original training dataset for retraining during quantization. This is often not possible for applications with sensitive or proprietary data, e.g., due to privacy and security concerns. Existing zero-shot quantization methods use different heuristics to address this, but they result in poor performance, especially when quantizing to ultra-low precision. Here, we propose ZeroQ, a novel zero-shot quantization framework to address this. ZeroQ enables mixed-precision quantization without any access to the training or validation data. This is achieved by optimizing for a Distilled Dataset, which is engineered to match the statistics of batch normalization across different layers of the network. ZeroQ supports both uniform and mixed-precision quantization. For the latter, we introduce a novel Pareto frontier based method to automatically determine the mixed-precision bit setting for all layers, with no manual search involved. We extensively test our proposed method on a diverse set of models, including ResNet18/50/152, MobileNetV2, ShuffleNet, SqueezeNext, and InceptionV3 on ImageNet, as well as RetinaNet-ResNet50 on the Microsoft COCO dataset. In particular, we show that ZeroQ can achieve 1.71% higher accuracy on MobileNetV2, as compared to the recently proposed DFQ method. Importantly, ZeroQ has a very low computational overhead, and it can finish the entire quantization process in less than 30s (0.5% of one epoch training time of ResNet50 on ImageNet). We have open-sourced the ZeroQ framework\footnote{https://github.com/amirgholami/ZeroQ}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
159,174
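The Distilled Dataset idea from the abstract above can be illustrated with a 1-D toy: gradient steps push a synthetic batch's mean and standard deviation toward stored target statistics, mimicking in miniature (without any neural network) how matching batch-normalization statistics removes the need for real training data. The initialization, learning rate, and step count are arbitrary choices.

```python
# Toy 1-D illustration (not the ZeroQ implementation): optimize a small
# synthetic batch so its mean and std match stored target statistics.

def distill(target_mu, target_sigma, n=8, steps=500, lr=0.05):
    # fixed pseudo-random initial batch (no external data needed)
    xs = [0.1 * ((i * 7919) % 13 - 6) for i in range(n)]
    for _ in range(steps):
        mu = sum(xs) / n
        var = sum((x - mu) ** 2 for x in xs) / n
        sigma = max(var ** 0.5, 1e-8)
        # gradient of (mu - target_mu)^2 + (sigma - target_sigma)^2
        grads = [2 * (mu - target_mu) / n
                 + 2 * (sigma - target_sigma) * (x - mu) / (n * sigma)
                 for x in xs]
        xs = [x - lr * g for x, g in zip(xs, grads)]
    return xs

xs = distill(3.0, 2.0)
mu = sum(xs) / len(xs)
sigma = (sum((x - mu) ** 2 for x in xs) / len(xs)) ** 0.5
# mu and sigma end close to the targets 3.0 and 2.0
```

In ZeroQ the analogous loss is summed over the BN layers of the pretrained network, and the optimized batch is then used to calibrate quantization ranges.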
2401.12764
Fast Nonlinear Two-Time-Scale Stochastic Approximation: Achieving $O(1/k)$ Finite-Sample Complexity
This paper proposes to develop a new variant of the two-time-scale stochastic approximation to find the roots of two coupled nonlinear operators, assuming only noisy samples of these operators can be observed. Our key idea is to leverage the classic Ruppert-Polyak averaging technique to dynamically estimate the operators through their samples. The estimated values of these averaging steps will then be used in the two-time-scale stochastic approximation updates to find the desired solution. Our main theoretical result is to show that under the strongly monotone condition of the underlying nonlinear operators the mean-squared errors of the iterates generated by the proposed method converge to zero at an optimal rate $O(1/k)$, where $k$ is the number of iterations. Our result significantly improves the existing result of two-time-scale stochastic approximation, where the best known finite-time convergence rate is $O(1/k^{2/3})$. We illustrate this result by applying the proposed method to develop new reinforcement learning algorithms with improved performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
423,492
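The averaging idea in the abstract above can be illustrated on a pair of coupled linear operators. The operators, step-size exponents, and gains below are invented for the toy and are not the paper's conditions; the point is only the structure: average the noisy operator samples, then drive the two iterates on different time scales.

```python
import random

# Toy sketch (illustrative constants, not the paper's algorithm) of
# two-time-scale stochastic approximation with Ruppert-Polyak-style
# averaging of the operator samples: find the root (0, 0) of the
# coupled operators F(x, y) = x - y and G(x, y) = y - x/2.

def two_time_scale(steps=100_000, seed=0):
    rng = random.Random(seed)
    x, y = 2.0, -1.0
    f_bar = g_bar = 0.0
    for k in range(1, steps + 1):
        f = (x - y) + rng.gauss(0.0, 1.0)      # noisy sample of F
        g = (y - x / 2) + rng.gauss(0.0, 1.0)  # noisy sample of G
        c = 1.0 / k ** 0.7                     # averaging step size
        f_bar += c * (f - f_bar)               # running operator estimates
        g_bar += c * (g - g_bar)
        x -= (1.0 / k) * f_bar                 # slow time scale
        y -= (2.0 / k ** 0.9) * g_bar          # fast time scale
    return x, y

x_hat, y_hat = two_time_scale()   # both iterates end near the root (0, 0)
```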
2501.07331
Efficient Event-based Delay Learning in Spiking Neural Networks
Spiking Neural Networks (SNNs) are attracting increased attention as a more energy-efficient alternative to traditional Artificial Neural Networks. Spiking neurons are stateful and intrinsically recurrent, making them well-suited for spatio-temporal tasks. However, this intrinsic memory is limited by synaptic and membrane time constants. Delays are a powerful additional mechanism. In this paper, we propose a novel event-based training method for SNNs with delays, grounded in the EventProp formalism and enabling the calculation of exact gradients with respect to weights and delays. Our method supports multiple spikes per neuron and, to the best of our knowledge, is the first delay learning algorithm to be applied to recurrent SNNs. We evaluate our method on a simple sequence detection task, and the Yin-Yang, Spiking Heidelberg Digits and Spiking Speech Commands datasets, demonstrating that our algorithm can optimize delays from suboptimal initial conditions and enhance classification accuracy compared to architectures without delays. Finally, we show that our approach uses less than half the memory of the current state-of-the-art delay-learning method and is up to 26x faster.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
524,361
2411.06291
TinyML NLP Approach for Semantic Wireless Sentiment Classification
Natural Language Processing (NLP) operations, such as semantic sentiment analysis and text synthesis, may often impair users' privacy and demand significant on-device computational resources. Centralized learning (CL) on the edge offers an alternative energy-efficient approach, yet requires the collection of raw information, which affects the user's privacy. While Federated learning (FL) preserves privacy, it requires high computational energy on board tiny user devices. We introduce split learning (SL) as an energy-efficient, privacy-preserving tiny machine learning (TinyML) alternative and compare it to FL and CL in the presence of Rayleigh fading and additive noise. Our results show that SL reduces processing power and CO2 emissions while maintaining high accuracy, whereas FL offers a balanced compromise between efficiency and privacy. Hence, this study provides insights into deploying energy-efficient, privacy-preserving NLP models on edge devices.
false
false
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
507,047
2312.02213
JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization
In this study, we introduce JarviX, a sophisticated data analytics framework. JarviX is designed to employ Large Language Models (LLMs) to automatically guide and execute high-precision data analyses on tabular datasets. This framework emphasizes the significance of varying column types, capitalizing on state-of-the-art LLMs to generate concise data insight summaries, propose relevant analysis inquiries, visualize data effectively, and provide comprehensive explanations for results drawn from an extensive data analysis pipeline. Moreover, JarviX incorporates an automated machine learning (AutoML) pipeline for predictive modeling. This integration forms a comprehensive and automated optimization cycle, which proves particularly advantageous for optimizing machine configuration. The efficacy and adaptability of JarviX are substantiated through a series of practical use case studies.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
412,766
1203.6750
Adaptive Gaussian Mixture Filter Based on Statistical Linearization
Gaussian mixtures are a common density representation in nonlinear, non-Gaussian Bayesian state estimation. Selecting an appropriate number of Gaussian components, however, is difficult as one has to trade off computational complexity against estimation accuracy. In this paper, an adaptive Gaussian mixture filter based on statistical linearization is proposed. Depending on the nonlinearity of the considered estimation problem, this filter dynamically increases the number of components via splitting. For this purpose, a measure is introduced that allows for quantifying the locally induced linearization error at each Gaussian mixture component. The deviation between the nonlinear and the linearized state space model is evaluated for determining the splitting direction. The proposed approach is not restricted to a specific statistical linearization method. Simulations show the superior estimation performance compared to related approaches and common filtering algorithms.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
15,188
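The splitting step from the abstract above can be illustrated in 1-D with a moment-preserving split; the offset rule below is a generic textbook choice, not the linearization-error criterion proposed in the paper.

```python
# Minimal 1-D sketch of Gaussian component splitting (hypothetical
# parameters): replace one component N(mu, var) of weight w by two
# components that preserve the mixture's first two moments.

def split_component(w, mu, var, frac=0.5):
    d = (frac * var) ** 0.5        # offset of the two new means
    new_var = var - d * d          # reduced variance keeps the moments
    return [(w / 2, mu - d, new_var), (w / 2, mu + d, new_var)]

def mixture_moments(components):
    """Mean and variance of a mixture given (weight, mean, var) triples."""
    mean = sum(w * m for w, m, _ in components)
    second = sum(w * (v + m * m) for w, m, v in components)
    return mean, second - mean * mean

before = [(1.0, 2.0, 4.0)]
after = split_component(*before[0])
print([round(v, 6) for v in (*mixture_moments(before), *mixture_moments(after))])
# → [2.0, 4.0, 2.0, 4.0]
```

In the filter, such a split would be triggered where the local linearization error measure is largest, with the split direction chosen from the model deviation.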
2410.04408
Anti-Malicious ISAC Using Proactive Monitoring
In this paper, we investigate proactive monitoring to mitigate malicious activities in integrated sensing and communication (ISAC) systems. Our focus is on a scenario where a cell-free massive multiple-input multiple-output (CF-mMIMO) architecture is exploited by malicious actors. Malicious actors use multiple access points (APs) to illegally sense a legitimate target while communicating with users (UEs), one of which is suspected of illegal activities. In our approach, a proactive monitor overhears the suspicious UE and simultaneously sends a jamming signal to degrade the communication links between the APs and suspicious UE. Simultaneously, the monitor sends a precoded jamming signal toward the legitimate target to hinder the malicious sensing attempts. We derive closed-form expressions for the sensing signal-to-interference-noise ratio (SINR), as well as the received SINR at the UEs and overheard SINR at the monitor. The simulation results show that our anti-malicious CF-mMIMO ISAC strategy can significantly reduce the sensing performance while offering excellent monitoring performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
495,267
math/0103007
Source Coding, Large Deviations, and Approximate Pattern Matching
We present a development of parts of rate-distortion theory and pattern-matching algorithms for lossy data compression, centered around a lossy version of the Asymptotic Equipartition Property (AEP). This treatment closely parallels the corresponding development in lossless compression, a point of view that was advanced in an important paper of Wyner and Ziv in 1989. In the lossless case we review how the AEP underlies the analysis of the Lempel-Ziv algorithm by viewing it as a random code and reducing it to the idealized Shannon code. This also provides information about the redundancy of the Lempel-Ziv algorithm and about the asymptotic behavior of several relevant quantities. In the lossy case we give various versions of the statement of the generalized AEP and we outline the general methodology of its proof via large deviations. Its relationship with Barron's generalized AEP is also discussed. The lossy AEP is applied to: (i) prove strengthened versions of Shannon's source coding theorem and universal coding theorems; (ii) characterize the performance of mismatched codebooks; (iii) analyze the performance of pattern-matching algorithms for lossy compression; (iv) determine the first order asymptotics of waiting times (with distortion) between stationary processes; (v) characterize the best achievable rate of weighted codebooks as an optimal sphere-covering exponent. We then present a refinement to the lossy AEP and use it to: (i) prove second order coding theorems; (ii) characterize which sources are easier to compress; (iii) determine the second order asymptotics of waiting times; (iv) determine the precise asymptotic behavior of longest match-lengths. Extensions to random fields are also given.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
540,603
1801.00218
Game-theoretic Network Centrality: A Review
Game-theoretic centrality is a flexible and sophisticated approach to identify the most important nodes in a network. It builds upon the methods from cooperative game theory and network theory. The key idea is to treat nodes as players in a cooperative game, where the value of each coalition is determined by certain graph-theoretic properties. Using solution concepts from cooperative game theory, it is then possible to measure how responsible each node is for the worth of the network. The literature on the topic is already quite large, and is scattered among game-theoretic and computer science venues. We review the main game-theoretic network centrality measures from both bodies of literature and organize them into two categories: those that are more focused on the connectivity of nodes, and those that are more focused on the synergies achieved by nodes in groups. We present and explain each centrality, with a focus on algorithms and complexity.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
87,527
1401.7739
Stability robustness of a feedback interconnection of systems with negative imaginary frequency response
A necessary and sufficient condition, expressed simply as the DC loop gain (i.e., the loop gain at zero frequency) being less than unity, is given in this paper to guarantee the internal stability of a feedback interconnection of Linear Time-Invariant (LTI) Multiple-Input Multiple-Output (MIMO) systems with negative imaginary frequency response. Systems with negative imaginary frequency response arise, for example, when considering transfer functions from force actuators to co-located position sensors, and are commonly important in, for example, lightly damped structures. The key result presented here has similar application to the small-gain theorem, which refers to the stability of feedback interconnections of contractive gain systems, and the passivity theorem (or more precisely the positive real theorem in the LTI case), which refers to the stability of feedback interconnections of positive real systems. A complete state-space characterisation of systems with negative imaginary frequency response is also given in this paper, and an example that demonstrates the application of the key result is provided.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
30,482
2412.10644
Model-driven deep neural network for enhanced direction finding with commodity 5G gNodeB
Pervasive and high-accuracy positioning has become increasingly important as a fundamental enabler for intelligent connected devices in mobile networks. Nevertheless, current wireless networks heavily rely on pure model-driven techniques to achieve positioning functionality, often succumbing to performance deterioration due to hardware impairments in practical scenarios. Here we reformulate the direction finding or angle-of-arrival (AoA) estimation problem as an image recovery task of the spatial spectrum and propose a new model-driven deep neural network (MoD-DNN) framework. The proposed MoD-DNN scheme comprises three modules: a multi-task autoencoder-based beamformer, a coarray spectrum generation module, and a model-driven deep learning-based spatial spectrum reconstruction module. Our technique enables automatic calibration of angular-dependent phase error thereby enhancing the resilience of direction-finding precision against realistic system non-idealities. We validate the proposed scheme both using numerical simulations and field tests. The results show that the proposed MoD-DNN framework enables effective spectrum calibration and accurate AoA estimation. To the best of our knowledge, this study marks the first successful demonstration of hybrid data-and-model-driven direction finding utilizing readily available commodity 5G gNodeB.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
517,030
2412.00366
Efficient Multi-Robot Motion Planning for Manifold-Constrained Manipulators by Randomized Scheduling and Informed Path Generation
Multi-robot motion planning for high degree-of-freedom manipulators in shared, constrained, and narrow spaces is a complex problem and essential for many scenarios such as construction, surgery, and more. Traditional coupled and decoupled methods either scale poorly or lack completeness, and hybrid methods that compose paths from individual robots together require the enumeration of many paths before they can find valid composite solutions. This paper introduces Scheduling to Avoid Collisions (StAC), a hybrid approach that more effectively composes paths from individual robots by scheduling (adding random stops and coordination motion along each path) and generates paths that are more likely to be feasible by using bidirectional feedback between the scheduler and motion planner for informed sampling. StAC uses 10 to 100 times fewer paths from the low-level planner than state-of-the-art baselines on challenging problems in manipulator cases.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
512,619
2002.06288
Let Me At Least Learn What You Really Like: Dealing With Noisy Humans When Learning Preferences
Learning the preferences of a human improves the quality of the interaction with the human. The number of queries available to learn preferences may be limited, especially when interacting with a human, and so active learning is a must. One approach to active learning is to use uncertainty sampling to decide the informativeness of a query. In this paper, we propose a modification to uncertainty sampling which uses the expected output value to help speed up learning of preferences. We compare our approach with the uncertainty sampling baseline, as well as conduct an ablation study to test the validity of each component of our approach.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
164,142
2004.12169
Learning to Update Natural Language Comments Based on Code Changes
We formulate the novel task of automatically updating an existing natural language comment based on changes in the body of code it accompanies. We propose an approach that learns to correlate changes across two distinct language representations, to generate a sequence of edits that are applied to the existing comment to reflect the source code modifications. We train and evaluate our model using a dataset that we collected from commit histories of open-source software projects, with each example consisting of a concurrent update to a method and its corresponding comment. We compare our approach against multiple baselines using both automatic metrics and human evaluation. Results reflect the challenge of this task and that our model outperforms baselines with respect to making edits.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
174,152
2007.07453
Graph-Based Social Relation Reasoning
Human beings are fundamentally sociable -- we generally organize our social lives in terms of relations with other people. Understanding social relations from an image has great potential for intelligent systems such as social chatbots and personal assistants. In this paper, we propose a simpler, faster, and more accurate method named graph relational reasoning network (GR2N) for social relation recognition. Different from existing methods which process all social relations on an image independently, our method considers the paradigm of jointly inferring the relations by constructing a social relation graph. Furthermore, the proposed GR2N constructs several virtual relation graphs to explicitly grasp the strong logical constraints among different types of social relations. Experimental results illustrate that our method generates a reasonable and consistent social relation graph and improves the performance in both accuracy and efficiency.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
187,337
1703.05291
Deep Embedding Forest: Forest-based Serving with Deep Embedding Features
Deep Neural Networks (DNN) have demonstrated superior ability to extract high level embedding vectors from low level features. Despite the success, the serving time is still the bottleneck due to expensive run-time computation of multiple layers of dense matrices. GPGPU, FPGA, or ASIC-based serving systems require additional hardware that is not in the mainstream design of most commercial applications. In contrast, tree or forest-based models are widely adopted because of low serving cost, but heavily depend on carefully engineered features. This work proposes a Deep Embedding Forest model that benefits from the best of both worlds. The model consists of a number of embedding layers and a forest/tree layer. The former maps high dimensional (hundreds of thousands to millions) and heterogeneous low-level features to the lower dimensional (thousands) vectors, and the latter ensures fast serving. Built on top of a representative DNN model called Deep Crossing, and two forest/tree-based models including XGBoost and LightGBM, a two-step Deep Embedding Forest algorithm is demonstrated to achieve on-par or slightly better performance as compared with the DNN counterpart, with only a fraction of serving time on conventional hardware. After comparing with a joint optimization algorithm called partial fuzzification, also proposed in this paper, it is concluded that the two-step Deep Embedding Forest has achieved near optimal performance. Experiments based on large scale data sets (up to 1 billion samples) from a major sponsored search engine prove the efficacy of the proposed model.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
70,052
2205.08263
Contact-less Material Probing with Distributed Sensors: Joint Sensing and Communication Optimization
The utilization of RF signals to probe material properties of objects is of huge interest both in academia as well as industry. To this end, a setup is investigated, in which a transmitter equipped with a two-dimensional multi-antenna array dispatches a signal, which hits objects in the environment and the reflections from the objects are captured by distributed sensors. The received signals at those sensors are then amplified and forwarded to a multiple antenna fusion center, which performs space-time post-processing in order to optimize the information extraction. In this process, optimal design of power allocation per object alongside sensor amplifications is of crucial importance. Here, the power allocation and sensor amplifications are jointly optimized, given maximum-ratio combining (MRC) at the fusion center. We formulate this challenge as a sum-power minimization under per-object SINR constraints, a sum-power constraint at the transmitter and individual power constraints at the sensors. Moreover, the advantage of deploying zero-forcing (ZF) and minimum mean-squared error (MMSE) at the fusion center is discussed. Asymptotic analysis is also provided for the case that a large number of sensors are deployed in the sensing environment.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
296,873
2303.10103
Image comparison and scaling via nonlinear elasticity
A nonlinear elasticity model for comparing images is formulated and analyzed, in which optimal transformations between images are sought as minimizers of an integral functional. The existence of minimizers in a suitable class of homeomorphisms between image domains is established under natural hypotheses. We investigate whether for linearly related images the minimization algorithm delivers the linear transformation as the unique minimizer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
352,307
2402.09654
GPT-4's assessment of its performance in a USMLE-based case study
This study investigates GPT-4's assessment of its performance in healthcare applications. A simple prompting technique was used to prompt the LLM with questions taken from the United States Medical Licensing Examination (USMLE) questionnaire and it was tasked to evaluate its confidence score before posing the question and after asking the question. The questionnaire was categorized into two groups: questions with feedback (WF) and questions with no feedback (NF) post-question. The model was asked to provide absolute and relative confidence scores before and after each question. The experimental findings were analyzed using statistical tools to study the variability of confidence in WF and NF groups. Additionally, a sequential analysis was conducted to observe the performance variation for the WF and NF groups. Results indicate that feedback influences relative confidence but doesn't consistently increase or decrease it. Understanding the performance of LLMs is paramount in exploring their utility in sensitive areas like healthcare. This study contributes to the ongoing discourse on the reliability of AI, particularly of LLMs like GPT-4, within healthcare, offering insights into how feedback mechanisms might be optimized to enhance AI-assisted medical education and decision support.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
true
false
false
false
429,615
1705.06510
Entropic selection of concepts unveils hidden topics in documents corpora
The organization and evolution of science has recently become itself an object of scientific quantitative investigation, thanks to the wealth of information that can be extracted from scientific documents, such as citations between papers and co-authorship between researchers. However, only few studies have focused on the concepts that characterize full documents and that can be extracted and analyzed, revealing the deeper organization of scientific knowledge. Unfortunately, several concepts can be so common across documents that they hinder the emergence of the underlying topical structure of the document corpus, because they give rise to a large amount of spurious and trivial relations among documents. To identify and remove common concepts, we introduce a method to gauge their relevance according to an objective information-theoretic measure related to the statistics of their occurrence across the document corpus. After progressively removing concepts that, according to this metric, can be considered as generic, we find that the topic organization displays a correspondingly more refined structure.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
true
73,644
2110.14031
Surrogate Regret Bounds for Polyhedral Losses
Surrogate risk minimization is a ubiquitous paradigm in supervised machine learning, wherein a target problem is solved by minimizing a surrogate loss on a dataset. Surrogate regret bounds, also called excess risk bounds, are a common tool to prove generalization rates for surrogate risk minimization. While surrogate regret bounds have been developed for certain classes of loss functions, such as proper losses, general results are relatively sparse. We provide two general results. The first gives a linear surrogate regret bound for any polyhedral (piecewise-linear and convex) surrogate, meaning that surrogate generalization rates translate directly to target rates. The second shows that for sufficiently non-polyhedral surrogates, the regret bound is a square root, meaning fast surrogate generalization rates translate to slow rates for the target. Together, these results suggest polyhedral surrogates are optimal in many cases.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,385
2105.13017
Minimax Optimal Fixed-Budget Best Arm Identification in Linear Bandits
We study the problem of best arm identification in linear bandits in the fixed-budget setting. By leveraging properties of the G-optimal design and incorporating it into the arm allocation rule, we design a parameter-free algorithm, Optimal Design-based Linear Best Arm Identification (OD-LinBAI). We provide a theoretical analysis of the failure probability of OD-LinBAI. Instead of all the optimality gaps, the performance of OD-LinBAI depends only on the gaps of the top $d$ arms, where $d$ is the effective dimension of the linear bandit instance. Complementarily, we present a minimax lower bound for this problem. The upper and lower bounds show that OD-LinBAI is minimax optimal up to constant multiplicative factors in the exponent, which is a significant theoretical improvement over existing methods (e.g., BayesGap, Peace, LinearExploration and GSE), and settles the question of ascertaining the difficulty of learning the best arm in the fixed-budget setting. Finally, numerical experiments demonstrate considerable empirical improvements over existing algorithms on a variety of real and synthetic datasets.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
237,179
2012.05489
AI Driven Knowledge Extraction from Clinical Practice Guidelines: Turning Research into Practice
Background and Objectives: Clinical Practice Guidelines (CPGs) represent the foremost methodology for sharing state-of-the-art research findings in the healthcare domain with medical practitioners to limit practice variations, reduce clinical cost, improve the quality of care, and provide evidence-based treatment. However, extracting relevant knowledge from the plethora of CPGs is not feasible for already burdened healthcare professionals, leading to large gaps between clinical findings and real practices. It is therefore imperative that state-of-the-art computing research, especially machine learning, is used to provide an artificial intelligence-based solution for extracting the knowledge from CPGs and reducing the gap between healthcare research/guidelines and practice. Methods: This research presents a novel methodology for knowledge extraction from CPGs to reduce the gap and turn the latest research findings into clinical practice. First, our system classifies the CPG sentences into four classes, namely condition-action, condition-consequences, action, and not-applicable, based on the information presented in a sentence. We use deep learning with state-of-the-art word embeddings, an improved word vectors technique, in the classification process. Second, it identifies qualifier terms in the classified sentences, which assist in recognizing the condition and action phrases in a sentence. Finally, the condition and action phrases are processed and transformed into a plain-rule 'If Condition(s) Then Action' format. Results: We evaluate the methodology on guidelines from three different domains, including Hypertension, Rhinosinusitis, and Asthma. The deep learning model classifies the CPG sentences with an accuracy of 95%. Rule extraction was validated by a user-centric approach, which achieved Jaccard coefficients of 0.6, 0.7, and 0.4 against the rules extracted by three human experts, respectively.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
210,801
1812.06309
Extending classical surrogate modelling to high-dimensions through supervised dimensionality reduction: a data-driven approach
Thanks to their versatility, ease of deployment and high-performance, surrogate models have become staple tools in the arsenal of uncertainty quantification (UQ). From local interpolants to global spectral decompositions, surrogates are characterised by their ability to efficiently emulate complex computational models based on a small set of model runs used for training. An inherent limitation of many surrogate models is their susceptibility to the curse of dimensionality, which traditionally limits their applicability to a maximum of $\mathcal{O}(10^2)$ input dimensions. We present a novel approach at high-dimensional surrogate modelling that is model-, dimensionality reduction- and surrogate model- agnostic (black box), and can enable the solution of high dimensional (i.e. up to $\mathcal{O}(10^4)$) problems. After introducing the general algorithm, we demonstrate its performance by combining Kriging and polynomial chaos expansions surrogates and kernel principal component analysis. In particular, we compare the generalisation performance that the resulting surrogates achieve to the classical sequential application of dimensionality reduction followed by surrogate modelling on several benchmark applications, comprising an analytical function and two engineering applications of increasing dimensionality and complexity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
116,586
2405.01205
Error-Driven Uncertainty Aware Training
Neural networks are often overconfident about their predictions, which undermines their reliability and trustworthiness. In this work, we present a novel technique, named Error-Driven Uncertainty Aware Training (EUAT), which aims to enhance the ability of neural classifiers to estimate their uncertainty correctly, namely to be highly uncertain when they output inaccurate predictions and low uncertain when their output is accurate. The EUAT approach operates during the model's training phase by selectively employing two loss functions depending on whether the training examples are correctly or incorrectly predicted by the model. This allows for pursuing the twofold goal of i) minimizing model uncertainty for correctly predicted inputs and ii) maximizing uncertainty for mispredicted inputs, while preserving the model's misprediction rate. We evaluate EUAT using diverse neural models and datasets in the image recognition domains considering both non-adversarial and adversarial settings. The results show that EUAT outperforms existing approaches for uncertainty estimation (including other uncertainty-aware training techniques, calibration, ensembles, and DEUP) by providing uncertainty estimates that not only have higher quality when evaluated via statistical metrics (e.g., correlation with residuals) but also when employed to build binary classifiers that decide whether the model's output can be trusted or not and under distributional data shifts.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
451,273
1202.0876
A Coding Theoretic Approach for Evaluating Accumulate Distribution on Minimum Cut Capacity of Weighted Random Graphs
The multicast capacity of a directed network is closely related to the $s$-$t$ maximum flow, which is equal to the $s$-$t$ minimum cut capacity due to the max-flow min-cut theorem. If the topology of a network (or its link capacities) is dynamically changing or has a stochastic nature, it is not trivial to predict statistical properties of the maximum flow. In this paper, we present a coding theoretic approach for evaluating the accumulate distribution of the minimum cut capacity of weighted random graphs. The main feature of our approach is to utilize the correspondence between the cut space of a graph and a binary LDGM (low-density generator-matrix) code with column weight 2. The graph ensemble treated in the paper is a weighted version of the Erd\H{o}s-R\'{e}nyi random graph ensemble. The main contribution of our work is a combinatorial lower bound for the accumulate distribution of the minimum cut capacity. From some computer experiments, it is observed that the lower bound derived here reflects the actual statistical behavior of the minimum cut capacity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
14,138
2002.06460
HighRes-net: Recursive Fusion for Multi-Frame Super-Resolution of Satellite Imagery
Generative deep learning has sparked a new wave of Super-Resolution (SR) algorithms that enhance single images with impressive aesthetic results, albeit with imaginary details. Multi-frame Super-Resolution (MFSR) offers a more grounded approach to the ill-posed problem, by conditioning on multiple low-resolution views. This is important for satellite monitoring of human impact on the planet -- from deforestation, to human rights violations -- that depend on reliable imagery. To this end, we present HighRes-net, the first deep learning approach to MFSR that learns its sub-tasks in an end-to-end fashion: (i) co-registration, (ii) fusion, (iii) up-sampling, and (iv) registration-at-the-loss. Co-registration of low-resolution views is learned implicitly through a reference-frame channel, with no explicit registration mechanism. We learn a global fusion operator that is applied recursively on an arbitrary number of low-resolution pairs. We introduce a registered loss, by learning to align the SR output to a ground-truth through ShiftNet. We show that by learning deep representations of multiple views, we can super-resolve low-resolution signals and enhance Earth Observation data at scale. Our approach recently topped the European Space Agency's MFSR competition on real-world satellite imagery.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
164,193
2305.02782
A Momentum-Incorporated Non-Negative Latent Factorization of Tensors Model for Dynamic Network Representation
A large-scale dynamic network (LDN) is a source of data in many big data-related applications due to their large number of entities and large-scale dynamic interactions. They can be modeled as a high-dimensional incomplete (HDI) tensor that contains a wealth of knowledge about time patterns. A latent factorization of tensors (LFT) model efficiently extracts this time pattern, which can be established using stochastic gradient descent (SGD) solvers. However, LFT models based on SGD are often limited by training schemes and have poor tail convergence. To solve this problem, this paper proposes a novel nonlinear LFT model (MNNL) based on momentum-incorporated SGD, which extracts non-negative latent factors from HDI tensors to make training unconstrained and compatible with general training schemes, while improving convergence accuracy and speed. Empirical studies on two LDN datasets show that compared to existing models, the MNNL model has higher prediction accuracy and convergence speed.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
362,172
1306.3682
Frequency Domain Design of Fractional Order PID Controller for AVR System Using Chaotic Multi-objective Optimization
A fractional order (FO) PID or FOPID controller is designed for an Automatic Voltage Regulator (AVR) system with the consideration of contradictory performance objectives. An improved evolutionary Non-dominated Sorting Genetic Algorithm (NSGA-II), augmented with a chaotic Henon map is used for the multi-objective optimization based design procedure. The Henon map as the random number generator outperforms the original NSGA-II algorithm and its Logistic map assisted version for obtaining a better design trade-off with an FOPID controller. The Pareto fronts showing the trade-offs between the different design objectives have also been shown for both the FOPID controller and the conventional PID controller to enunciate the relative merits and demerits of each. The design is done in frequency domain and hence stability and robustness of the design is automatically guaranteed unlike the other time domain optimization based controller design methods.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
25,231
1702.07409
Founsure 1.0: An Erasure Code Library with Efficient Repair and Update Features
Founsure is an open-source software library that implements a multi-dimensional graph-based erasure coding entirely based on fast exclusive OR (XOR) logic. Its implementation utilizes compiler optimizations and multi-threading to generate the right assembly code for the given multi-core CPU architecture with vector processing capabilities. Founsure possesses important features that shall find various applications in modern data storage, communication, and networked computer systems, in which the data needs protection against device, hardware, and node failures. As data size reached unprecedented levels, these systems have become hungry for network bandwidth, computational resources, and average consumed power. To address that, the proposed library provides a three-dimensional design space that trades off the computational complexity, coding overhead, and data/node repair bandwidth to meet different requirements of modern distributed data storage and processing systems. Founsure library enables efficient encoding, decoding, repairs/rebuilds, and updates while all the required data storage and computations are distributed across the network nodes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,777
2310.01297
Co-audit: tools to help humans double-check AI-generated content
Users are increasingly being warned to check AI-generated content for correctness. Still, as LLMs (and other generative models) generate more complex output, such as summaries, tables, or code, it becomes harder for the user to audit or evaluate the output for quality or correctness. Hence, we are seeing the emergence of tool-assisted experiences to help the user double-check a piece of AI-generated content. We refer to these as co-audit tools. Co-audit tools complement prompt engineering techniques: one helps the user construct the input prompt, while the other helps them check the output response. As a specific example, this paper describes recent research on co-audit tools for spreadsheet computations powered by generative models. We explain why co-audit experiences are essential for any application of generative AI where quality is important and errors are consequential (as is common in spreadsheet computations). We propose a preliminary list of principles for co-audit, and outline research challenges.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
396,358
2205.15555
Graph-level Neural Networks: Current Progress and Future Directions
Graph-structured data consisting of objects (i.e., nodes) and relationships among objects (i.e., edges) are ubiquitous. Graph-level learning is a matter of studying a collection of graphs instead of a single graph. Traditional graph-level learning methods used to be the mainstream. However, with the increasing scale and complexity of graphs, Graph-level Neural Networks (GLNNs, deep learning-based graph-level learning methods) have been attractive due to their superiority in modeling high-dimensional data. Thus, a survey on GLNNs is necessary. To frame this survey, we propose a systematic taxonomy covering GLNNs upon deep neural networks, graph neural networks, and graph pooling. The representative and state-of-the-art models in each category are focused on this survey. We also investigate the reproducibility, benchmarks, and new graph datasets of GLNNs. Finally, we conclude future directions to further push forward GLNNs. The repository of this survey is available at https://github.com/GeZhangMQ/Awesome-Graph-level-Neural-Networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
299,775
2311.07075
GazeForensics: DeepFake Detection via Gaze-guided Spatial Inconsistency Learning
DeepFake detection is pivotal in personal privacy and public safety. With the iterative advancement of DeepFake techniques, high-quality forged videos and images are becoming increasingly deceptive. Prior research has seen numerous attempts by scholars to incorporate biometric features into the field of DeepFake detection. However, traditional biometric-based approaches tend to segregate biometric features from general ones and freeze the biometric feature extractor. These approaches resulted in the exclusion of valuable general features, potentially leading to a performance decline and, consequently, a failure to fully exploit the potential of biometric information in assisting DeepFake detection. Moreover, insufficient attention has been dedicated to scrutinizing gaze authenticity within the realm of DeepFake detection in recent years. In this paper, we introduce GazeForensics, an innovative DeepFake detection method that utilizes gaze representation obtained from a 3D gaze estimation model to regularize the corresponding representation within our DeepFake detection model, while concurrently integrating general features to further enhance the performance of our model. Experiment results reveal that our proposed GazeForensics outperforms the current state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
407,186
2004.04295
Severing the Edge Between Before and After: Neural Architectures for Temporal Ordering of Events
In this paper, we propose a neural architecture and a set of training methods for ordering events by predicting temporal relations. Our proposed models receive a pair of events within a span of text as input and identify temporal relations (Before, After, Equal, Vague) between them. Given that a key challenge with this task is the scarcity of annotated data, our models rely on pretrained representations (i.e., RoBERTa, BERT, or ELMo), transfer and multi-task learning (by leveraging complementary datasets), and self-training techniques. Experiments on the MATRES dataset of English documents establish a new state-of-the-art on this task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
171,834
2203.13843
Quantifying Demonstration Quality for Robot Learning and Generalization
Learning from Demonstration (LfD) seeks to democratize robotics by enabling diverse end-users to teach robots to perform a task by providing demonstrations. However, most LfD techniques assume users provide optimal demonstrations. This is not always the case in real applications where users are likely to provide demonstrations of varying quality, that may change with expertise and other factors. Demonstration quality plays a crucial role in robot learning and generalization. Hence, it is important to quantify the quality of the provided demonstrations before using them for robot learning. In this paper, we propose quantifying the quality of the demonstrations based on how well they perform in the learned task. We hypothesize that task performance can give an indication of the generalization performance on similar tasks. The proposed approach is validated in a user study (N = 27). Users with different robotics expertise levels were recruited to teach a PR2 robot a generic task (pressing a button) under different task constraints. They taught the robot in two sessions on two different days to capture their teaching behaviour across sessions. The task performance was utilized to classify the provided demonstrations into high-quality and low-quality sets. The results show a significant Pearson correlation coefficient (R = 0.85, p < 0.0001) between the task performance and generalization performance across all participants. We also found that users clustered into two groups: Users who provided high-quality demonstrations from the first session, assigned to the fast-adapters group, and users who provided low-quality demonstrations in the first session and then improved with practice, assigned to the slow-adapters group. These results highlight the importance of quantifying demonstration quality, which can be indicative of the adaptation level of the user to the task.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
287,772
2305.18921
Large Car-following Data Based on Lyft level-5 Open Dataset: Following Autonomous Vehicles vs. Human-driven Vehicles
Car-Following (CF), as a fundamental driving behaviour, has significant influences on the safety and efficiency of traffic flow. Investigating how human drivers react differently when following autonomous vs. human-driven vehicles (HV) is thus critical for mixed traffic flow. Research in this field can be expedited with trajectory datasets collected by Autonomous Vehicles (AVs). However, trajectories collected by AVs are noisy and not readily applicable for studying CF behaviour. This paper extracts and enhances two categories of CF data, HV-following-AV (H-A) and HV-following-HV (H-H), from the open Lyft level-5 dataset. First, CF pairs are selected based on specific rules. Next, the quality of raw data is assessed by anomaly analysis. Then, the raw CF data is corrected and enhanced via motion planning, Kalman filtering, and wavelet denoising. As a result, 29k+ H-A and 42k+ H-H car-following segments are obtained, with a total driving distance of 150k+ km. A diversity assessment shows that the processed data cover complete CF regimes for calibrating CF models. This open and ready-to-use dataset provides the opportunity to investigate the CF behaviours of following AVs vs. HVs from real-world data. It can further facilitate studies on exploring the impact of AVs on mixed urban traffic.
true
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
369,298
2312.15571
A Survey on Open-Set Image Recognition
Open-set image recognition (OSR) aims to both classify known-class samples and identify unknown-class samples in the testing set, which supports robust classifiers in many realistic applications, such as autonomous driving, medical diagnosis, security monitoring, etc. In recent years, open-set recognition methods have attracted more and more attention, since it is usually difficult to obtain holistic information about the open world for model training. In this paper, we aim to summarize the up-to-date development of recent OSR methods, considering their rapid development over the past two or three years. Specifically, we first introduce a new taxonomy, under which we comprehensively review the existing DNN-based OSR methods. Then, we compare the performances of some typical and state-of-the-art OSR methods on both coarse-grained and fine-grained datasets under both the standard-dataset setting and the cross-dataset setting, and further provide an analysis of the comparison. Finally, we discuss some open issues and possible future directions in this community.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
418,051
2304.07022
Label Dependencies-aware Set Prediction Networks for Multi-label Text Classification
Multi-label text classification involves extracting all relevant labels from a sentence. Given the unordered nature of these labels, we propose approaching the problem as a set prediction task. To address the correlation between labels, we leverage Graph Convolutional Networks and construct an adjacency matrix based on the statistical relations between labels. Additionally, we enhance recall ability by applying the Bhattacharyya distance to the output distributions of the set prediction networks. We evaluate the effectiveness of our approach on two multi-label datasets and demonstrate its superiority over previous baselines through experimental results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
358,196
2202.12808
High-Dimensional Sparse Bayesian Learning without Covariance Matrices
Sparse Bayesian learning (SBL) is a powerful framework for tackling the sparse coding problem. However, the most popular inference algorithms for SBL become too expensive for high-dimensional settings, due to the need to store and compute a large covariance matrix. We introduce a new inference scheme that avoids explicit construction of the covariance matrix by solving multiple linear systems in parallel to obtain the posterior moments for SBL. Our approach couples a little-known diagonal estimation result from numerical linear algebra with the conjugate gradient algorithm. On several simulations, our method scales better than existing approaches in computation time and memory, especially for structured dictionaries capable of fast matrix-vector multiplication.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
282,367
2210.11680
Twin Contrastive Learning for Online Clustering
This paper proposes to perform online clustering by conducting twin contrastive learning (TCL) at the instance and cluster level. Specifically, we find that when the data is projected into a feature space with a dimensionality of the target cluster number, the rows and columns of its feature matrix correspond to the instance and cluster representation, respectively. Based on the observation, for a given dataset, the proposed TCL first constructs positive and negative pairs through data augmentations. Thereafter, in the row and column space of the feature matrix, instance- and cluster-level contrastive learning are respectively conducted by pulling together positive pairs while pushing apart the negatives. To alleviate the influence of intrinsic false-negative pairs and rectify cluster assignments, we adopt a confidence-based criterion to select pseudo-labels for boosting both the instance- and cluster-level contrastive learning. As a result, the clustering performance is further improved. Besides the elegant idea of twin contrastive learning, another advantage of TCL is that it could independently predict the cluster assignment for each instance, thus effortlessly fitting online scenarios. Extensive experiments on six widely-used image and text benchmarks demonstrate the effectiveness of TCL. The code will be released on GitHub.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
325,406
1805.03482
Full 3D Reconstruction of Transparent Objects
Numerous techniques have been proposed for reconstructing 3D models for opaque objects in past decades. However, none of them can be directly applied to transparent objects. This paper presents a fully automatic approach for reconstructing complete 3D shapes of transparent objects. Through positioning an object on a turntable, its silhouettes and light refraction paths under different viewing directions are captured. Then, starting from an initial rough model generated from space carving, our algorithm progressively optimizes the model under three constraints: surface and refraction normal consistency, surface projection and silhouette consistency, and surface smoothness. Experimental results on both synthetic and real objects demonstrate that our method can successfully recover the complex shapes of transparent objects and faithfully reproduce their light refraction properties.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
97,059
1912.00998
A Multigrid Method for Efficiently Training Video Models
Training competitive deep video models is an order of magnitude slower than training their counterpart image models. Slow training causes long research cycles, which hinders progress in video understanding research. Following standard practice for training image models, video model training assumes a fixed mini-batch shape: a specific number of clips, frames, and spatial size. However, what is the optimal shape? High resolution models perform well, but train slowly. Low resolution models train faster, but they are inaccurate. Inspired by multigrid methods in numerical optimization, we propose to use variable mini-batch shapes with different spatial-temporal resolutions that are varied according to a schedule. The different shapes arise from resampling the training data on multiple sampling grids. Training is accelerated by scaling up the mini-batch size and learning rate when shrinking the other dimensions. We empirically demonstrate a general and robust grid schedule that yields a significant out-of-the-box training speedup without a loss in accuracy for different models (I3D, non-local, SlowFast), datasets (Kinetics, Something-Something, Charades), and training settings (with and without pre-training, 128 GPUs or 1 GPU). As an illustrative example, the proposed multigrid method trains a ResNet-50 SlowFast network 4.5x faster (wall-clock time, same hardware) while also improving accuracy (+0.8% absolute) on Kinetics-400 compared to the baseline training method. Code is available online.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
155,955
2010.15423
Tilde at WMT 2020: News Task Systems
This paper describes Tilde's submission to the WMT2020 shared task on news translation for both directions of the English-Polish language pair in both the constrained and the unconstrained tracks. We follow our submissions from the previous years and build our baseline systems to be morphologically motivated sub-word unit-based Transformer base models that we train using the Marian machine translation toolkit. Additionally, we experiment with different parallel and monolingual data selection schemes, as well as sampled back-translation. Our final models are ensembles of Transformer base and Transformer big models that feature right-to-left re-ranking.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
203,766
1808.05475
Strong Coordination over Noisy Channels
We study the problem of strong coordination of the actions of two nodes $X$ and $Y$ that communicate over a discrete memoryless channel (DMC) such that the actions follow a prescribed joint probability distribution. We propose two novel random coding schemes and a polar coding scheme for this noisy strong coordination problem, and derive inner bounds for the respective strong coordination capacity region. The first scheme is a joint coordination-channel coding scheme that utilizes the randomness provided by the DMC to reduce the amount of local randomness required to generate the sequence of actions at Node $Y$. Based on this random coding scheme, we provide a characterization of the capacity region for two special cases of the noisy strong coordination setup, namely, when the actions at Node $Y$ are determined by Node $X$ and when the DMC is a deterministic channel. The second scheme exploits separate coordination and channel coding where local randomness is extracted from the channel after decoding. The third scheme is a joint coordination-channel polar coding scheme for strong coordination. We show that polar codes are able to achieve the established inner bound to the noisy strong coordination capacity region and thus provide a constructive alternative to a random coding proof. Our polar coding scheme also offers a constructive solution to a channel simulation problem where a DMC and shared randomness are employed together to simulate another DMC. Finally, by leveraging the random coding results for this problem, we present an example in which the proposed joint scheme is able to strictly outperform the separate scheme in terms of achievable communication rate for the same amount of injected randomness into both systems. Thus, we establish the sub-optimality of the separation of strong coordination and channel coding with respect to the communication rate over the DMC.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
105,355
1010.4603
Write Channel Model for Bit-Patterned Media Recording
We propose a new write channel model for bit-patterned media recording that reflects the data dependence of write synchronization errors. It is shown that this model accommodates both substitution-like errors and insertion-deletion errors whose statistics are determined by an underlying channel state process. We study information theoretic properties of the write channel model, including the capacity, symmetric information rate, Markov-1 rate and the zero-error capacity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
7,984
0712.2223
Entanglement-Assisted Quantum Convolutional Coding
We show how to protect a stream of quantum information from decoherence induced by a noisy quantum communication channel. We exploit preshared entanglement and a convolutional coding structure to develop a theory of entanglement-assisted quantum convolutional coding. Our construction produces a Calderbank-Shor-Steane (CSS) entanglement-assisted quantum convolutional code from two arbitrary classical binary convolutional codes. The rate and error-correcting properties of the classical convolutional codes directly determine the corresponding properties of the resulting entanglement-assisted quantum convolutional code. We explain how to encode our CSS entanglement-assisted quantum convolutional codes starting from a stream of information qubits, ancilla qubits, and shared entangled bits.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,027
2203.14126
Robust No-Regret Learning in Min-Max Stackelberg Games
The behavior of no-regret learning algorithms is well understood in two-player min-max (i.e., zero-sum) games. In this paper, we investigate the behavior of no-regret learning in min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., min-max Stackelberg, games. We consider two settings, one in which only the first player chooses their actions using a no-regret algorithm while the second player best responds, and one in which both players use no-regret algorithms. For the former case, we show that no-regret dynamics converge to a Stackelberg equilibrium. For the latter case, we introduce a new type of regret, which we call Lagrangian regret, and show that if both players minimize their Lagrangian regrets, then play converges to a Stackelberg equilibrium. We then observe that online mirror descent (OMD) dynamics in these two settings correspond respectively to a known nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new simultaneous GDA-like algorithm, thereby establishing convergence of these algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of OMD dynamics to perturbations by investigating online min-max Stackelberg games. We prove that OMD dynamics are robust for a large class of online min-max games with independent strategy sets. In the dependent case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in online Fisher markets, a canonical example of a min-max Stackelberg game with dependent strategy sets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
287,888
2501.10499
Learning More With Less: Sample Efficient Dynamics Learning and Model-Based RL for Loco-Manipulation
Combining the agility of legged locomotion with the capabilities of manipulation, loco-manipulation platforms have the potential to perform complex tasks in real-world applications. To this end, state-of-the-art quadrupeds with attached manipulators, such as the Boston Dynamics Spot, have emerged to provide a capable and robust platform. However, both the complexity of loco-manipulation control, as well as the black-box nature of commercial platforms pose challenges for developing accurate dynamics models and control policies. We address these challenges by developing a hand-crafted kinematic model for a quadruped-with-arm platform and, together with recent advances in Bayesian Neural Network (BNN)-based dynamics learning using physical priors, efficiently learn an accurate dynamics model from data. We then derive control policies for loco-manipulation via model-based reinforcement learning (RL). We demonstrate the effectiveness of this approach on hardware using the Boston Dynamics Spot with a manipulator, accurately performing dynamic end-effector trajectory tracking even in low data regimes.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
525,552
2203.06457
3D-GIF: 3D-Controllable Object Generation via Implicit Factorized Representations
While NeRF-based 3D-aware image generation methods enable viewpoint control, limitations remain that hinder their adoption in various 3D applications. Due to their view-dependent and light-entangled volume representation, the 3D geometry is of unrealistic quality and the color must be re-rendered for every desired viewpoint. To broaden the 3D applicability from 3D-aware image generation to 3D-controllable object generation, we propose factorized representations which are view-independent and light-disentangled, and training schemes with randomly sampled light conditions. We demonstrate the superiority of our method by visualizing factorized representations, re-lighted images, and albedo-textured meshes. In addition, we show that our approach improves the quality of the generated geometry via visualization and quantitative comparison. To the best of our knowledge, this is the first work that extracts albedo-textured meshes with unposed 2D images without any additional labels or assumptions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
285,122
1508.02556
Assessment of LTE Wireless Access for Monitoring of Energy Distribution in the Smart Grid
While LTE is becoming widely rolled out for human-type services, it is also a promising solution for cost-efficient connectivity of the smart grid monitoring equipment. This is a type of machine-to-machine (M2M) traffic that consists mainly of sporadic uplink transmissions. In such a setting, the amount of traffic that can be served in a cell is not constrained by the data capacity, but rather by the signaling constraints in the random access channel and control channel. In this paper we explore these limitations using a detailed simulation of the LTE access reservation protocol (ARP). We find that 1) assigning more random access opportunities may actually worsen performance; and 2) the additional signaling that follows the ARP has very large impact on the capacity in terms of the number of supported devices; we observed a reduction in the capacity by almost a factor of 3. This suggests that a lightweight access method, with a reduced number of signaling messages, needs to be considered in standardization for M2M applications. Additionally we propose a tractable analytical model to calculate the outage that can be rapidly implemented and evaluated. The model accounts for the features of the random access, control channel and uplink and downlink data channels, as well as retransmissions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
45,916
0704.1818
Low-density graph codes that are optimal for source/channel coding and binning
We describe and analyze the joint source/channel coding properties of a class of sparse graphical codes based on compounding a low-density generator matrix (LDGM) code with a low-density parity check (LDPC) code. Our first pair of theorems establish that there exist codes from this ensemble, with all degrees remaining bounded independently of block length, that are simultaneously optimal as both source and channel codes when encoding and decoding are performed optimally. More precisely, in the context of lossy compression, we prove that finite degree constructions can achieve any pair $(R, D)$ on the rate-distortion curve of the binary symmetric source. In the context of channel coding, we prove that finite degree codes can achieve any pair $(C, p)$ on the capacity-noise curve of the binary symmetric channel. Next, we show that our compound construction has a nested structure that can be exploited to achieve the Wyner-Ziv bound for source coding with side information (SCSI), as well as the Gelfand-Pinsker bound for channel coding with side information (CCSI). Although the current results are based on optimal encoding and decoding, the proposed graphical codes have sparse structure and high girth that renders them well-suited to message-passing and other efficient decoding procedures.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
45
1611.01578
Neural Architecture Search with Reinforcement Learning
Neural networks are powerful and flexible models that work well for many difficult learning tasks in image, speech and natural language understanding. Despite their success, neural networks are still hard to design. In this paper, we use a recurrent network to generate the model descriptions of neural networks and train this RNN with reinforcement learning to maximize the expected accuracy of the generated architectures on a validation set. On the CIFAR-10 dataset, our method, starting from scratch, can design a novel network architecture that rivals the best human-invented architecture in terms of test set accuracy. Our CIFAR-10 model achieves a test error rate of 3.65, which is 0.09 percent better and 1.05x faster than the previous state-of-the-art model that used a similar architectural scheme. On the Penn Treebank dataset, our model can compose a novel recurrent cell that outperforms the widely-used LSTM cell, and other state-of-the-art baselines. Our cell achieves a test set perplexity of 62.4 on the Penn Treebank, which is 3.6 perplexity better than the previous state-of-the-art model. The cell can also be transferred to the character language modeling task on PTB and achieves a state-of-the-art perplexity of 1.214.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
63,393
1711.03576
Performance of Source transmit Antenna selection for MIMO cooperative communication System Based DF protocol: Symbol Error Rate and Diversity order
In this work, we study the performance of a single-relay Multiple Input Multiple Output (MIMO) cooperative communication system based on the Decode and Forward (DF) relaying protocol for two strategies using transmit antenna selection at the source. The first strategy uses one antenna between the relay and the destination, and the second strategy uses Space Time Block Coding (STBC). All channels follow the Rayleigh fading distribution. We derive the expression and an upper bound for the Symbol Error Rate (SER) for M-ary Phase Shift Keying (M-PSK), and the diversity order for both strategies. The analytical results show that the second strategy performs better than the first one for the same diversity order and the same rate R.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
84,237
2404.14678
3DBench: A Scalable 3D Benchmark and Instruction-Tuning Dataset
Evaluating the performance of Multi-modal Large Language Models (MLLMs), integrating both point cloud and language, presents significant challenges. The lack of a comprehensive assessment hampers determining whether these models truly represent advancements, thereby impeding further progress in the field. Current evaluations heavily rely on classification and caption tasks, falling short in providing a thorough assessment of MLLMs. A pressing need exists for a more sophisticated evaluation method capable of thoroughly analyzing the spatial understanding and expressive capabilities of these models. To address these issues, we introduce a scalable 3D benchmark, accompanied by a large-scale instruction-tuning dataset known as 3DBench, providing an extensible platform for a comprehensive evaluation of MLLMs. Specifically, we establish the benchmark that spans a wide range of spatial and semantic scales, from object-level to scene-level, addressing both perception and planning tasks. Furthermore, we present a rigorous pipeline for automatically constructing scalable 3D instruction-tuning datasets, covering 10 diverse multi-modal tasks with more than 0.23 million QA pairs generated in total. Thorough experiments evaluating trending MLLMs, comparisons against existing datasets, and variations of training protocols demonstrate the superiority of 3DBench, offering valuable insights into current limitations and potential research directions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
448,767
2309.10149
Analysis of the Memorization and Generalization Capabilities of AI Agents: Are Continual Learners Robust?
In continual learning (CL), an AI agent (e.g., autonomous vehicles or robotics) learns from non-stationary data streams under dynamic environments. For the practical deployment of such applications, it is important to guarantee robustness to unseen environments while maintaining past experiences. In this paper, a novel CL framework is proposed to achieve robust generalization to dynamic environments while retaining past knowledge. The considered CL agent uses a capacity-limited memory to save previously observed environmental information to mitigate forgetting issues. Then, data points are sampled from the memory to estimate the distribution of risks over environmental change so as to obtain predictors that are robust to unseen changes. The generalization and memorization performance of the proposed framework are theoretically analyzed. This analysis showcases the tradeoff between memorization and generalization with the memory size. Experiments show that the proposed algorithm outperforms memory-based CL baselines across all environments while significantly improving the generalization performance on unseen target environments.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
392,876
1307.2342
Model Selection with Low Complexity Priors
Regularization plays a pivotal role when facing the challenge of solving ill-posed inverse problems, where the number of observations is smaller than the ambient dimension of the object to be estimated. A line of recent work has studied regularization models with various types of low-dimensional structures. In such settings, the general approach is to solve a regularized optimization problem, which combines a data fidelity term and some regularization penalty that promotes the assumed low-dimensional/simple structure. This paper provides a general framework to capture this low-dimensional structure through what we coin partly smooth functions relative to a linear manifold. These are convex, non-negative, closed and finite-valued functions that will promote objects living on low-dimensional subspaces. This class of regularizers encompasses many popular examples such as the L1 norm, L1-L2 norm (group sparsity), as well as several others including the Linfty norm. We also show that the set of partly smooth functions relative to a linear manifold is closed under addition and pre-composition by a linear operator, which allows us to cover mixed regularization, and the so-called analysis-type priors (e.g. total variation, fused Lasso, finite-valued polyhedral gauges). Our main result presents a unified sharp analysis of exact and robust recovery of the low-dimensional subspace model associated to the object to recover from partial measurements. This analysis is illustrated on a number of special and previously studied cases, and on an analysis of the performance of Linfty regularization in a compressed sensing scenario.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
25,708
2210.11931
Deep Reinforcement Learning for Inverse Inorganic Materials Design
A major obstacle to the realization of novel inorganic materials with desirable properties is the inability to perform efficient optimization across both materials properties and synthesis of those materials. In this work, we propose a reinforcement learning (RL) approach to inverse inorganic materials design, which can identify promising compounds with specified properties and synthesizability constraints. Our model learns chemical guidelines such as charge and electronegativity neutrality while maintaining chemical diversity and uniqueness. We demonstrate a multi-objective RL approach, which can generate novel compounds with targeted materials properties, including formation energy and bulk/shear modulus, alongside a lower sintering temperature as a synthesis objective. Using this approach, the model can predict promising compounds of interest, while suggesting an optimized chemical design space for inorganic materials discovery.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
325,510
2502.14397
PhotoDoodle: Learning Artistic Image Editing from Few-Shot Pairwise Data
We introduce PhotoDoodle, a novel image editing framework designed to facilitate photo doodling by enabling artists to overlay decorative elements onto photographs. Photo doodling is challenging because the inserted elements must appear seamlessly integrated with the background, requiring realistic blending, perspective alignment, and contextual coherence. Additionally, the background must be preserved without distortion, and the artist's unique style must be captured efficiently from limited training data. These requirements are not addressed by previous methods that primarily focus on global style transfer or regional inpainting. The proposed method, PhotoDoodle, employs a two-stage training strategy. Initially, we train a general-purpose image editing model, OmniEditor, using large-scale data. Subsequently, we fine-tune this model with EditLoRA using a small, artist-curated dataset of before-and-after image pairs to capture distinct editing styles and techniques. To enhance consistency in the generated results, we introduce a positional encoding reuse mechanism. Additionally, we release a PhotoDoodle dataset featuring six high-quality styles. Extensive experiments demonstrate the advanced performance and robustness of our method in customized image editing, opening new possibilities for artistic creation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
535,815
2205.07598
Cell-Free MmWave Massive MIMO Systems with Low-Capacity Fronthaul Links and Low-Resolution ADC/DACs
In this paper, we consider the uplink channel estimation phase and downlink data transmission phase of cell-free millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems with low-capacity fronthaul links and low-resolution analog-to-digital converters/digital-to-analog converters (ADC/DACs). In cell-free massive MIMO, a control unit dictates the baseband processing at a geographical scale, while the base stations communicate with the control unit through fronthaul links. Unlike most previous works on cell-free massive MIMO with finite-capacity fronthaul links, we consider the general case where the fronthaul capacity and ADC/DAC resolution are not necessarily the same. In particular, the fronthaul compression and ADC/DAC quantization occur independently, where each one is modeled based on the information theoretic argument and additive quantization noise model (AQNM). Then, we address the codebook design problem that aims to minimize the channel estimation error for the independent and identically distributed (i.i.d.) and colored compression noise cases. Also, we propose an alternating optimization (AO) method to tackle the max-min fairness problem. In essence, the AO method alternates between two subproblems that correspond to the power allocation and codebook design problems. The AO method proposed for the zero-forcing (ZF) precoder is guaranteed to converge, whereas the one for the maximum ratio transmission (MRT) precoder has no such guarantee. Finally, the performance of the proposed schemes is evaluated by simulation in terms of both energy and spectral efficiency. The numerical results show that the proposed scheme for the ZF precoder yields spectral and energy efficiency 28% and 15% higher than that of the best baseline.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
296,656
2411.02065
AM Flow: Adapters for Temporal Processing in Action Recognition
Deep learning models, in particular \textit{image} models, have recently gained generalisability and robustness. In this work, we propose to exploit such advances in the realm of \textit{video} classification. Video foundation models suffer from the requirement of extensive pretraining and a large training time. Towards mitigating such limitations, we propose "\textit{Attention Map (AM) Flow}" for image models, a method for identifying pixels relevant to motion in each input video frame. In this context, we propose two methods to compute AM flow, depending on camera motion. AM flow allows the separation of spatial and temporal processing, while providing improved results over combined spatio-temporal processing (as in video models). Adapters, one of the popular techniques in parameter-efficient transfer learning, facilitate the incorporation of AM flow into pretrained image models, mitigating the need for full fine-tuning. We extend adapters to "\textit{temporal processing adapters}" by incorporating a temporal processing unit into the adapters. Our work achieves faster convergence, therefore reducing the number of epochs needed for training. Moreover, we endow an image model with the ability to achieve state-of-the-art results on popular action recognition datasets. This reduces training time and simplifies pretraining. We present experiments on Kinetics-400, Something-Something v2, and Toyota Smarthome datasets, showcasing state-of-the-art or comparable results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
505,339
2206.10298
ViralBERT: A User Focused BERT-Based Approach to Virality Prediction
Recently, Twitter has become the social network of choice for sharing and spreading information to a multitude of users through posts called 'tweets'. Users can easily re-share these posts to other users through 'retweets', which allow information to cascade to many more users, increasing its outreach. Clearly, being able to know the extent to which a post can be retweeted has great value in advertising, influencing and other such campaigns. In this paper we propose ViralBERT, which can be used to predict the virality of tweets using content- and user-based features. We employ a method of concatenating numerical features such as hashtags and follower numbers to tweet text, and utilise two BERT modules: one for semantic representation of the combined text and numerical features, and another module purely for sentiment analysis of text, as both the information within text and its ability to elicit an emotional response play a part in retweet proneness. We collect a dataset of 330k tweets to train ViralBERT and validate the efficacy of our model using baselines from current studies in this field. Our experiments show that our approach outperforms these baselines, with a 13% increase in both F1 Score and Accuracy compared to the best performing baseline method. We then conduct an ablation study to investigate the importance of chosen features, finding that text sentiment and follower counts, and to a lesser extent mentions and following counts, are the strongest features for the model, and that hashtag counts are detrimental to the model.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
303,862
1601.07124
LIA-RAG: a system based on graphs and divergence of probabilities applied to Speech-To-Text Summarization
This paper introduces a new algorithm for automatic speech-to-text summarization based on statistical divergences of probabilities and graphs. The input is a text from speech conversations with noise, and the output is a compact text summary. Our results on the pilot task CCCS Multiling 2015 French corpus are very encouraging.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
51,384
1610.04005
Stream Reasoning-Based Control of Caching Strategies in CCN Routers
Content-Centric Networking (CCN) research addresses the mismatch between the modern usage of the Internet and its outdated architecture. Importantly, CCN routers may locally cache frequently requested content in order to speed up delivery to end users. Thus, the issue of caching strategies arises, i.e., which content shall be stored and when it should be replaced. In this work, we employ novel techniques towards intelligent administration of CCN routers that autonomously switch between existing strategies in response to changing content request patterns. In particular, we present a router architecture for CCN networks that is controlled by rule-based stream reasoning, following the recent formal framework LARS which extends Answer Set Programming for streams. The obtained possibility for flexible router configuration at runtime allows for faster experimentation and may thus help to advance the further development of CCN. Moreover, the empirical evaluation of our feasibility study shows that the resulting caching agent may give significant performance gains.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
62,329
1703.02921
Transformation-Grounded Image Generation Network for Novel 3D View Synthesis
We present a transformation-grounded image generation network for novel 3D view synthesis from a single image. Instead of taking a 'blank slate' approach, we first explicitly infer the parts of the geometry visible both in the input and novel views and then re-cast the remaining synthesis problem as image completion. Specifically, we both predict a flow to move the pixels from the input to the novel view along with a novel visibility map that helps deal with occlusion/disocclusion. Next, conditioned on those intermediate results, we hallucinate (infer) parts of the object invisible in the input image. In addition to the new network structure, training with a combination of adversarial and perceptual loss results in a reduction in common artifacts of novel view synthesis such as distortions and holes, while successfully generating high frequency details and preserving visual aspects of the input image. We evaluate our approach on a wide range of synthetic and real examples. Both qualitative and quantitative results show our method achieves significantly better results compared to existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,644