id string, lengths 9–16 | title string, lengths 4–278 | abstract string, lengths 3–4.08k | cs.HC bool, 2 classes | cs.CE bool, 2 classes | cs.SD bool, 2 classes | cs.SI bool, 2 classes | cs.AI bool, 2 classes | cs.IR bool, 2 classes | cs.LG bool, 2 classes | cs.RO bool, 2 classes | cs.CL bool, 2 classes | cs.IT bool, 2 classes | cs.SY bool, 2 classes | cs.CV bool, 2 classes | cs.CR bool, 2 classes | cs.CY bool, 2 classes | cs.MA bool, 2 classes | cs.NE bool, 2 classes | cs.DB bool, 2 classes | Other bool, 2 classes | __index_level_0__ int64, 0–541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2309.15054 | Near Real-Time Position Tracking for Robot-Guided Evacuation | During the evacuation of a building, the rapid and accurate tracking of human evacuees can be used by a guide robot to increase the effectiveness of the evacuation [1],[2]. This paper introduces a near real-time human position tracking solution tailored for evacuation robots. Using a pose detector, our system first identifies human joints in the camera frame in near real-time and then translates the position of these pixels into real-world coordinates via a simple calibration process. We run multiple trials of the system in action in an indoor lab environment and show that the system can achieve an accuracy of 0.55 meters when compared to ground truth. The system can also achieve an average of 3 frames per second (FPS) which was sufficient for our study on robot-guided human evacuation. The potential of our approach extends beyond mere tracking, paving the way for evacuee motion prediction, allowing the robot to proactively respond to human movements during an evacuation. | false | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | 394,833 |
2312.15416 | On Completeness of SDP-Based Barrier Certificate Synthesis over Unbounded Domains | Barrier certificates, serving as differential invariants that witness system safety, play a crucial role in the verification of cyber-physical systems (CPS). Prevailing computational methods for synthesizing barrier certificates are based on semidefinite programming (SDP) by exploiting Putinar's Positivstellensatz. Consequently, these approaches are limited by the Archimedean condition, which requires all variables to be bounded, i.e., systems are defined over bounded domains. For systems over unbounded domains, unfortunately, existing methods become incomplete and may fail to identify potential barrier certificates. In this paper, we address this limitation for the unbounded cases. We first give a complete characterization of polynomial barrier certificates by using homogenization, a recent technique in the optimization community for reducing an unbounded optimization problem to a bounded one. Furthermore, motivated by this formulation, we introduce the definition of homogenized systems and propose a complete characterization of a family of non-polynomial barrier certificates with more expressive power. Experimental results demonstrate that our two approaches are more effective while maintaining a comparable level of efficiency. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 418,005 |
2409.06848 | Shadow Removal Refinement via Material-Consistent Shadow Edges | Shadow boundaries can be confused with material boundaries as both exhibit sharp changes in luminance or contrast within a scene. However, shadows do not modify the intrinsic color or texture of surfaces. Therefore, on both sides of shadow edges traversing regions with the same material, the original color and textures should be the same if the shadow is removed properly. These shadow/shadow-free pairs are very useful but hard-to-collect supervision signals. The crucial contribution of this paper is to learn how to identify those shadow edges that traverse material-consistent regions and how to use them as self-supervision for shadow removal refinement during test time. To achieve this, we fine-tune SAM, an image segmentation foundation model, to produce a shadow-invariant segmentation and then extract material-consistent shadow edges by comparing the SAM segmentation with the shadow mask. Utilizing these shadow edges, we introduce color and texture-consistency losses to enhance the shadow removal process. We demonstrate the effectiveness of our method in improving shadow removal results on more challenging, in-the-wild images, outperforming the state-of-the-art shadow removal methods. Additionally, we propose a new metric and an annotated dataset for evaluating the performance of shadow removal methods without the need for paired shadow/shadow-free data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 487,295 |
2101.10800 | Robust Scheduling of Virtual Power Plant under Exogenous and Endogenous Uncertainties | A virtual power plant (VPP) provides a flexible solution to distributed energy resource integration by aggregating renewable generation units, conventional power plants, energy storage, and flexible demands. This paper proposes a novel model for determining the optimal offering strategy in the day-ahead energy-reserve market and the optimal self-scheduling plan. It considers exogenous uncertainties (also called decision-independent uncertainties, DIUs) associated with market clearing prices and available wind power generation, as well as endogenous uncertainties (also called decision-dependent uncertainties, DDUs) pertaining to real-time reserve deployment requests. A tractable solution method based on strong duality theory, McCormick relaxation, and Benders' decomposition is developed to solve the proposed stochastic adaptive robust optimization formulation with DDUs. Simulation results demonstrate the applicability of the proposed approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 217,054 |
2307.12348 | ResShift: Efficient Diffusion Model for Image Super-resolution by Residual Shifting | Diffusion-based image super-resolution (SR) methods are mainly limited by the low inference speed due to the requirements of hundreds or even thousands of sampling steps. Existing acceleration sampling techniques inevitably sacrifice performance to some extent, leading to over-blurry SR results. To address this issue, we propose a novel and efficient diffusion model for SR that significantly reduces the number of diffusion steps, thereby eliminating the need for post-acceleration during inference and its associated performance deterioration. Our method constructs a Markov chain that transfers between the high-resolution image and the low-resolution image by shifting the residual between them, substantially improving the transition efficiency. Additionally, an elaborate noise schedule is developed to flexibly control the shifting speed and the noise strength during the diffusion process. Extensive experiments demonstrate that the proposed method obtains superior or at least comparable performance to current state-of-the-art methods on both synthetic and real-world datasets, even with only 15 sampling steps. Our code and model are available at https://github.com/zsyOAOA/ResShift. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 381,234 |
2006.09534 | Towards improving discriminative reconstruction via simultaneous dense and sparse coding | Discriminative features extracted from the sparse coding model have been shown to perform well for classification. Recent deep learning architectures have further improved reconstruction in inverse problems by considering new dense priors learned from data. We propose a novel dense and sparse coding model that integrates both representation capability and discriminative features. The model studies the problem of recovering a dense vector $\mathbf{x}$ and a sparse vector $\mathbf{u}$ given measurements of the form $\mathbf{y} = \mathbf{A}\mathbf{x}+\mathbf{B}\mathbf{u}$. Our first analysis proposes a geometric condition based on the minimal angle between spanning subspaces corresponding to the matrices $\mathbf{A}$ and $\mathbf{B}$ that guarantees a unique solution to the model. The second analysis shows that, under mild assumptions, a convex program recovers the dense and sparse components. We validate the effectiveness of the model on simulated data and propose a dense and sparse autoencoder (DenSaE) tailored to learning the dictionaries from the dense and sparse model. We demonstrate that (i) DenSaE denoises natural images better than architectures derived from the sparse coding model ($\mathbf{B}\mathbf{u}$), (ii) in the presence of noise, training the biases in the latter amounts to implicitly learning the $\mathbf{A}\mathbf{x} + \mathbf{B}\mathbf{u}$ model, (iii) $\mathbf{A}$ and $\mathbf{B}$ capture low- and high-frequency contents, respectively, and (iv) compared to the sparse coding model, DenSaE offers a balance between discriminative power and representation. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 182,572 |
2010.12406 | UNER: Universal Named-Entity Recognition Framework | We introduce the Universal Named-Entity Recognition (UNER) framework, a 4-level classification hierarchy, and the methodology that is being adopted to create the first multilingual UNER corpus: the SETimes parallel corpus annotated for named entities. First, the English SETimes corpus will be annotated using existing tools and knowledge bases. After evaluating the resulting annotations through crowdsourcing campaigns, they will be propagated automatically to other languages within the SETimes corpora. Finally, as an extrinsic evaluation, the UNER multilingual dataset will be used to train and test available NER tools. As part of future research directions, we aim to increase the number of languages in the UNER corpus and to investigate possible ways of integrating UNER with available knowledge graphs to improve named-entity recognition. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 202,682 |
2501.18733 | Integrating LMM Planners and 3D Skill Policies for Generalizable Manipulation | The recent advancements in visual reasoning capabilities of large multimodal models (LMMs) and the semantic enrichment of 3D feature fields have expanded the horizons of robotic capabilities. These developments hold significant potential for bridging the gap between high-level reasoning from LMMs and low-level control policies utilizing 3D feature fields. In this work, we introduce LMM-3DP, a framework that can integrate LMM planners and 3D skill Policies. Our approach consists of three key perspectives: high-level planning, low-level control, and effective integration. For high-level planning, LMM-3DP supports dynamic scene understanding for environment disturbances, a critic agent with self-feedback, history policy memorization, and reattempts after failures. For low-level control, LMM-3DP utilizes a semantic-aware 3D feature field for accurate manipulation. In aligning high-level and low-level control for robot actions, language embeddings representing the high-level policy are jointly attended with the 3D feature field in the 3D transformer for seamless integration. We extensively evaluate our approach across multiple skills and long-horizon tasks in a real-world kitchen environment. Our results show a significant 1.45x success rate increase in low-level control and an approximate 1.5x improvement in high-level planning accuracy compared to LLM-based baselines. Demo videos and an overview of LMM-3DP are available at https://lmm-3dp-release.github.io. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 528,832 |
2209.12806 | FONDUE: an algorithm to find the optimal dimensionality of the latent representations of variational autoencoders | When training a variational autoencoder (VAE) on a given dataset, determining the optimal number of latent variables is mostly done by grid search: a costly process in terms of computational time and carbon footprint. In this paper, we explore the intrinsic dimension estimation (IDE) of the data and latent representations learned by VAEs. We show that the discrepancies between the IDE of the mean and sampled representations of a VAE after only a few steps of training reveal the presence of passive variables in the latent space, which, in well-behaved VAEs, indicates a superfluous number of dimensions. Using this property, we propose FONDUE: an algorithm which quickly finds the number of latent dimensions after which the mean and sampled representations start to diverge (i.e., when passive variables are introduced), providing a principled method for selecting the number of latent dimensions for VAEs and autoencoders. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 319,664 |
2004.13843 | Template-based Question Answering using Recursive Neural Networks | We propose a neural network-based approach to automatically learn and classify natural language questions into their corresponding templates using recursive neural networks. An obvious advantage of using neural networks is the elimination of the need for laborious feature engineering that can be cumbersome and error-prone. The input question is encoded into a vector representation. The model is trained and evaluated on the LC-QuAD dataset (Large-scale Complex Question Answering Dataset). The LC-QuAD queries are annotated based on 38 unique templates that the model attempts to classify. The resulting model is evaluated against both the LC-QuAD dataset and the 7th Question Answering Over Linked Data (QALD-7) dataset. The recursive neural network achieves a template classification accuracy of 0.828 on the LC-QuAD dataset and an accuracy of 0.618 on the QALD-7 dataset. When the top-2 most likely templates are considered, the model achieves an accuracy of 0.945 on the LC-QuAD dataset and 0.786 on the QALD-7 dataset. After slot filling, the overall system achieves a macro F-score of 0.419 on the LC-QuAD dataset and a macro F-score of 0.417 on the QALD-7 dataset. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | 174,682 |
2210.01508 | How Masterly Are People at Playing with Their Vocabulary? Analysis of the Wordle Game for Latvian | In this paper, we describe the adaptation of a simple word-guessing game that has occupied the hearts and minds of people around the world. There are versions for all three Baltic countries, and even several versions of each. We specifically pay attention to the Latvian version and look into how people form their guesses given any already-uncovered hints. The paper analyses guess patterns, characteristics of easy and difficult words, and player behaviour and response. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 321,289 |
2305.17352 | Is Centralized Training with Decentralized Execution Framework Centralized Enough for MARL? | Centralized Training with Decentralized Execution (CTDE) has recently emerged as a popular framework for cooperative Multi-Agent Reinforcement Learning (MARL), where agents can use additional global state information to guide training in a centralized way and make their own decisions only based on decentralized local policies. Despite the encouraging results achieved, CTDE makes an independence assumption on agent policies, which limits agents from adopting global cooperative information from each other during centralized training. Therefore, we argue that existing CTDE methods cannot fully utilize global information for training, leading to inefficient joint-policy exploration and even suboptimal results. In this paper, we introduce a novel Centralized Advising and Decentralized Pruning (CADP) framework for multi-agent reinforcement learning, which not only enables efficacious message exchange among agents during training but also guarantees independent policies for execution. Firstly, CADP endows agents with an explicit communication channel to seek and take advice from other agents for more centralized training. To further ensure decentralized execution, we propose a smooth model pruning mechanism to progressively constrain agent communication into a closed channel without degrading agent cooperation capability. Empirical evaluations on StarCraft II micromanagement and Google Research Football benchmarks demonstrate that the proposed framework achieves superior performance compared with the state-of-the-art counterparts. Our code will be made publicly available. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 368,543 |
2407.07110 | Foundation Models for ECG: Leveraging Hybrid Self-Supervised Learning for Advanced Cardiac Diagnostics | Using foundation models enhanced by self-supervised learning (SSL) methods presents an innovative approach to electrocardiogram (ECG) analysis, which is crucial for cardiac health monitoring and diagnosis. This study comprehensively evaluates foundation models for ECGs, leveraging SSL methods, including generative and contrastive learning, on a vast dataset comprising approximately 1.3 million ECG samples. By integrating these methods with consideration of the unique characteristics of ECGs, we developed a Hybrid Learning (HL) method for foundation models that improves the precision and reliability of cardiac diagnostics. The HL-based foundation model adeptly captures the intricate details of ECGs, enhancing diagnostic capability. The results underscore the considerable potential of SSL-enhanced foundation models in clinical settings, setting the stage for future research into their scalable applications across a broader range of medical diagnostics. This work sets a new standard in the ECG field, emphasizing the transformative influence of tailored, data-driven model training on the effectiveness and accuracy of medical diagnostics. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 471,653 |
2302.06786 | Interference and noise cancellation for joint communication radar (JCR) system based on contextual information | This paper examines the separation of wireless communication and radar signals, thereby guaranteeing cohabitation and acting as a panacea to spectrum sensing. First, considering that the channel impulse response was known by the receivers (communication and radar), we showed that optimizing the beamforming weights mitigates the interference caused by the signals and improves the physical layer security (PLS) of the system. Furthermore, when the channel responses were unknown, we designed an interference filter as a low-complexity noise- and interference-cancellation autoencoder. By mitigating the interference on the legitimate users, the PLS was guaranteed. Results showed that even for a low signal-to-noise ratio, the autoencoder produces low root-mean-square error (RMSE) values. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 345,529 |
1010.3541 | Heterogenous scaling in interevent time of on-line bookmarking | In this paper, we study the statistical properties of bookmarking behaviors on Delicious.com. We find that the interevent time distributions of bookmarking decay in a power-law fashion as interevent time increases, at both the individual and population levels. Remarkably, we observe a significant change in the exponent when the interevent time increases from the intra-day to the inter-day range. In addition, the dependence of the exponent on individual activity is found to be different in the two ranges. These results suggest that the mechanisms driving human actions are different in the intra- and inter-day ranges. Instead of monotonically increasing with activity, the inter-day exponent peaks at a value around 3. We further show that less active users are more likely to resemble a Poisson process in their bookmarking. Based on the temporal-preference model, preliminary explanations for this dependence have been given. Finally, a universal behavior on the inter-day scale is observed by considering the rescaled variable. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 7,933 |
1608.02307 | SANTIAGO: Spine Association for Neuron Topology Improvement and Graph Optimization | Developing automated and semi-automated solutions for reconstructing wiring diagrams of the brain from electron micrographs is important for advancing the field of connectomics. While the ultimate goal is to generate a graph of neuron connectivity, most prior automated methods have focused on volume segmentation rather than explicit graph estimation. In these approaches, one of the key, commonly occurring error modes is dendritic shaft-spine fragmentation. We posit that directly addressing this problem of connection identification may provide critical insight into estimating more accurate brain graphs. To this end, we develop a network-centric approach motivated by biological priors and image grammars. We build a computer vision pipeline to reconnect fragmented spines to their parent dendrites using both fully-automated and semi-automated approaches. Our experiments show we can learn valid connections despite uncertain segmentation paths. We curate the first known reference dataset for analyzing the performance of various spine-shaft algorithms and demonstrate promising results that recover many previously lost connections. Our automated approach improves the local subgraph score by more than four times and the full graph score by 60 percent. These data, results, and evaluation tools are all available to the broader scientific community. This reframing of the connectomics problem illustrates a semantic, biologically inspired solution to remedy a major problem with neuron tracking. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 59,547 |
2211.00752 | DeltaFinger: a 3-DoF Wearable Haptic Display Enabling High-Fidelity Force Vector Presentation at a User Finger | This paper presents DeltaFinger, a novel haptic device designed to deliver the force of interaction with virtual objects by guiding the user's finger with a wearable delta mechanism. The developed interface is capable of delivering a 3D force vector to the fingertip of the index finger of the user, allowing complex rendering of a virtual reality (VR) environment. The developed device is able to produce kinesthetic feedback of up to 1.8 N in vertical projection and 0.9 N in horizontal projection without restricting the motion freedom of the remaining fingers. The experimental results showed sufficient precision in the perception of the force vector with DeltaFinger (mean force vector error of 0.6 rad). The proposed device can potentially be applied to VR communication, medicine, and navigation for people with vision impairments. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 327,985 |
1906.10511 | Benchmarking Neural Machine Translation for Southern African Languages | Unlike major Western languages, most African languages are very low-resourced. Furthermore, the resources that do exist are often scattered and difficult to obtain and discover. As a result, the data and code for existing research have rarely been shared. This has led to a struggle to reproduce reported results, and few publicly available benchmarks for African machine translation models exist. To start to address these problems, we trained neural machine translation models for 5 Southern African languages on publicly available datasets. Code is provided for training the models and evaluating them on a newly released evaluation set, with the aim of spurring future research in the field for Southern African languages. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 136,450 |
1011.6441 | LP Decodable Permutation Codes based on Linearly Constrained Permutation Matrices | A set of linearly constrained permutation matrices is proposed for constructing a class of permutation codes. Making use of linear constraints imposed on the permutation matrices, we can formulate a minimum Euclidean distance decoding problem for the proposed class of permutation codes as a linear programming (LP) problem. The main feature of this class of permutation codes, called LP decodable permutation codes, is this LP decodability. It is demonstrated that the LP decoding performance of the proposed class of permutation codes is characterized by the vertices of the code polytope. Two types of linear constraints are discussed; one is structured constraints and the other is random constraints. The structured constraints, such as pure involution, lead to an efficient encoding algorithm. On the other hand, the random constraints enable us to use probabilistic methods for analyzing several code properties, such as the average cardinality and the average weight distribution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 8,367 |
2401.07764 | When Large Language Model Agents Meet 6G Networks: Perception, Grounding, and Alignment | AI agents based on multimodal large language models (LLMs) are expected to revolutionize human-computer interaction and offer more personalized assistant services across various domains like healthcare, education, manufacturing, and entertainment. Deploying LLM agents in 6G networks enables users to democratically access previously expensive AI assistant services via mobile devices, thereby reducing interaction latency and better preserving user privacy. Nevertheless, the limited capacity of mobile devices constrains the effectiveness of deploying and executing local LLMs, which necessitates offloading complex tasks to global LLMs running on edge servers during long-horizon interactions. In this article, we propose a split learning system for LLM agents in 6G networks leveraging the collaboration between mobile devices and edge servers, where multiple LLMs with different roles are distributed across mobile devices and edge servers to perform user-agent interactive tasks collaboratively. In the proposed system, LLM agents are split into perception, grounding, and alignment modules, facilitating inter-module communications to meet extended user requirements on 6G network functions, including integrated sensing and communication, digital twins, and task-oriented communications. Furthermore, we introduce a novel model caching algorithm for LLMs within the proposed system to improve model utilization in context, thus reducing network costs of the collaborative mobile and edge LLM agents. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 421,648 |
1704.04336 | An entity-driven recursive neural network model for chinese discourse coherence modeling | Chinese discourse coherence modeling remains a challenging task in the Natural Language Processing field. Existing approaches mostly focus on feature engineering, adopting sophisticated features to capture logical, syntactic, or semantic relationships across sentences within a text. In this paper, we present an entity-driven recursive deep model for Chinese discourse coherence evaluation, based on a current English discourse coherence neural network model. Specifically, to overcome the current model's shortcoming in identifying entity (noun) overlap across sentences, our combined model successfully incorporates entity information into the recursive neural network framework. Evaluation results on both sentence ordering and machine translation coherence rating tasks show the effectiveness of the proposed model, which significantly outperforms a strong existing baseline. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 71,792 |
1811.04376 | Explaining Deep Learning Models using Causal Inference | Although deep learning models have been successfully applied to a variety of tasks, due to the millions of parameters, they are becoming increasingly opaque and complex. In order to establish trust for their widespread commercial use, it is important to formalize a principled framework to reason over these models. In this work, we use ideas from causal inference to describe a general framework to reason over CNN models. Specifically, we build a Structural Causal Model (SCM) as an abstraction over a specific aspect of the CNN. We also formulate a method to quantitatively rank the filters of a convolution layer according to their counterfactual importance. We illustrate our approach with popular CNN architectures such as LeNet5, VGG19, and ResNet32. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 113,076 |
2404.02124 | Exploring Automated Distractor Generation for Math Multiple-choice Questions via Large Language Models | Multiple-choice questions (MCQs) are ubiquitous in almost all levels of education since they are easy to administer, grade, and are a reliable format in assessments and practices. One of the most important aspects of MCQs is the distractors, i.e., incorrect options that are designed to target common errors or misconceptions among real students. To date, the task of crafting high-quality distractors largely remains a labor and time-intensive process for teachers and learning content designers, which has limited scalability. In this work, we study the task of automated distractor generation in the domain of math MCQs and explore a wide variety of large language model (LLM)-based approaches, from in-context learning to fine-tuning. We conduct extensive experiments using a real-world math MCQ dataset and find that although LLMs can generate some mathematically valid distractors, they are less adept at anticipating common errors or misconceptions among real students. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 443,729 |
cs/0702071 | What is needed to exploit knowledge of primary transmissions? | Recently, Tarokh and others have raised the possibility that a cognitive radio might know the interference signal being transmitted by a strong primary user in a non-causal way, and use this knowledge to increase its data rates. However, there is a subtle difference between knowing the signal transmitted by the primary and the actual interference at our receiver, since there is a wireless channel between these two points. We show that even an unknown phase results in a substantial decrease in the data rates that can be achieved, and thus there is a need to feed back interference channel estimates to the cognitive transmitter. We then consider the case of fading channels. We derive an upper bound on the rate for a given outage error probability for faded dirt. We give a scheme that uses appropriate "training" to obtain such estimates and quantify this scheme's required overhead as a function of the relevant coherence time and interference power. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,155 |
2109.05771 | Perturbation CheckLists for Evaluating NLG Evaluation Metrics | Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment of multiple desirable criteria, e.g., fluency, coherency, coverage, relevance, adequacy, overall quality, etc. Across existing datasets for 6 NLG tasks, we observe that the human evaluation scores on these multiple criteria are often not correlated. For example, there is a very low correlation between human scores on fluency and data coverage for the task of structured data to text generation. This suggests that the current recipe of proposing new automatic evaluation metrics for NLG by showing that they correlate well with scores assigned by humans for a single criterion (overall quality) alone is inadequate. Indeed, our extensive study involving 25 automatic evaluation metrics across 6 different tasks and 18 different evaluation criteria shows that there is no single metric which correlates well with human scores on all desirable criteria, for most NLG tasks. Given this situation, we propose CheckLists for better design and evaluation of automatic metrics. We design templates which target a specific criterion (e.g., coverage) and perturb the output such that the quality gets affected only along this specific criterion (e.g., the coverage drops). We show that existing evaluation metrics are not robust against even such simple perturbations and disagree with scores assigned by humans to the perturbed output. The proposed templates thus allow for a fine-grained assessment of automatic evaluation metrics exposing their limitations and will facilitate better design, analysis and evaluation of such metrics. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,936 |
1903.02330 | Self-Supervised Learning of 3D Human Pose using Multi-view Geometry | Training accurate 3D human pose estimators requires large amounts of 3D ground-truth data which is costly to collect. Various weakly or self-supervised pose estimation methods have been proposed due to the lack of 3D data. Nevertheless, these methods, in addition to 2D ground-truth poses, require either additional supervision in various forms (e.g. unpaired 3D ground truth data, a small subset of labels) or the camera parameters in multi-view settings. To address these problems, we present EpipolarPose, a self-supervised learning method for 3D human pose estimation, which does not need any 3D ground-truth data or camera extrinsics. During training, EpipolarPose estimates 2D poses from multi-view images, and then, utilizes epipolar geometry to obtain a 3D pose and camera geometry which are subsequently used to train a 3D pose estimator. We demonstrate the effectiveness of our approach on standard benchmark datasets, i.e., Human3.6M and MPI-INF-3DHP, where we set the new state-of-the-art among weakly/self-supervised methods. Furthermore, we propose a new performance measure, Pose Structure Score (PSS), which is a scale-invariant, structure-aware measure to evaluate the structural plausibility of a pose with respect to its ground truth. Code and pretrained models are available at https://github.com/mkocabas/EpipolarPose | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 123,482 |
1902.00369 | Medical Image Super-Resolution Using a Generative Adversarial Network | With the growing popularity of electronic medical records, electronic medical record (EMR) data has grown explosively. Retrieving high-quality EMRs from mass data is very meaningful. In this paper, an EMR value network with a retrieval function is constructed by taking stroke disease as the research object. It mainly includes: 1) It establishes the electronic medical record database and a corresponding stroke knowledge graph. 2) The similarity measurement strategy includes three parts (patients' chief complaints, pathology results and medical images). Patients' chief complaints are text data, mainly describing patients' symptoms and expressed in words or phrases, and are input by independently ticking various symptoms. The pathology results are a structured and digitized expression, so their input method is the same as the patient's chief complaint; image similarity adopts content-based image retrieval (CBIR) technology. 3) The analytic hierarchy process (AHP) is used to establish the weights of the three types of data, which are then synthesized into one indicator. The top-5 similarity accuracy was more than 85\% on an EMR database of more than 200 stroke records using the leave-one-out method. It will be a good tool for assistant diagnosis and doctor training, as good-quality records are collected into databases, like Doctor Watson, in the future. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 120,388 |
2006.04334 | Characterizing Sociolinguistic Variation in the Competing Vaccination Communities | Public health practitioners and policy makers grapple with the challenge of devising effective message-based interventions for debunking public health misinformation in cyber communities. "Framing" and "personalization" of the message is one of the key features for devising a persuasive messaging strategy. For effective health communication, it is imperative to focus on "preference-based framing", where the preferences of the target sub-community are taken into consideration. To achieve that, it is important to understand, and hence characterize, the target sub-communities in terms of their social interactions. In the context of health-related misinformation, vaccination remains the most prevalent topic of discord. Hence, in this paper, we conduct a sociolinguistic analysis of the two competing vaccination communities on Twitter: "pro-vaxxers", or individuals who believe in the effectiveness of vaccinations, and "anti-vaxxers", or individuals who are opposed to vaccinations. Our data analysis shows significant linguistic variation between the two communities in terms of their usage of linguistic intensifiers, pronouns, and uncertainty words. Our network-level analysis shows significant differences between the two communities in terms of their network density, echo-chamberness, and the EI index. We hypothesize that these sociolinguistic differences can be used as proxies to characterize and understand these communities and to devise better message interventions. | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 180,653 |
1603.06729 | On the Statistical Analysis of Practical SPARQL Queries | In this paper, we analyze some basic features of SPARQL queries coming from our practical world in a statistical way. These features include three statistical features (the occurrence frequency of triple patterns, fragments, and well-designed patterns) and four semantic features: monotonicity, non-monotonicity, weak monotonicity (old solutions still serve as parts of new solutions when some new triples are added), and satisfiability. All these features contribute to characterizing SPARQL queries in different dimensions. We hope that this statistical analysis will provide some useful observations for researchers and engineers who are interested in what practical SPARQL queries look like, so that they can develop practical heuristics for processing SPARQL queries and build SPARQL query processing engines and benchmarks. Besides, they can narrow the scope of their problems by avoiding those cases that possibly do not happen in our practical world. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 53,533 |
2410.06042 | Weighted Embeddings for Low-Dimensional Graph Representation | Learning low-dimensional numerical representations from symbolic data, e.g., embedding the nodes of a graph into a geometric space, is an important concept in machine learning. While embedding into Euclidean space is common, recent observations indicate that hyperbolic geometry is better suited to represent hierarchical information and heterogeneous data (e.g., graphs with a scale-free degree distribution). Despite their potential for more accurate representations, hyperbolic embeddings also have downsides like being more difficult to compute and harder to use in downstream tasks. We propose embedding into a weighted space, which is closely related to hyperbolic geometry but mathematically simpler. We provide the embedding algorithm WEmbed and demonstrate, based on generated as well as over 2000 real-world graphs, that our weighted embeddings heavily outperform state-of-the-art Euclidean embeddings for heterogeneous graphs while using fewer dimensions. The running time of WEmbed and embedding quality for the remaining instances is on par with state-of-the-art Euclidean embedders. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 496,027 |
2401.09340 | SceneVerse: Scaling 3D Vision-Language Learning for Grounded Scene Understanding | 3D vision-language grounding, which focuses on aligning language with the 3D physical environment, stands as a cornerstone in the development of embodied agents. In comparison to recent advancements in the 2D domain, grounding language in 3D scenes faces several significant challenges: (i) the inherent complexity of 3D scenes due to the diverse object configurations, their rich attributes, and intricate relationships; (ii) the scarcity of paired 3D vision-language data to support grounded learning; and (iii) the absence of a unified learning framework to distill knowledge from grounded 3D data. In this work, we aim to address these three major challenges in 3D vision-language by examining the potential of systematically upscaling 3D vision-language learning in indoor environments. We introduce the first million-scale 3D vision-language dataset, SceneVerse, encompassing about 68K 3D indoor scenes and comprising 2.5M vision-language pairs derived from both human annotations and our scalable scene-graph-based generation approach. We demonstrate that this scaling allows for a unified pre-training framework, Grounded Pre-training for Scenes (GPS), for 3D vision-language learning. Through extensive experiments, we showcase the effectiveness of GPS by achieving state-of-the-art performance on all existing 3D visual grounding benchmarks. The vast potential of SceneVerse and GPS is unveiled through zero-shot transfer experiments in the challenging 3D vision-language tasks. Project website: https://scene-verse.github.io. | false | false | false | false | true | false | true | true | true | false | false | true | false | false | false | false | false | false | 422,227 |
2303.02052 | Interruptions detection in video conferences | In recent years, video conferencing (VC) popularity has skyrocketed for a wide range of activities. As a result, the number of VC users surged sharply. The sharp increase in VC usage has been accompanied by various newly emerging privacy and security challenges. VC meetings became a target for various security attacks, such as Zoombombing. Other VC-related challenges also emerged. For example, during COVID lockdowns, educators had to teach in online environments struggling with keeping students engaged for extended periods. In parallel, the amount of available VC videos has grown exponentially. Thus, users and companies are limited in finding abnormal segments in VC meetings within the converging volumes of data. Such abnormal events that affect most meeting participants may be indicators of interesting points in time, including security attacks or other changes in meeting climate, like someone joining a meeting or sharing dramatic content. Here, we present a novel algorithm for detecting abnormal events in VC data. We curated publicly available VC recordings, including meetings with interruptions. We analyzed the videos using our algorithm, extracting time windows where abnormal occurrences were detected. Our algorithm is a pipeline that combines multiple methods in several steps to detect users' faces in each video frame, track face locations during the meeting and generate vector representations of a facial expression for each face in each frame. Vector representations are used to monitor changes in facial expressions throughout the meeting for each participant. The overall change in meeting climate is quantified using those parameters across all participants and translated into event anomaly detection. This is the first open pipeline for automatically detecting anomaly events in VC meetings. Our model detects abnormal events with 92.3% precision over the collected dataset. | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 349,207 |
1805.11730 | Learn to Combine Modalities in Multimodal Deep Learning | Combining complementary information from multiple modalities is intuitively appealing for improving the performance of learning-based approaches. However, it is challenging to fully leverage different modalities due to practical challenges such as varying levels of noise and conflicts between modalities. Existing methods do not adopt a joint approach to capturing synergies between the modalities while simultaneously filtering noise and resolving conflicts on a per-sample basis. In this work we propose a novel deep neural network based technique that multiplicatively combines information from different source modalities. Thus the model training process automatically focuses on information from more reliable modalities while reducing emphasis on the less reliable modalities. Furthermore, we propose an extension that multiplicatively combines not only the single-source modalities, but a set of mixed source modalities to better capture cross-modal signal correlations. We demonstrate the effectiveness of our proposed technique by presenting empirical results on three multimodal classification tasks from different domains. The results show consistent accuracy improvements on all three tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 98,993 |
1408.5667 | Dependent Nonparametric Bayesian Group Dictionary Learning for online reconstruction of Dynamic MR images | In this paper, we introduce a dictionary learning based approach applied to the problem of real-time reconstruction of MR image sequences that are highly undersampled in k-space. Unlike traditional dictionary learning, our method integrates both global and patch-wise (local) sparsity information and incorporates some a priori information into the reconstruction process. Moreover, we use a Dependent Hierarchical Beta-process as the prior for the group-based dictionary learning, which adaptively infers the dictionary size and the sparsity of each patch, and ensures that similar patches are manifested in terms of similar dictionary atoms. An efficient numerical algorithm based on the alternating direction method of multipliers (ADMM) is also presented. Through extensive experimental results we show that our proposed method achieves superior reconstruction quality compared to the other state-of-the-art DL-based methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 35,572 |
2004.11020 | SimUSR: A Simple but Strong Baseline for Unsupervised Image Super-resolution | In this paper, we tackle a fully unsupervised super-resolution problem, i.e., neither paired images nor ground truth HR images. We assume that low resolution (LR) images are relatively easy to collect compared to high resolution (HR) images. By allowing multiple LR images, we build a set of pseudo pairs by denoising and downsampling LR images and cast the original unsupervised problem into a supervised learning problem but in one level lower. Though this line of study is easy to think of and thus should have been investigated prior to any complicated unsupervised methods, surprisingly, there are currently none. Even more, we show that this simple method outperforms the state-of-the-art unsupervised method with a dramatically shorter latency at runtime, and significantly reduces the gap to the HR supervised models. We submitted our method in NTIRE 2020 super-resolution challenge and won 1st in PSNR, 2nd in SSIM, and 13th in LPIPS. This simple method should be used as the baseline to beat in the future, especially when multiple LR images are allowed during the training phase. However, even in the zero-shot condition, we argue that this method can serve as a useful baseline to see the gap between supervised and unsupervised frameworks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 173,795 |
2502.14785 | Real-Time Device Reach Forecasting Using HLL and MinHash Data Sketches | Predicting the right number of TVs (Device Reach) in real-time based on user-specified targeting attributes is imperative for running a multi-million dollar ads business. The traditional approach of SQL queries to join billions of records across multiple targeting dimensions is extremely slow. As a workaround, many applications will have an offline process to crunch these numbers and present the results after many hours. In our case, the solution was an offline process taking 24 hours to onboard a customer, resulting in a potential loss of business. To solve this problem, we have built a new real-time prediction system using MinHash and HyperLogLog (HLL) data sketches to compute the device reach at runtime when a user makes a request. However, existing MinHash implementations do not solve the complex problem of multilevel aggregation and intersection. This work will show how we have solved this problem; in addition, we have improved the MinHash algorithm to run 4 times faster using Single Instruction Multiple Data (SIMD) vectorized operations for high speed and accuracy with constant space to process billions of records. Finally, by experiments, we prove that the results are as accurate as the traditional offline prediction system with an acceptable error rate of 5%. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | true | false | 535,976 |
2106.07732 | Learning Audio-Visual Dereverberation | Reverberation not only degrades the quality of speech for human perception, but also severely impacts the accuracy of automatic speech recognition. Prior work attempts to remove reverberation based on the audio modality only. Our idea is to learn to dereverberate speech from audio-visual observations. The visual environment surrounding a human speaker reveals important cues about the room geometry, materials, and speaker location, all of which influence the precise reverberation effects. We introduce Visually-Informed Dereverberation of Audio (VIDA), an end-to-end approach that learns to remove reverberation based on both the observed monaural sound and visual scene. In support of this new task, we develop a large-scale dataset SoundSpaces-Speech that uses realistic acoustic renderings of speech in real-world 3D scans of homes offering a variety of room acoustics. Demonstrating our approach on both simulated and real imagery for speech enhancement, speech recognition, and speaker identification, we show it achieves state-of-the-art performance and substantially improves over audio-only methods. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 241,024 |
0711.2501 | Error Exponents of Erasure/List Decoding Revisited via Moments of Distance Enumerators | The analysis of random coding error exponents pertaining to erasure/list decoding, due to Forney, is revisited. Instead of using Jensen's inequality as well as some other inequalities in the derivation, we demonstrate that an exponentially tight analysis can be carried out by assessing the relevant moments of a certain distance enumerator. The resulting bound has the following advantages: (i) it is at least as tight as Forney's bound, (ii) under certain symmetry conditions associated with the channel and the random coding distribution, it is simpler than Forney's bound in the sense that it involves an optimization over one parameter only (rather than two), and (iii) in certain special cases, like the binary symmetric channel (BSC), the optimum value of this parameter can be found in closed form, and so, there is no need to conduct a numerical search. We have not found yet, however, a numerical example where this new bound is strictly better than Forney's bound. This may provide an additional evidence to support Forney's conjecture that his bound is tight for the average code. We believe that the technique we suggest in this paper can be useful in simplifying, and hopefully also improving, exponential error bounds in other problem settings as well. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 903 |
2011.00399 | Temporally-Continuous Probabilistic Prediction using Polynomial Trajectory Parameterization | A commonly-used representation for motion prediction of actors is a sequence of waypoints (comprising positions and orientations) for each actor at discrete future time-points. While this approach is simple and flexible, it can exhibit unrealistic higher-order derivatives (such as acceleration) and approximation errors at intermediate time steps. To address this issue we propose a simple and general representation for temporally continuous probabilistic trajectory prediction that is based on polynomial trajectory parameterization. We evaluate the proposed representation on supervised trajectory prediction tasks using two large self-driving data sets. The results show realistic higher-order derivatives and better accuracy at interpolated time-points, as well as the benefits of the inferred noise distributions over the trajectories. Extensive experimental studies based on existing state-of-the-art models demonstrate the effectiveness of the proposed approach relative to other representations in predicting the future motions of vehicle, bicyclist, and pedestrian traffic actors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 204,196 |
2208.01847 | Advance sharing of quantum shares for classical secrets | Secret sharing schemes for classical secrets can be classified into classical secret sharing schemes and quantum secret sharing schemes. Classical secret sharing has been known to be able to distribute some shares before a given secret. On the other hand, quantum mechanics extends the capabilities of secret sharing beyond those of classical secret sharing. We propose quantum secret sharing that can distribute some shares before a given secret, with more flexible design of access structures and higher efficiency than classical secret sharing. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 311,294 |
1904.10446 | Generated Loss, Augmented Training, and Multiscale VAE | The variational autoencoder (VAE) framework remains a popular option for training unsupervised generative models, especially for discrete data where generative adversarial networks (GANs) require workarounds to create gradients for the generator. In our work modeling US postal addresses, we show that our discrete VAE with tree recursive architecture demonstrates limited capability of capturing field correlations within structured data, even after overcoming the challenge of posterior collapse with scheduled sampling and tuning of the KL-divergence weight $\beta$. Worse, VAE seems to have difficulty mapping its generated samples to the latent space, as their VAE loss lags behind or even increases during the training process. Motivated by this observation, we show that augmenting training data with generated variants (augmented training) and training a VAE with multiple values of $\beta$ simultaneously (multiscale VAE) both improve the generation quality of VAE. Despite their differences in motivation and emphasis, we show that augmented training and multiscale VAE are actually connected and have similar effects on the model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 128,633 |
1410.0925 | A Framework for the Volumetric Integration of Depth Images | Volumetric models have become a popular representation for 3D scenes in recent years. One of the breakthroughs leading to their popularity was KinectFusion, where the focus is on 3D reconstruction using RGB-D sensors. However, monocular SLAM has since also been tackled with very similar approaches. Representing the reconstruction volumetrically as a truncated signed distance function leads to most of the simplicity and efficiency that can be achieved with GPU implementations of these systems. However, this representation is also memory-intensive and limits the applicability to small scale reconstructions. Several avenues have been explored for overcoming this limitation. With the aim of summarizing them and providing for a fast and flexible 3D reconstruction pipeline, we propose a new, unifying framework called InfiniTAM. The core idea is that individual steps like camera tracking, scene representation and integration of new data can easily be replaced and adapted to the needs of the user. Along with the framework we also provide a set of components for scalable reconstruction: two implementations of camera trackers, based on RGB data and on depth data, two representations of the 3D volumetric data, a dense volume and one based on hashes of subblocks, and an optional module for swapping subblocks in and out of the typically limited GPU memory. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 36,509 |
2108.00552 | PSE-Match: A Viewpoint-free Place Recognition Method with Parallel Semantic Embedding | Accurate localization on autonomous driving cars is essential for autonomy and driving safety, especially for complex urban streets and search-and-rescue subterranean environments where high-accuracy GPS is not available. However, current odometry estimation may introduce drifting problems in long-term navigation without robust global localization. The main challenges involve scene divergence under the interference of dynamic environments and effective perception of observation and object layout variance from different viewpoints. To tackle these challenges, we present PSE-Match, a viewpoint-free place recognition method based on parallel semantic analysis of isolated semantic attributes from 3D point-cloud models. Compared with the original point cloud, the observed variance of semantic attributes is smaller. PSE-Match incorporates a divergence place learning network to capture different semantic attributes in parallel through the spherical harmonics domain. Using both existing benchmark datasets and two in-field collected datasets, our experiments show that the proposed method achieves above 70% average recall with top-one retrieval and above 95% average recall with top-ten retrieval. PSE-Match also demonstrates clear generalization ability with a limited training dataset. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 248,751 |
2305.18387 | Augmenting Character Designers Creativity Using Generative Adversarial Networks | Recent advances in Generative Adversarial Networks (GANs) continue to attract the attention of researchers in different fields due to the wide range of applications devised to take advantage of their key features. Most recent GANs are focused on realism; however, generating hyper-realistic output is not a priority for some domains, as in the case of this work. The generated outcomes are used here as cognitive components to augment character designers' creativity while conceptualizing new characters for different multimedia projects. To select the best-suited GANs for such a creative context, we first present a comparison between different GAN architectures and their performance when trained from scratch on a new visual characters dataset using a single Graphics Processing Unit. We also explore alternative techniques, such as transfer learning and data augmentation, to overcome computational resource limitations, a challenge faced by many researchers in the domain. Additionally, mixed methods are used to evaluate the cognitive value of the generated visuals on character designers' agency in conceptualizing new characters. The results discussed proved highly effective for this context, as demonstrated by early adaptations to the character design process. As an extension of this work, the presented approach will be further evaluated as a novel co-design process between humans and machines to investigate where and how the generated concepts are interacting with and influencing the design process outcome. | true | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 369,006 |
2202.08171 | Capitalization Normalization for Language Modeling with an Accurate and Efficient Hierarchical RNN Model | Capitalization normalization (truecasing) is the task of restoring the correct case (uppercase or lowercase) of noisy text. We propose a fast, accurate and compact two-level hierarchical word-and-character-based recurrent neural network model. We use the truecaser to normalize user-generated text in a Federated Learning framework for language modeling. A case-aware language model trained on this normalized text achieves the same perplexity as a model trained on text with gold capitalization. In a real user A/B experiment, we demonstrate that the improvement translates to reduced prediction error rates in a virtual keyboard application. Similarly, in an ASR language model fusion experiment, we show reduction in uppercase character error rate and word error rate. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 280,786 |
1801.02270 | Perceptual Context in Cognitive Hierarchies | Cognition does not only depend on bottom-up sensor feature abstraction, but also relies on contextual information being passed top-down. Context is higher level information that helps to predict belief states at lower levels. The main contribution of this paper is to provide a formalisation of perceptual context and its integration into a new process model for cognitive hierarchies. Several simple instantiations of a cognitive hierarchy are used to illustrate the role of context. Notably, we demonstrate the use of context in a novel approach to visually track the pose of rigid objects with just a 2D camera. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 87,897 |
2310.08929 | Leveraging Image Augmentation for Object Manipulation: Towards
Interpretable Controllability in Object-Centric Learning | The binding problem in artificial neural networks is actively explored with the goal of achieving human-level recognition skills through the comprehension of the world in terms of symbol-like entities. Especially in the field of computer vision, object-centric learning (OCL) is extensively researched to better understand complex scenes by acquiring object representations or slots. While recent studies in OCL have made strides with complex images or videos, the interpretability and interactivity over object representation remain largely uncharted, still holding promise in the field of OCL. In this paper, we introduce a novel method, Slot Attention with Image Augmentation (SlotAug), to explore the possibility of learning interpretable controllability over slots in a self-supervised manner by utilizing an image augmentation strategy. We also devise the concept of sustainability in controllable slots by introducing iterative and reversible controls over slots with two proposed submethods: Auxiliary Identity Manipulation and Slot Consistency Loss. Extensive empirical studies and theoretical validation confirm the effectiveness of our approach, offering a novel capability for interpretable and sustainable control of object representations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 399,605 |
1811.04577 | Forecasting People's Needs in Hurricane Events from Social Network | Social networks can serve as a valuable communication channel for calls for help, offers of assistance, and the coordination of rescue activities in disasters. Social networks such as Twitter allow users to continuously update relevant information, which is especially useful during a crisis, where the rapidly changing conditions make it crucial to be able to access accurate information promptly. Social media helps those directly affected to inform others of conditions on the ground in real time and thus enables rescue workers to coordinate their efforts more effectively, better meeting the survivors' needs. This paper presents a new sequence-to-sequence framework for forecasting people's needs during disasters using social media and weather data. It consists of two Long Short-Term Memory (LSTM) models, one of which encodes input sequences of weather information while the other acts as a conditional decoder that decodes the encoded vector and forecasts the survivors' needs. Case studies utilizing data collected during Hurricane Sandy in 2012 and Hurricanes Harvey and Irma in 2017 were analyzed and the results compared with those obtained using a statistical n-gram language model and an LSTM generative model. Our proposed sequence-to-sequence method forecasts people's needs more successfully than either of the other models. This new approach shows great promise for enhancing disaster management activities such as evacuation planning and commodity flow management. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 113,123
1809.01093 | Adversarial Attacks on Node Embeddings via Graph Poisoning | The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks. However, despite the proliferation of such methods, there is currently no study of their robustness to adversarial attacks. We provide the first adversarial vulnerability analysis on the widely used family of methods based on random walks. We derive efficient adversarial perturbations that poison the network structure and have a negative effect on both the quality of the embeddings and the downstream tasks. We further show that our attacks are transferable since they generalize to many models and are successful even when the attacker is restricted. | false | false | false | true | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 106,734 |
2003.08737 | A Matlab Toolbox for Feature Importance Ranking | Increasing attention is being paid to feature importance ranking (FIR), in particular when thousands of features can be extracted for intelligent diagnosis and personalized medicine. A large number of FIR approaches have been proposed, while few are integrated for comparison and real-life applications. In this study, a Matlab toolbox is presented and a total of 30 algorithms are collected. Moreover, the toolbox is evaluated on a database of 163 ultrasound images. For each breast mass lesion, 15 features are extracted. To find the optimal subset of features for classification, all combinations of features are tested and a linear support vector machine is used for the malignancy prediction of lesions annotated in ultrasound images. Finally, the effectiveness of FIR is analyzed through performance comparison. The toolbox is online (https://github.com/NicoYuCN/matFIR). In our future work, more FIR methods, feature selection methods and machine learning classifiers will be integrated. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 168,828
1709.08924 | UBSegNet: Unified Biometric Region of Interest Segmentation Network | Digital human identity management can now be seen as a social necessity, as it is essentially required in almost every public sector, such as financial inclusion, security, banking, social networking, etc. Hence, in today's rampantly emerging world with so many adversarial entities, relying on a single biometric trait is too optimistic. In this paper, we have proposed a novel end-to-end Unified Biometric ROI Segmentation Network (UBSegNet) for extracting the region of interest from five different biometric traits, viz. face, iris, palm, knuckle and 4-slap fingerprint. The architecture of the proposed UBSegNet consists of two stages: (i) trait classification and (ii) trait localization. For these stages, we have used a state-of-the-art region-based convolutional neural network (RCNN), comprising three major parts, namely convolutional layers, a region proposal network (RPN), and classification and regression heads. The model has been evaluated over various large, publicly available biometric databases. To the best of our knowledge, this is the first unified architecture proposed for segmenting multiple biometric traits. It has been tested over around 5000 * 5 = 25,000 images (5000 images per trait) and produces very good results. Our work on unified biometric segmentation opens up vast opportunities in the field of authentication systems based on multiple biometric traits. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 81,551
1807.04073 | A punishment voting algorithm based on super categories construction for
acoustic scene classification | In acoustic scene classification research, an audio segment is usually split into multiple samples. Majority voting is then utilized to ensemble the results of the samples. In this paper, we propose a punishment voting algorithm based on the super categories construction method for acoustic scene classification. Specifically, we propose a DenseNet-like model as the base classifier. The base classifier is trained on the CQT spectrograms generated from the raw audio segments. Taking advantage of the results of the base classifier, we propose a super categories construction method using spectral clustering. Super classifiers corresponding to the constructed super categories are further trained. Finally, the super classifiers are utilized to enhance the majority voting of the base classifier through punishment voting. Experiments show that punishment voting clearly improves performance on both the DCASE2017 Development dataset and the LITIS Rouen dataset. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 102,666
2412.16445 | Mixed geometry information regularization for image multiplicative
denoising | This paper focuses on solving the multiplicative gamma denoising problem via a variational model. Variation-based regularization models have been extensively employed in a variety of inverse problem tasks in image processing. However, sufficient geometric priors and efficient algorithms remain very difficult problems in the model design process. To overcome these issues, in this paper we propose a mixed geometry information model, incorporating an area term and a curvature term as prior knowledge. In addition to its ability to effectively remove multiplicative noise, our model is able to preserve edges and prevent staircasing effects. Meanwhile, to address the challenges stemming from the nonlinearity and non-convexity inherent in higher-order regularization, we propose the efficient additive operator splitting (AOS) algorithm and the scalar auxiliary variable (SAV) algorithm. The unconditional stability possessed by these algorithms enables us to use large time steps. The SAV method also shows higher computational accuracy in our model. We employ the second-order SAV algorithm to further speed up the calculation while maintaining accuracy. We demonstrate the effectiveness and efficiency of the model and algorithms through extensive numerical experiments, where the proposed model has better feature- and texture-preserving properties without generating any false information. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 519,518
0804.0611 | Channel State Feedback Schemes for Multiuser MIMO-OFDM Downlink | Channel state feedback schemes for the MIMO broadcast downlink have been widely studied in the frequency-flat case. This work focuses on the more relevant frequency selective case, where some important new aspects emerge. We consider a MIMO-OFDM broadcast channel and compare achievable ergodic rates under three channel state feedback schemes: analog feedback, direction quantized feedback and "time-domain" channel quantized feedback. The first two schemes are direct extensions of previously proposed schemes. The third scheme is novel, and it is directly inspired by rate-distortion theory of Gaussian correlated sources. For each scheme we derive the conditions under which the system achieves full multiplexing gain. The key difference with respect to the widely treated frequency-flat case is that in MIMO-OFDM the frequency-domain channel transfer function is a Gaussian correlated source. The new time-domain quantization scheme takes advantage of the channel frequency correlation structure and outperforms the other schemes. Furthermore, it is by far simpler to implement than complicated spherical vector quantization. In particular, we observe that no structured codebook design and vector quantization is actually needed for efficient channel state information feedback. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,532 |
2205.09256 | Training Vision-Language Transformers from Captions | Vision-Language Transformers can be learned without low-level human labels (e.g. class labels, bounding boxes, etc). Existing work, whether explicitly utilizing bounding boxes or patches, assumes that the visual backbone must first be trained on ImageNet class prediction before being integrated into a multimodal linguistic pipeline. We show that this is not necessary and introduce a new model Vision-Language from Captions (VLC) built on top of Masked Auto-Encoders that does not require this supervision. In fact, in a head-to-head comparison between ViLT, the current state-of-the-art patch-based vision-language transformer which is pretrained with supervised object classification, and our model, VLC, we find that our approach 1. outperforms ViLT on standard benchmarks, 2. provides more interpretable and intuitive patch visualizations, and 3. is competitive with many larger models that utilize ROIs trained on annotated bounding-boxes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 297,203 |
2311.11252 | Submeter-level Land Cover Mapping of Japan | Deep learning has shown promising performance in submeter-level mapping tasks; however, the annotation cost of submeter-level imagery remains a challenge, especially when applied on a large scale. In this paper, we present the first submeter-level land cover mapping of Japan with eight classes, at a relatively low annotation cost. We introduce a human-in-the-loop deep learning framework leveraging OpenEarthMap, a recently introduced benchmark dataset for global submeter-level land cover mapping, with a U-Net model that achieves national-scale mapping with a small amount of additional labeled data. By adding a small amount of labeled data of areas or regions where a U-Net model trained on OpenEarthMap clearly failed and retraining the model, an overall accuracy of 80% was achieved, which is a nearly 16 percentage point improvement after retraining. Using aerial imagery provided by the Geospatial Information Authority of Japan, we create land cover classification maps of eight classes for the entire country of Japan. Our framework, with its low annotation cost and high-accuracy mapping results, demonstrates the potential to contribute to the automatic updating of national-scale land cover mapping using submeter-level optical remote sensing data. The mapping results will be made publicly available. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 408,873
2406.04769 | Diffusion-based Generative Image Outpainting for Recovery of
FOV-Truncated CT Images | Field-of-view (FOV) recovery of truncated chest CT scans is crucial for accurate body composition analysis, which involves quantifying skeletal muscle and subcutaneous adipose tissue (SAT) on CT slices. This, in turn, enables disease prognostication. Here, we present a method for recovering truncated CT slices using generative image outpainting. We train a diffusion model and apply it to truncated CT slices generated by simulating a small FOV. Our model reliably recovers the truncated anatomy and outperforms the previous state-of-the-art despite being trained on 87% less data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 461,833 |
2411.07933 | Prediction of Acoustic Communication Performance for AUVs using Gaussian
Process Classification | Cooperating autonomous underwater vehicles (AUVs) often rely on acoustic communication to coordinate their actions effectively. However, the reliability of underwater acoustic communication decreases as the communication range between vehicles increases. Consequently, teams of cooperating AUVs typically make conservative assumptions about the maximum range at which they can communicate reliably. To address this limitation, we propose a novel approach that involves learning a map representing the probability of successful communication based on the locations of the transmitting and receiving vehicles. This probabilistic communication map accounts for factors such as the range between vehicles, environmental noise, and multi-path effects at a given location. In pursuit of this goal, we investigate the application of Gaussian process binary classification to generate the desired communication map. We specialize existing results to this specific binary classification problem and explore methods to incorporate uncertainty in vehicle location into the mapping process. Furthermore, we compare the prediction performance of the probability communication map generated using binary classification with that of a signal-to-noise ratio (SNR) communication map generated using Gaussian process regression. Our approach is experimentally validated using communication and navigation data collected during trials with a pair of Virginia Tech 690 AUVs. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 507,723 |
1905.00626 | On Linear Learning with Manycore Processors | A new generation of manycore processors is on the rise that offers dozens and more cores on a chip and, in a sense, fuses host processor and accelerator. In this paper we target the efficient training of generalized linear models on these machines. We propose a novel approach for achieving parallelism which we call Heterogeneous Tasks on Homogeneous Cores (HTHC). It divides the problem into multiple fundamentally different tasks, which themselves are parallelized. For evaluation, we design a detailed, architecture-cognizant implementation of our scheme on a recent 72-core Knights Landing processor that is adaptive to the cache, memory, and core structure. Our library efficiently supports dense and sparse datasets as well as 4-bit quantized data for further possible gains in performance. We show benchmarks for Lasso and SVM with different data sets against straightforward parallel implementations and prior software. In particular, for Lasso on dense data, we improve the state-of-the-art by an order of magnitude. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 129,526 |
2308.09300 | V2A-Mapper: A Lightweight Solution for Vision-to-Audio Generation by
Connecting Foundation Models | Building artificial intelligence (AI) systems on top of a set of foundation models (FMs) is becoming a new paradigm in AI research. Their representative and generative abilities learnt from vast amounts of data can be easily adapted and transferred to a wide range of downstream tasks without extra training from scratch. However, leveraging FMs in cross-modal generation remains under-researched when audio modality is involved. On the other hand, automatically generating semantically-relevant sound from visual input is an important problem in cross-modal generation studies. To solve this vision-to-audio (V2A) generation problem, existing methods tend to design and build complex systems from scratch using modestly sized datasets. In this paper, we propose a lightweight solution to this problem by leveraging foundation models, specifically CLIP, CLAP, and AudioLDM. We first investigate the domain gap between the latent space of the visual CLIP and the auditory CLAP models. Then we propose a simple yet effective mapper mechanism (V2A-Mapper) to bridge the domain gap by translating the visual input between CLIP and CLAP spaces. Conditioned on the translated CLAP embedding, pretrained audio generative FM AudioLDM is adopted to produce high-fidelity and visually-aligned sound. Compared to previous approaches, our method only requires a quick training of the V2A-Mapper. We further analyze and conduct extensive experiments on the choice of the V2A-Mapper and show that a generative mapper is better at fidelity and variability (FD) while a regression mapper is slightly better at relevance (CS). Both objective and subjective evaluation on two V2A datasets demonstrate the superiority of our proposed method compared to current state-of-the-art approaches - trained with 86% fewer parameters but achieving 53% and 19% improvement in FD and CS, respectively. 
| false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 386,230 |
2206.08967 | Random Forest of Epidemiological Models for Influenza Forecasting | Forecasting the hospitalizations caused by the Influenza virus is vital for public health planning so that hospitals can be better prepared for an influx of patients. Many forecasting methods have been used in real-time during the Influenza seasons and submitted to the CDC for public communication. The forecasting models range from mechanistic models, and auto-regression models to machine learning models. We hypothesize that we can improve forecasting by using multiple mechanistic models to produce potential trajectories and use machine learning to learn how to combine those trajectories into an improved forecast. We propose a Tree Ensemble model design that utilizes the individual predictors of our baseline model SIkJalpha to improve its performance. Each predictor is generated by changing a set of hyper-parameters. We compare our prospective forecasts deployed for the FluSight challenge (2022) to all the other submitted approaches. Our approach is fully automated and does not require any manual tuning. We demonstrate that our Random Forest-based approach is able to improve upon the forecasts of the individual predictors in terms of mean absolute error, coverage, and weighted interval score. Our method outperforms all other models in terms of the mean absolute error and the weighted interval score based on the mean across all weekly submissions in the current season (2022). Explainability of the Random Forest (through analysis of the trees) enables us to gain insights into how it improves upon the individual predictors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,374 |
2403.01874 | A Survey on Evaluation of Out-of-Distribution Generalization | Machine learning models, while progressively advanced, rely heavily on the IID assumption, which is often unfulfilled in practice due to inevitable distribution shifts. This renders them susceptible and untrustworthy for deployment in risk-sensitive applications. Such a significant problem has consequently spawned various branches of works dedicated to developing algorithms capable of Out-of-Distribution (OOD) generalization. Despite these efforts, much less attention has been paid to the evaluation of OOD generalization, which is also a complex and fundamental problem. Its goal is not only to assess whether a model's OOD generalization capability is strong or not, but also to evaluate where a model generalizes well or poorly. This entails characterizing the types of distribution shifts that a model can effectively address, and identifying the safe and risky input regions given a model. This paper serves as the first effort to conduct a comprehensive review of OOD evaluation. We categorize existing research into three paradigms: OOD performance testing, OOD performance prediction, and OOD intrinsic property characterization, according to the availability of test data. Additionally, we briefly discuss OOD evaluation in the context of pretrained models. In closing, we propose several promising directions for future research in OOD evaluation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,617 |
1912.00011 | Heuristic Strategies in Uncertain Approval Voting Environments | In many collective decision making situations, agents vote to choose an alternative that best represents the preferences of the group. Agents may manipulate the vote to achieve a better outcome by voting in a way that does not reflect their true preferences. In real world voting scenarios, people often do not have complete information about other voter preferences and it can be computationally complex to identify a strategy that will maximize their expected utility. In such situations, it is often assumed that voters will vote truthfully rather than expending the effort to strategize. However, being truthful is just one possible heuristic that may be used. In this paper, we examine the effectiveness of heuristics in single winner and multi-winner approval voting scenarios with missing votes. In particular, we look at heuristics where a voter ignores information about other voting profiles and makes their decisions based solely on how much they like each candidate. In a behavioral experiment, we show that people vote truthfully in some situations and prioritize high utility candidates in others. We examine when these behaviors maximize expected utility and show how the structure of the voting environment affects both how well each heuristic performs and how humans employ these heuristics. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | true | 155,647 |
1304.7607 | A Discrete State Transition Algorithm for Generalized Traveling Salesman
Problem | Generalized traveling salesman problem (GTSP) is an extension of the classical traveling salesman problem (TSP), which is a combinatorial optimization problem and an NP-hard problem. In this paper, an efficient discrete state transition algorithm (DSTA) for GTSP is proposed, where a new local search operator named K-circle, directed by neighborhood information in space, has been introduced to DSTA to shrink the search space and strengthen search ability. A novel robust update mechanism, restore in probability and risk in probability (Double R-Probability), is used in our work to escape from local minima. The proposed algorithm is tested on a set of GTSP instances. Compared with other heuristics, experimental results have demonstrated the effectiveness and strong adaptability of DSTA and also show that DSTA has better search ability than its competitors. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 24,276
2410.07094 | An Approach for Auto Generation of Labeling Functions for Software
Engineering Chatbots | Software engineering (SE) chatbots are increasingly gaining attention for their role in enhancing development processes. At the core of chatbots are the Natural Language Understanding platforms (NLUs), which enable them to comprehend and respond to user queries. Before deploying NLUs, there is a need to train them with labeled data. However, acquiring such labeled data for SE chatbots is challenging due to the scarcity of high-quality datasets. This challenge arises because training SE chatbots requires specialized vocabulary and phrases not found in typical language datasets. Consequently, chatbot developers often resort to manually annotating user queries to gather the data necessary for training effective chatbots, a process that is both time-consuming and resource-intensive. Previous studies propose approaches to support chatbot practitioners in annotating users' posed queries. However, these approaches require human intervention to generate rules, called labeling functions (LFs), that identify and categorize user queries based on specific patterns in the data. To address this issue, we propose an approach to automatically generate LFs by extracting patterns from labeled user queries. We evaluate the effectiveness of our approach by applying it to the queries of four diverse SE datasets (namely AskGit, MSA, Ask Ubuntu, and Stack Overflow) and measure the performance improvement gained from training the NLU on the queries labeled by the generated LFs. We find that the generated LFs effectively label data with AUC scores of up to 85.3%, and NLU's performance improvement of up to 27.2% across the studied datasets. Furthermore, our results show that the number of LFs used to generate LFs affects the labeling performance. We believe that our approach can save time and resources in labeling users' queries, allowing practitioners to focus on core chatbot functionalities. 
| false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | true | 496,464 |
1504.08256 | Manipulation is Harder with Incomplete Votes | The Coalitional Manipulation (CM) problem has been studied extensively in the literature for many voting rules. The CM problem, however, has been studied only in the complete information setting, that is, when the manipulators know the votes of the non-manipulators. A more realistic scenario is an incomplete information setting where the manipulators do not know the exact votes of the non-manipulators but may have some partial knowledge of the votes. In this paper, we study a setting where the manipulators know a partial order for each voter that is consistent with the vote of that voter. In this setting, we introduce and study two natural computational problems - (1) Weak Manipulation (WM) problem where the manipulators wish to vote in a way that makes their preferred candidate win in at least one extension of the partial votes of the non-manipulators; (2) Strong Manipulation (SM) problem where the manipulators wish to vote in a way that makes their preferred candidate win in all possible extensions of the partial votes of the non-manipulators. We study the computational complexity of the WM and the SM problems for commonly used voting rules such as plurality, veto, k-approval, k-veto, maximin, Copeland, and Bucklin. Our key finding is that, barring a few exceptions, manipulation becomes a significantly harder problem in the setting of incomplete votes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 42,637
1802.04350 | Cost-Aware Learning for Improved Identifiability with Multiple
Experiments | We analyze the sample complexity of learning from multiple experiments where the experimenter has a total budget for obtaining samples. In this problem, the learner should choose a hypothesis that performs well with respect to multiple experiments and their related data distributions. Each collected sample is associated with a cost which depends on the particular experiment. In our setup, a learner performs $m$ experiments, while incurring a total cost $C$. We first show that learning from multiple experiments allows us to improve identifiability. Additionally, by using a Rademacher complexity approach, we show that the gap between the training and generalization error is $O(C^{-1/2})$. We also provide some examples for linear prediction, two-layer neural networks and kernel methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,197
2306.03978 | Büyük dil modellerinin Türkçe verisetleri ile eğitilmesi ve ince ayarlanması | Large language models have advanced enormously, attracted broad attention, and are undergoing a phase of intense research. Some of the developed models and training datasets have been made openly accessible. Hence, these may be further fine-tuned with some techniques to obtain specialized models for specific tasks. When it comes to the Turkish language, open-access models do not provide satisfactory coverage. This is also observed in the published datasets. In this work, we propose some ideas to mitigate this issue: creating large Turkish datasets, training LLMs with these, and fine-tuning pre-trained models with Turkish inputs. We report our findings on Turkish-based training with the problems encountered along the way. We conclude with the outcomes of these experiments and propose ideas for further work. -- Büyük dil modelleri inanılmaz ölçüde gelişmekte, büyük ilgi toplayarak ve üzerlerinde yoğun araştırmaların yapıldığı bir dönemdedirler. Geliştirilen modeller ve eğitimde kullanılan verisetlerinden bazıları açık erişimli olarak sunulmaktadır. Böylece ince ayarlama teknikleri uygulayarak özelleşmiş görevler için çalışabilir modeller elde edilmektedir. Türkçe söz konusu olduğunda bu modellerin kapsayıcılığı yeterli düzeyde değildir. Bu durum, yayımlanan verisetlerinde de gözlemlenebilir. Bunu aşmanın yolları Türkçe içerikli büyük verisetlerinin oluşturulması, büyük dil modellerinin bunlarla eğitilmesi ve önceden eğitilmiş modellerin Türkçe girdilerle ince ayarlanmaları olabilir. Bu çalışmada açık erişimli dil modelleri ve verisetleri üzerinde durulmakta ve Türkçe temelli bazı deneyler, karşılaşılan sorunlar ve sonuçlar irdelenmektedir.
| false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 371,551 |
2502.04420 | KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache
Quantization for Efficient and Nearly Lossless LLM Inference | KV cache quantization can improve the inference throughput and latency of Large Language Models (LLMs) in long-context and large-batch scenarios while preserving their effectiveness. However, current methods have three unsolved issues: they overlook layer-wise sensitivity to KV cache quantization, incur high overhead for online fine-grained decision-making, and offer low flexibility across different LLMs and constraints. Therefore, we thoroughly analyze the inherent correlation of layer-wise transformer attention patterns with KV cache quantization errors and study why the key cache is more important than the value cache for quantization error reduction. We further propose a simple yet effective framework, KVTuner, to adaptively search for the optimal hardware-friendly layer-wise KV quantization precision pairs for coarse-grained KV cache with multi-objective optimization, and to directly utilize the offline-searched configurations during online inference. To reduce the computational cost of offline calibration, we utilize intra-layer KV precision pair pruning and inter-layer clustering to reduce the search space. Experimental results show that we can achieve nearly lossless 3.25-bit mixed precision KV cache quantization for LLMs like Llama-3.1-8B-Instruct and 4.0-bit for sensitive models like Qwen2.5-7B-Instruct on mathematical reasoning tasks. The maximum inference throughput can be improved by 38.3% compared with KV8 quantization over various context lengths. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 531,156
2311.05602 | Reconstructing Objects in-the-wild for Realistic Sensor Simulation | Reconstructing objects from real world data and rendering them at novel views is critical to bringing realism, diversity and scale to simulation for robotics training and testing. In this work, we present NeuSim, a novel approach that estimates accurate geometry and realistic appearance from sparse in-the-wild data captured at distance and at limited viewpoints. Towards this goal, we represent the object surface as a neural signed distance function and leverage both LiDAR and camera sensor data to reconstruct smooth and accurate geometry and normals. We model the object appearance with a robust physics-inspired reflectance representation effective for in-the-wild data. Our experiments show that NeuSim has strong view synthesis performance on challenging scenarios with sparse training views. Furthermore, we showcase composing NeuSim assets into a virtual world and generating realistic multi-sensor data for evaluating self-driving perception models. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 406,637 |
2007.06850 | A model to support collective reasoning: Formalization, analysis and
computational assessment | Inspired by e-participation systems, in this paper we propose a new model to represent human debates and methods to obtain collective conclusions from them. This model overcomes drawbacks of existing approaches by allowing users to introduce new pieces of information into the discussion, to relate them to existing pieces, and also to express their opinion on the pieces proposed by other users. In addition, our model does not assume that users' opinions are rational in order to extract information from them, an assumption that significantly limits current approaches. Instead, we define a weaker notion of rationality that characterises coherent opinions, and we consider different scenarios based on the coherence of individual opinions and the level of consensus that users have on the debate structure. Considering these two factors, we analyse the outcomes of different opinion aggregation functions that compute a collective decision based on the individual opinions and the debate structure. In particular, we demonstrate that aggregated opinions can be coherent even if there is a lack of consensus and individual opinions are not coherent. We conclude our analysis with a computational evaluation demonstrating that collective opinions can be computed efficiently for real-sized debates. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 187,150
2403.16742 | A Branch and Bound method for the exact parameter identification of the
PK/PD model for anesthetic drugs | We address the problem of parameter identification for the standard pharmacokinetic/pharmacodynamic (PK/PD) model for anesthetic drugs. Our main contribution is the development of a global optimization method that guarantees finding the parameters that minimize the one-step ahead prediction error. The method is based on a branch-and-bound algorithm, that can be applied to solve a more general class of nonlinear regression problems. We present some simulation results, based on a dataset of twelve patients. In these simulations, we are always able to identify the exact parameters, despite the non-convexity of the overall identification problem. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 441,174 |
2401.04722 | U-Mamba: Enhancing Long-range Dependency for Biomedical Image
Segmentation | Convolutional Neural Networks (CNNs) and Transformers have been the most popular architectures for biomedical image segmentation, but both of them have limited ability to handle long-range dependencies because of inherent locality or computational complexity. To address this challenge, we introduce U-Mamba, a general-purpose network for biomedical image segmentation. Inspired by the State Space Sequence Models (SSMs), a new family of deep sequence models known for their strong capability in handling long sequences, we design a hybrid CNN-SSM block that integrates the local feature extraction power of convolutional layers with the abilities of SSMs for capturing the long-range dependency. Moreover, U-Mamba enjoys a self-configuring mechanism, allowing it to automatically adapt to various datasets without manual intervention. We conduct extensive experiments on four diverse tasks, including the 3D abdominal organ segmentation in CT and MR images, instrument segmentation in endoscopy images, and cell segmentation in microscopy images. The results reveal that U-Mamba outperforms state-of-the-art CNN-based and Transformer-based segmentation networks across all tasks. This opens new avenues for efficient long-range dependency modeling in biomedical image analysis. The code, models, and data are publicly available at https://wanglab.ai/u-mamba.html. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 420,515 |
1406.0085 | Cooperative Control of Linear Multi-Agent Systems via Distributed Output
Regulation and Transient Synchronization | A wide range of multi-agent coordination problems including reference tracking and disturbance rejection requirements can be formulated as a cooperative output regulation problem. The general framework captures typical problems such as output synchronization, leader-follower synchronization, and many more. In the present paper, we propose a novel distributed regulator for groups of identical and non-identical linear agents. We consider global external signals affecting all agents and local external signals affecting only individual agents in the group. Both signal types may contain references and disturbances. Our main contribution is a novel coupling among the agents based on their transient state components or estimates thereof in the output feedback case. This coupling achieves transient synchronization in order to improve the cooperative behavior of the group in transient phases and guarantee a desired decay rate of the synchronization error. This leads to a cooperative reaction of the group on local disturbances acting on individual agents. The effectiveness of the proposed distributed regulator is illustrated by a vehicle platooning example and a coordination example for a group of four non-identical 3-DoF helicopter models. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 33,521 |
2011.01893 | Iterative Best Response for Multi-Body Asset-Guarding Games | We present a numerical approach to finding optimal trajectories for players in a multi-body, asset-guarding game with nonlinear dynamics and non-convex constraints. Using the Iterative Best Response (IBR) scheme, we solve for each player's optimal strategy assuming the other players' trajectories are known and fixed. Leveraging recent advances in Sequential Convex Programming (SCP), we use SCP as a subroutine within the IBR algorithm to efficiently solve an approximation of each player's constrained trajectory optimization problem. We apply the approach to an asset-guarding game example involving multiple pursuers and a single evader (i.e., n-versus-1 engagements). Resulting evader trajectories are tested in simulation to verify successful evasion against pursuers using conventional intercept guidance laws. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 204,750 |
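The iterative best response scheme in the abstract above can be sketched on a toy two-player quadratic game. The quadratic costs and their closed-form best responses are illustrative assumptions; in the paper, each best response is a constrained trajectory optimization solved with sequential convex programming.

```python
# Iterative Best Response (IBR) on a toy two-player quadratic game.
# Player i minimizes f_i(x_i, x_j) = x_i^2 + b*x_i*x_j - 2*a_i*x_i,
# whose exact best response is x_i = a_i - b*x_j/2 (set df_i/dx_i = 0).
# This closed form stands in for the paper's SCP-solved trajectory subproblems.

def best_response(a_i, b, x_other):
    return a_i - b * x_other / 2.0

def ibr(a1, a2, b, iters=100):
    x1 = x2 = 0.0
    for _ in range(iters):
        x1 = best_response(a1, b, x2)  # player 1 replies to the fixed x2
        x2 = best_response(a2, b, x1)  # player 2 replies to the updated x1
    return x1, x2

x1, x2 = ibr(a1=1.0, a2=2.0, b=0.5)
# At a Nash equilibrium, neither player can improve unilaterally:
assert abs(x1 - best_response(1.0, 0.5, x2)) < 1e-9
assert abs(x2 - best_response(2.0, 0.5, x1)) < 1e-9
```

For |b| < 2 the alternating best-response map is a contraction, so the iteration converges to the unique Nash equilibrium; with stronger coupling IBR can cycle, which is one reason convergence must be checked in practice.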
1705.09368 | Pose Guided Person Image Generation | This paper proposes the novel Pose Guided Person Generation Network (PG$^2$) that can synthesize person images in arbitrary poses, based on an image of that person and a novel pose. Our generation framework PG$^2$ utilizes the pose information explicitly and consists of two key stages: pose integration and image refinement. In the first stage the condition image and the target pose are fed into a U-Net-like network to generate an initial but coarse image of the person with the target pose. The second stage then refines the initial and blurry result by training a U-Net-like generator in an adversarial way. Extensive experimental results on both 128$\times$64 re-identification images and 256$\times$256 fashion photos show that our model generates high-quality person images with convincing details. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 74,180
1809.09658 | Non-native children speech recognition through transfer learning | This work deals with non-native children's speech and investigates both multi-task and transfer learning approaches to adapt a multi-language Deep Neural Network (DNN) to speakers, specifically children, learning a foreign language. The application scenario is characterized by young students learning English and German and reading sentences in these second-languages, as well as in their mother language. The paper analyzes and discusses techniques for training effective DNN-based acoustic models starting from children native speech and performing adaptation with limited non-native audio material. A multi-lingual model is adopted as baseline, where a common phonetic lexicon, defined in terms of the units of the International Phonetic Alphabet (IPA), is shared across the three languages at hand (Italian, German and English); DNN adaptation methods based on transfer learning are evaluated on significant non-native evaluation sets. Results show that the resulting non-native models allow a significant improvement with respect to a mono-lingual system adapted to speakers of the target language. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 108,750 |
2010.07002 | Fast meningioma segmentation in T1-weighted MRI volumes using a
lightweight 3D deep learning architecture | Automatic and consistent meningioma segmentation in T1-weighted MRI volumes and corresponding volumetric assessment is of use for diagnosis, treatment planning, and tumor growth evaluation. In this paper, we optimized the segmentation and processing speed performances using a large number of both surgically treated meningiomas and untreated meningiomas followed at the outpatient clinic. We studied two different 3D neural network architectures: (i) a simple encoder-decoder similar to a 3D U-Net, and (ii) a lightweight multi-scale architecture (PLS-Net). In addition, we studied the impact of different training schemes. For the validation studies, we used 698 T1-weighted MR volumes from St. Olav University Hospital, Trondheim, Norway. The models were evaluated in terms of detection accuracy, segmentation accuracy and training/inference speed. While both architectures reached a similar Dice score of 70% on average, the PLS-Net was more accurate with an F1-score of up to 88%. The highest accuracy was achieved for the largest meningiomas. Speed-wise, the PLS-Net architecture tended to converge in about 50 hours while 130 hours were necessary for U-Net. Inference with PLS-Net takes less than a second on GPU and about 15 seconds on CPU. Overall, with the use of mixed precision training, it was possible to train competitive segmentation models in a relatively short amount of time using the lightweight PLS-Net architecture. In the future, the focus should be brought toward the segmentation of small meningiomas (less than 2ml) to improve clinical relevance for automatic and early diagnosis as well as speed of growth estimates. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 200,671 |
1503.00796 | On the Convergence and Performance of MF Precoding in Distributed
Massive MU-MIMO Systems | In this paper, we analyze both the rate of convergence and the performance of a matched-filter (MF) precoder in a massive multi-user (MU) multiple-input-multiple-output (MIMO) system, with the aim of determining the impact of distributing the transmit antennas into multiple clusters. We consider cases of transmit spatial correlation, unequal link gains and imperfect channel state information (CSI). Furthermore, we derive a MF signal-to-interference-plus-noise-ratio (SINR) limit as both the number of transmit antennas and the number of users tend to infinity. In our results, we show that both the rate of convergence and the performance are strongly dependent on spatial correlation. In the presence of spatial correlation, distributing the antennas into multiple clusters renders significant gains over a co-located antenna array scenario. In uncorrelated scenarios, a co-located antenna cluster has a marginally better mean per-user SINR performance due to its superior single-user signal-to-noise-ratio (SNR) regime, i.e., when a user is close to the base station (BS), the links between the user and all transmit antennas become strong. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,751
2310.07794 | CRITERIA: a New Benchmarking Paradigm for Evaluating Trajectory
Prediction Models for Autonomous Driving | Benchmarking is a common method for evaluating trajectory prediction models for autonomous driving. Existing benchmarks rely on datasets, which are biased towards more common scenarios, such as cruising, and distance-based metrics that are computed by averaging over all scenarios. Following such a regimen provides little insight into the properties of the models, both in terms of how well they can handle different scenarios and how admissible and diverse their outputs are. There exist a number of complementary metrics designed to measure the admissibility and diversity of trajectories; however, they suffer from biases, such as the length of trajectories. In this paper, we propose a new benChmarking paRadIgm for evaluaTing trajEctoRy predIction Approaches (CRITERIA). Particularly, we propose 1) a method for extracting driving scenarios at varying levels of specificity according to the structure of the roads, models' performance, and data properties for fine-grained ranking of prediction models; 2) a set of new bias-free metrics for measuring diversity, by incorporating the characteristics of a given scenario, and admissibility, by considering the structure of roads and kinematic compliancy, motivated by real-world driving constraints. 3) Using the proposed benchmark, we conduct extensive experimentation on a representative set of the prediction models using the large scale Argoverse dataset. We show that the proposed benchmark can produce a more accurate ranking of the models and serve as a means of characterizing their behavior. We further present ablation studies to highlight contributions of different elements that are used to compute the proposed metrics. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 399,122
1805.00982 | k-SVRG: Variance Reduction for Large Scale Optimization | Variance reduced stochastic gradient (SGD) methods converge significantly faster than the vanilla SGD counterpart. However, these methods are not very practical on large scale problems, as they either i) require frequent passes over the full data to recompute gradients---without making any progress during this time (like for SVRG), or ii) require additional memory that can surpass the size of the input problem (like for SAGA). In this work, we propose $k$-SVRG that addresses these issues by making best use of the \emph{available} memory and minimizes the stalling phases without progress. We prove linear convergence of $k$-SVRG on strongly convex problems and convergence to stationary points on non-convex problems. Numerical experiments show the effectiveness of our method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 96,554
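The SVRG-style variance-reduced update that $k$-SVRG modifies can be sketched on one-dimensional least squares. The data, step size, and epoch counts below are invented for illustration, and the snapshot/memory bookkeeping that distinguishes $k$-SVRG from plain SVRG is omitted.

```python
import random

# SVRG-style update on 1-D least squares: minimize (1/n) * sum (w*x_i - y_i)^2.
# Per-sample gradient: g_i(w) = 2*x_i*(w*x_i - y_i).
random.seed(0)
xs = [random.uniform(0.5, 1.5) for _ in range(200)]
ys = [3.0 * x + random.gauss(0.0, 0.1) for x in xs]
n = len(xs)

def grad_i(w, i):
    return 2.0 * xs[i] * (w * xs[i] - ys[i])

def full_grad(w):
    return sum(grad_i(w, i) for i in range(n)) / n

w, step = 0.0, 0.05
for epoch in range(30):
    w_snap = w                  # snapshot point (recomputed once per epoch)
    mu = full_grad(w_snap)      # full gradient at the snapshot
    for _ in range(n):
        i = random.randrange(n)
        # variance-reduced gradient: unbiased, and its variance vanishes
        # as both w and w_snap approach the optimum
        v = grad_i(w, i) - grad_i(w_snap, i) + mu
        w -= step * v

# closed-form least-squares solution for comparison
w_star = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
assert abs(w - w_star) < 1e-2
```

The control variate `grad_i(w, i) - grad_i(w_snap, i) + mu` has the same expectation as the plain stochastic gradient, but its variance shrinks near the optimum, which is what permits a constant step size.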
2305.09584 | Revisiting Proprioceptive Sensing for Articulated Object Manipulation | Robots that assist humans will need to interact with articulated objects such as cabinets or microwaves. Early work on creating systems for doing so used proprioceptive sensing to estimate joint mechanisms during contact. However, nowadays, almost all systems use only vision and no longer consider proprioceptive information during contact. We believe that proprioceptive information during contact is a valuable source of information and did not find clear motivation for not using it in the literature. Therefore, in this paper, we create a system that, starting from a given grasp, uses proprioceptive sensing to open cabinets with a position-controlled robot and a parallel gripper. We perform a qualitative evaluation of this system, where we find that slip between the gripper and handle limits the performance. Nonetheless, we find that the system already performs quite well. This poses the question: should we make more use of proprioceptive information during contact in articulated object manipulation systems, or is it not worth the added complexity, and can we manage with vision alone? We do not have an answer to this question, but we hope to spark some discussion on the matter. The codebase and videos of the system are available at https://tlpss.github.io/revisiting-proprioception-for-articulated-manipulation/. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 364,692 |
2004.06957 | Self-Supervised training for blind multi-frame video denoising | We propose a self-supervised approach for training multi-frame video denoising networks. These networks predict frame t from a window of frames around t. Our self-supervised approach benefits from the video temporal consistency by penalizing a loss between the predicted frame t and a neighboring target frame, which are aligned using an optical flow. We use the proposed strategy for online internal learning, where a pre-trained network is fine-tuned to denoise a new unknown noise type from a single video. After a few frames, the proposed fine-tuning reaches and sometimes surpasses the performance of a state-of-the-art network trained with supervision. In addition, for a wide range of noise types, it can be applied blindly without knowing the noise distribution. We demonstrate this by showing results on blind denoising of different synthetic and realistic noises. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 172,648 |
1711.04226 | AON: Towards Arbitrarily-Oriented Text Recognition | Recognizing text from natural images is a hot research topic in computer vision due to its various applications. Despite the enduring research of several decades on optical character recognition (OCR), recognizing texts from natural images is still a challenging task. This is because scene texts are often in irregular (e.g. curved, arbitrarily-oriented or seriously distorted) arrangements, which have not yet been well addressed in the literature. Existing methods on text recognition mainly work with regular (horizontal and frontal) texts and cannot be trivially generalized to handle irregular texts. In this paper, we develop the arbitrary orientation network (AON) to directly capture the deep features of irregular texts, which are combined into an attention-based decoder to generate character sequences. The whole network can be trained end-to-end by using only images and word-level annotations. Extensive experiments on various benchmarks, including the CUTE80, SVT-Perspective, IIIT5k, SVT and ICDAR datasets, show that the proposed AON-based method achieves state-of-the-art performance on irregular datasets, and is comparable to major existing methods on regular datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 84,358
2406.05376 | Adversarial flows: A gradient flow characterization of adversarial
attacks | A popular method to perform adversarial attacks on neural networks is the so-called fast gradient sign method and its iterative variant. In this paper, we interpret this method as an explicit Euler discretization of a differential inclusion, and we also show convergence of the discretization to the associated gradient flow. To do so, we consider the concept of $p$-curves of maximal slope in the case $p=\infty$. We prove existence of $\infty$-curves of maximal slope and derive an alternative characterization via differential inclusions. Furthermore, we also consider Wasserstein gradient flows for potential energies, where we show that curves in the Wasserstein space can be characterized by a representing measure on the space of curves in the underlying Banach space which fulfill the differential inclusion. The application of our theory to the finite-dimensional setting is twofold: On the one hand, we show that a whole class of normalized gradient descent methods (in particular signed gradient descent) converge, up to subsequences, to the flow when sending the step size to zero. On the other hand, in the distributional setting, we show that the inner optimization task of the adversarial training objective can be characterized via $\infty$-curves of maximal slope on an appropriate optimal transport space. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 462,110
2402.05906 | Risk-Sensitive Multi-Agent Reinforcement Learning in Network Aggregative
Markov Games | Classical multi-agent reinforcement learning (MARL) assumes risk neutrality and complete objectivity for agents. However, in settings where agents need to consider or model human economic or social preferences, a notion of risk must be incorporated into the RL optimization problem. This will be of greater importance in MARL where other human or non-human agents are involved, possibly with their own risk-sensitive policies. In this work, we consider risk-sensitive and non-cooperative MARL with cumulative prospect theory (CPT), a non-convex risk measure and a generalization of coherent measures of risk. CPT is capable of explaining loss aversion in humans and their tendency to overestimate/underestimate small/large probabilities. We propose a distributed sampling-based actor-critic (AC) algorithm with CPT risk for network aggregative Markov games (NAMGs), which we call Distributed Nested CPT-AC. Under a set of assumptions, we prove the convergence of the algorithm to a subjective notion of Markov perfect Nash equilibrium in NAMGs. The experimental results show that subjective CPT policies obtained by our algorithm can be different from the risk-neutral ones, and agents with a higher loss aversion are more inclined to socially isolate themselves in an NAMG. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 428,053 |
2204.09172 | Node Deployment in Heterogeneous Rayleigh Fading Sensor Networks | We study a hierarchical heterogeneous Rayleigh fading wireless sensor network (WSN) in which sensor nodes surveil a region of interest (RoI) and use access points (APs) as relays to transmit their sensed information to base stations (BSs). By considering both large-scale path-loss signal attenuation and small-scale signal variation due to Rayleigh fading, we formulate the node deployment problem as an optimization problem intended to minimize the network's wireless communication power consumption. Given ergodic capacity constraints on all wireless links, we study the necessary conditions for an optimal AP and BS deployment. These necessary conditions are then assembled in the form of an iterative algorithm to deploy nodes. Finally, we establish the efficacy and superiority of our proposed node deployment algorithm against similar methods in the literature. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 292,338 |
2302.06025 | Statistical Complexity and Optimal Algorithms for Non-linear Ridge
Bandits | We consider the sequential decision-making problem where the mean outcome is a non-linear function of the chosen action. Compared with the linear model, two curious phenomena arise in non-linear models: first, in addition to the "learning phase" with a standard parametric rate for estimation or regret, there is a "burn-in period" with a fixed cost determined by the non-linear function; second, achieving the smallest burn-in cost requires new exploration algorithms. For a special family of non-linear functions, named ridge functions in the literature, we derive upper and lower bounds on the optimal burn-in cost and, in addition, on the entire learning trajectory during the burn-in period via differential equations. In particular, a two-stage algorithm that first finds a good initial action and then treats the problem as locally linear is statistically optimal. In contrast, several classical algorithms, such as UCB and algorithms relying on regression oracles, are provably suboptimal. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 345,259
2102.01767 | Automatic analysis of artistic paintings using information-based
measures | The artistic community is increasingly relying on automatic computational analysis for authentication and classification of artistic paintings. In this paper, we identify hidden patterns and relationships present in artistic paintings by analysing their complexity, a measure that quantifies the sum of characteristics of an object. Specifically, we apply Normalized Compression (NC) and the Block Decomposition Method (BDM) to a dataset of 4,266 paintings from 91 authors and examine the potential of these information-based measures as descriptors of artistic paintings. Both measures consistently described the equivalent types of paintings, authors, and artistic movements. Moreover, combining the NC with a measure of the roughness of the paintings creates an efficient stylistic descriptor. Furthermore, by quantifying the local information of each painting, we define a fingerprint that describes critical information regarding the artists' style, their artistic influences, and shared techniques. More fundamentally, this information describes how each author typically composes and distributes the elements across the canvas and, therefore, how their work is perceived. Finally, we demonstrate that regional complexity and two-point height difference correlation function are useful auxiliary features that improve current methodologies in style and author classification of artistic paintings. The whole study is supported by an extensive website (http://panther.web.ua.pt) for fast author characterization and authentication. | false | false | false | false | false | false | true | false | false | true | false | true | false | false | false | false | false | false | 218,216 |
2007.12913 | NoPropaganda at SemEval-2020 Task 11: A Borrowed Approach to Sequence
Tagging and Text Classification | This paper describes our contribution to SemEval-2020 Task 11: Detection Of Propaganda Techniques In News Articles. We start with simple LSTM baselines and move to an autoregressive transformer decoder to predict long continuous propaganda spans for the first subtask. We also adopt an approach from relation extraction by enveloping the spans mentioned above with special tokens for the second subtask of propaganda technique classification. Our models report an F-score of 44.6% and a micro-averaged F-score of 58.2% for these tasks, respectively. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 188,961
2405.08486 | Gradient Boosting Mapping for Dimensionality Reduction and Feature
Extraction | A fundamental problem in supervised learning is to find a good set of features or distance measures. If the new set of features is of lower dimensionality and can be obtained by a simple transformation of the original data, they can make the model understandable, reduce overfitting, and even help to detect distribution drift. We propose a supervised dimensionality reduction method, Gradient Boosting Mapping (GBMAP), where the outputs of weak learners -- defined as one-layer perceptrons -- define the embedding. We show that the embedding coordinates provide better features for the supervised learning task, making simple linear models competitive with state-of-the-art regressors and classifiers. We also use the embedding to find a principled distance measure between points. The features and distance measures automatically ignore directions irrelevant to the supervised learning task. We also show that we can reliably detect out-of-distribution data points with potentially large regression or classification errors. GBMAP is fast and works in seconds for datasets of a million data points or hundreds of features. As a bonus, GBMAP provides regression and classification performance comparable to state-of-the-art supervised learning methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 454,111
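The boosted-embedding idea in the GBMAP abstract above can be caricatured as follows. Random fixed ReLU directions stand in for GBMAP's fitted one-layer perceptrons, and the data and hyperparameters are invented for illustration.

```python
import random

# Gradient-boosting-style embedding sketch: each weak learner is a fixed
# random ReLU unit h_k(x) = max(0, a_k*x + b_k); boosting fits a scalar
# coefficient c_k to the current residual, and the embedding of x is the
# vector (c_1*h_1(x), ..., c_K*h_K(x)).  GBMAP also fits a_k and b_k.
random.seed(1)
xs = [random.uniform(-2, 2) for _ in range(300)]
ys = [x * x for x in xs]  # a non-linear regression target

def relu(z):
    return z if z > 0 else 0.0

learners, preds = [], [0.0] * len(xs)
for _ in range(50):
    a, b = random.uniform(-1, 1), random.uniform(-2, 2)
    h = [relu(a * x + b) for x in xs]
    resid = [y - p for y, p in zip(ys, preds)]
    denom = sum(v * v for v in h)
    # least-squares coefficient for this weak learner on the residual
    c = sum(r * v for r, v in zip(resid, h)) / denom if denom else 0.0
    learners.append((a, b, c))
    preds = [p + c * v for p, v in zip(preds, h)]

def embed(x):
    # supervised feature map induced by the ensemble
    return [c * relu(a * x + b) for a, b, c in learners]

mse_boost = sum((y - p) ** 2 for y, p in zip(ys, preds)) / len(xs)
mean_y = sum(ys) / len(ys)
mse_base = sum((y - mean_y) ** 2 for y in ys) / len(ys)
assert mse_boost < mse_base   # ensemble beats the mean predictor
assert len(embed(0.5)) == 50  # one embedding coordinate per weak learner
```

Because each coefficient is least-squares optimal for its step, the training error is non-increasing; the embedding coordinates are exactly the weak learners' contributions, so a linear model on them reproduces the boosted predictor.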
2112.14300 | Time-Incremental Learning from Data Using Temporal Logics | Real-time and human-interpretable decision-making in cyber-physical systems is a significant but challenging task, which usually requires predictions of possible future events from limited data. In this paper, we introduce a time-incremental learning framework: given a dataset of labeled signal traces with a common time horizon, we propose a method to predict the label of a signal that is received incrementally over time, referred to as prefix signal. Prefix signals are the signals that are being observed as they are generated, and their time length is shorter than the common horizon of signals. We present a novel decision-tree based approach to generate a finite number of Signal Temporal Logic (STL) specifications from the given dataset, and construct a predictor based on them. Each STL specification, as a binary classifier of time-series data, captures the temporal properties of the dataset over time. The predictor is constructed by assigning time-variant weights to the STL formulas. The weights are learned by using neural networks, with the goal of minimizing the misclassification rate for the prefix signals defined over the given dataset. The learned predictor is used to predict the label of a prefix signal, by computing the weighted sum of the robustness of the prefix signal with respect to each STL formula. The effectiveness and classification performance of our algorithm are evaluated on urban-driving and naval-surveillance case studies. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | true | 273,485
0808.0987 | A new graph perspective on max-min fairness in Gaussian parallel
channels | In this work we are concerned with the problem of achieving max-min fairness in Gaussian parallel channels with respect to a general performance function, including channel capacity or decoding reliability as special cases. As our central results, we characterize the laws which determine the value of the achievable max-min fair performance as a function of channel sharing policy and power allocation (to channels and users). In particular, we show that the max-min fair performance behaves as a specialized version of the Lovasz function, or Delsarte bound, of a certain graph induced by channel sharing combinatorics. We also prove that, in addition to such graph, merely a certain 2-norm distance dependent on the allowable power allocations and used performance functions, is sufficient for the characterization of max-min fair performance up to some candidate interval. Our results show also a specific role played by odd cycles in the graph induced by the channel sharing policy and we present an interesting relation between max-min fairness in parallel channels and optimal throughput in an associated interference channel. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 2,175 |
2407.05645 | OneDiff: A Generalist Model for Image Difference Captioning | In computer vision, Image Difference Captioning (IDC) is crucial for accurately describing variations between closely related images. Traditional IDC methods often rely on specialist models, which restrict their applicability across varied contexts. This paper introduces the OneDiff model, a novel generalist approach that utilizes a robust vision-language model architecture, integrating a siamese image encoder with a Visual Delta Module. This innovative configuration allows for the precise detection and articulation of fine-grained differences between image pairs. OneDiff is trained through a dual-phase strategy, encompassing Coupled Sample Training and multi-task learning across a diverse array of data types, supported by our newly developed DiffCap Dataset. This dataset merges real-world and synthetic data, enhancing the training process and bolstering the model's robustness. Extensive testing on diverse IDC benchmarks, such as Spot-the-Diff, Image-Editing-Request, and Birds-to-Words, shows that OneDiff consistently outperforms existing state-of-the-art models in accuracy and adaptability, achieving improvements of up to 97% CIDEr points on average. By setting a new benchmark in IDC, OneDiff paves the way for more versatile and effective applications in detecting and describing visual differences. The code, models, and data will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 471,069
2302.00358 | Bandit Convex Optimisation Revisited: FTRL Achieves $\tilde{O}(t^{1/2})$
Regret | We show that a kernel estimator using multiple function evaluations can be easily converted into a sampling-based bandit estimator with expectation equal to the original kernel estimate. Plugging such a bandit estimator into the standard FTRL algorithm yields a bandit convex optimisation algorithm that achieves $\tilde{O}(t^{1/2})$ regret against adversarial time-varying convex loss functions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 343,190 |
2004.04400 | Self-Supervised 3D Human Pose Estimation via Part Guided Novel Image
Synthesis | Camera captured human pose is an outcome of several sources of variation. Performance of supervised 3D pose estimation approaches comes at the cost of dispensing with variations, such as shape and appearance, that may be useful for solving other related tasks. As a result, the learned model not only inculcates task-bias but also dataset-bias because of its strong reliance on the annotated samples, which also holds true for weakly-supervised models. Acknowledging this, we propose a self-supervised learning framework to disentangle such variations from unlabeled video frames. We leverage the prior knowledge on human skeleton and poses in the form of a single part-based 2D puppet model, human pose articulation constraints, and a set of unpaired 3D poses. Our differentiable formalization, bridging the representation gap between the 3D pose and spatial part maps, not only facilitates discovery of interpretable pose disentanglement but also allows us to operate on videos with diverse camera movements. Qualitative results on unseen in-the-wild datasets establish our superior generalization across multiple tasks beyond the primary tasks of 3D pose estimation and part segmentation. Furthermore, we demonstrate state-of-the-art weakly-supervised 3D pose estimation performance on both Human3.6M and MPI-INF-3DHP datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 171,871 |
1202.3753 | Partial Order MCMC for Structure Discovery in Bayesian Networks | We present a new Markov chain Monte Carlo method for estimating posterior probabilities of structural features in Bayesian networks. The method draws samples from the posterior distribution of partial orders on the nodes; for each sampled partial order, the conditional probabilities of interest are computed exactly. We give both analytical and empirical results that suggest the superiority of the new method compared to previous methods, which sample either directed acyclic graphs or linear orders on the nodes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 14,425 |
2302.12563 | Retrieved Sequence Augmentation for Protein Representation Learning | Protein language models have excelled in a variety of tasks, ranging from structure prediction to protein engineering. However, proteins are highly diverse in functions and structures, and current state-of-the-art models including the latest version of AlphaFold rely on Multiple Sequence Alignments (MSA) to feed in the evolutionary knowledge. Despite their success, heavy computational overheads, as well as the de novo and orphan proteins remain great challenges in protein representation learning. In this work, we show that MSA-augmented models inherently belong to retrieval-augmented methods. Motivated by this finding, we introduce Retrieved Sequence Augmentation (RSA) for protein representation learning without additional alignment or pre-processing. RSA links query protein sequences to a set of sequences with similar structures or properties in the database and combines these sequences for downstream prediction. We show that protein language models benefit from the retrieval enhancement on both structure prediction and property prediction tasks, with a 5% improvement on MSA Transformer on average while being 373 times faster. In addition, we show that our model can transfer to new protein domains better and outperforms MSA Transformer on de novo protein prediction. Our study fills a much-encountered gap in protein prediction and brings us a step closer to demystifying the domain knowledge needed to understand protein sequences. Code is available on https://github.com/HKUNLP/RSA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,614
2410.05711 | TimeDART: A Diffusion Autoregressive Transformer for Self-Supervised
Time Series Representation | Self-supervised learning has garnered increasing attention in time series analysis for benefiting various downstream tasks and reducing reliance on labeled data. Despite its effectiveness, existing methods often struggle to comprehensively capture both long-term dynamic evolution and subtle local patterns in a unified manner. In this work, we propose TimeDART, a novel self-supervised time series pre-training framework that unifies two powerful generative paradigms to learn more transferable representations. Specifically, we first employ a causal Transformer encoder, accompanied by a patch-based embedding strategy, to model the evolving trends from left to right. Building on this global modeling, we further introduce a denoising diffusion process to capture fine-grained local patterns through forward diffusion and reverse denoising. Finally, we optimize the model in an autoregressive manner. As a result, TimeDART effectively accounts for both global and local sequence features in a coherent way. We conduct extensive experiments on public datasets for time series forecasting and classification. The experimental results demonstrate that TimeDART consistently outperforms previous compared methods, validating the effectiveness of our approach. Our code is available at https://github.com/Melmaphother/TimeDART. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 495,873 |
2412.08843 | Precise Asymptotics and Refined Regret of Variance-Aware UCB | In this paper, we study the behavior of the Upper Confidence Bound-Variance (UCB-V) algorithm for the Multi-Armed Bandit (MAB) problems, a variant of the canonical Upper Confidence Bound (UCB) algorithm that incorporates variance estimates into its decision-making process. More precisely, we provide an asymptotic characterization of the arm-pulling rates for UCB-V, extending recent results for the canonical UCB in Kalvit and Zeevi (2021) and Khamaru and Zhang (2024). In an interesting contrast to the canonical UCB, our analysis reveals that the behavior of UCB-V can exhibit instability, meaning that the arm-pulling rates may not always be asymptotically deterministic. Besides the asymptotic characterization, we also provide non-asymptotic bounds for the arm-pulling rates in the high probability regime, offering insights into the regret analysis. As an application of this high probability result, we establish that UCB-V can achieve a more refined regret bound, previously unknown even for more complicated and advanced variance-aware online decision-making algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,243