| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2105.11816 | Public Transportation Demand Analysis: A Case Study of Metropolitan Lagos | Modelling, simulation, and forecasting offer a means of facilitating better planning and decision-making. These quantitative approaches can add value beyond traditional methods that do not rely on data and are particularly relevant for public transportation. Lagos is experiencing rapid urbanization and currently has a population of just under 15 million. Both long waiting times and uncertain travel times have driven many people to acquire their own vehicles or use alternative modes of transport. This has significantly increased the number of vehicles on the roads, leading to even greater traffic congestion. This paper investigates urban travel demand in Lagos and explores passenger dynamics in time and space. Using individual commuter trip data from tickets purchased from the Lagos State Bus Rapid Transit (BRT), demand patterns across the hours of the day, days of the week and bus stations are analysed. This study aims to quantify demand from actual passenger trips and estimate the impact that dynamic scheduling could have on passenger waiting times. Station segmentation is provided to cluster stations by their demand characteristics in order to tailor specific bus schedules. Intra-day public transportation demand in Lagos BRT is analysed and predictions are compared. Simulations using fixed and dynamic bus scheduling demonstrate that the average waiting time could be reduced by as much as 80%. The load curves, insights and the approach developed will be useful for informing policymaking in Lagos and similar African cities facing the challenges of rapid urbanization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,830 |
1701.03313 | Information-Theoretic Analysis of Refractory Effects in the P300 Speller | The P300 speller is a brain-computer interface that enables people with neuromuscular disorders to communicate based on eliciting event-related potentials (ERP) in electroencephalography (EEG) measurements. One challenge to reliable communication is the presence of refractory effects in the P300 ERP that induce temporal dependence in the user's EEG responses. We propose a model for the P300 speller as a communication channel with memory. By studying the maximum information rate on this channel, we gain insight into the fundamental constraints imposed by refractory effects. We construct codebooks based on the optimal input distribution, and compare them to existing codebooks in the literature. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,686 |
2406.16662 | Adaptive Coding for Two-Way Wiretap Channel under Strong Secrecy | This paper studies adaptive coding for the two-way wiretap channel (TW-WC). In particular, we are interested in the strong secrecy metric, defined by the information leakage of the transmitted messages to the eavesdropper. First, we consider an adaptive coding scheme constructed by running well-studied non-adaptive codes over several rounds, where dependency between adjacent transmission rounds is introduced by a key exchange mechanism embedded in the non-adaptive code of each round. We analyze the reliability and strong secrecy, measured by the decoding error probability and information leakage, characterize them in terms of the conditional R\'enyi mutual information, and derive inner bounds on the secrecy capacity regions for the TW-WC under strong joint and individual secrecy constraints. Second, we introduce another adaptive coding method that exploits the correlation among the outputs at the receivers. With this approach, we show that for a two-way wiretap channel satisfying the conditionally independent condition, positive transmission rates can always be guaranteed even under the joint secrecy constraint. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 467,216 |
2310.19858 | iGEM: a model system for team science and innovation | Teams are a primary source of innovation in science and technology. Rather than examining the lone genius, scholarly and policy attention has shifted to understanding how team interactions produce new and useful ideas. Yet the organizational roots of innovation remain unclear, in part because of the limitations of current data. This paper introduces the international Genetically Engineered Machine (iGEM) competition, a model system for studying team science and innovation. By combining digital laboratory notebooks with performance data from 2,406 teams over multiple years of participation, we reveal shared dynamical and organizational patterns across teams and identify features associated with team performance and success. This dataset makes visible organizational behavior that is typically hidden, and thus understudied, creating new opportunities for the science of science and innovation. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 404,168 |
1911.06816 | QC-Automator: Deep Learning-based Automated Quality Control for Diffusion MR Images | Quality assessment of diffusion MRI (dMRI) data is essential prior to any analysis, so that appropriate pre-processing can be used to improve data quality and ensure that MRI artifacts do not affect the results of subsequent image analysis. Manual quality assessment of the data is subjective, possibly error-prone, and infeasible, especially considering the growing number of consortium-like studies, underlining the need for automation of the process. In this paper, we have developed a deep-learning-based automated quality control (QC) tool, QC-Automator, for dMRI data, that can handle a variety of artifacts such as motion, multiband interleaving, ghosting, susceptibility, herringbone and chemical shifts. QC-Automator uses convolutional neural networks along with transfer learning to train the automated artifact detection on a labeled dataset of ~332000 slices of dMRI data, from 155 unique subjects and 5 scanners with different dMRI acquisitions, achieving a 98% accuracy in detecting artifacts. The method is fast and paves the way for efficient and effective artifact detection in large datasets. It is also demonstrated to be replicable on other datasets with different acquisition parameters. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 153,627 |
2102.12179 | Multichannel LSTM-CNN for Telugu Technical Domain Identification | With the rapid growth of text information, retrieving domain-oriented information from text data has a broad range of applications in Information Retrieval and Natural Language Processing. Thematic keywords give a compressed representation of the text. Domain identification plays a significant role in Machine Translation, Text Summarization, Question Answering, Information Extraction, and Sentiment Analysis. In this paper, we propose a multichannel LSTM-CNN methodology for technical domain identification for Telugu. This architecture was used and evaluated in the context of the ICON shared task TechDOfication 2020 (task h), and our system achieved an F1 score of 69.9% on the test dataset and 90.01% on the validation set. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 221,646 |
2005.03692 | A Systematic Assessment of Syntactic Generalization in Neural Language Models | While state-of-the-art neural network models continue to achieve lower perplexity scores on language modeling benchmarks, it remains unknown whether optimizing for broad-coverage predictive performance leads to human-like syntactic knowledge. Furthermore, existing work has not provided a clear picture about the model properties required to produce proper syntactic generalizations. We present a systematic evaluation of the syntactic knowledge of neural language models, testing 20 combinations of model types and data sizes on a set of 34 English-language syntactic test suites. We find substantial differences in syntactic generalization performance by model architecture, with sequential models underperforming other architectures. Factorially manipulating model architecture and training dataset size (1M--40M words), we find that variability in syntactic generalization performance is substantially greater by architecture than by dataset size for the corpora tested in our experiments. Our results also reveal a dissociation between perplexity and syntactic generalization performance. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 176,225 |
2409.10525 | "Is This It?": Towards Ecologically Valid Benchmarks for Situated Collaboration | We report initial work towards constructing ecologically valid benchmarks to assess the capabilities of large multimodal models for engaging in situated collaboration. In contrast to existing benchmarks, in which question-answer pairs are generated post hoc over preexisting or synthetic datasets via templates, human annotators, or large language models (LLMs), we propose and investigate an interactive system-driven approach, where the questions are generated by users in context, during their interactions with an end-to-end situated AI system. We illustrate how the questions that arise are different in form and content from questions typically found in existing embodied question answering (EQA) benchmarks and discuss new real-world challenge problems brought to the fore. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 488,775 |
1703.00177 | Optical Flow-based 3D Human Motion Estimation from Monocular Video | We present a generative method to estimate 3D human motion and body shape from monocular video. Under the assumption that starting from an initial pose optical flow constrains subsequent human motion, we exploit flow to find temporally coherent human poses of a motion sequence. We estimate human motion by minimizing the difference between computed flow fields and the output of an artificial flow renderer. A single initialization step is required to estimate motion over multiple frames. Several regularization functions enhance robustness over time. Our test scenarios demonstrate that optical flow effectively regularizes the under-constrained problem of human shape and motion estimation from monocular video. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 69,123 |
2209.01161 | Reconstructing editable prismatic CAD from rounded voxel models | Reverse engineering a CAD shape from other representations is an important geometric processing step for many downstream applications. In this work, we introduce a novel neural network architecture to solve this challenging task and approximate a smoothed signed distance function with an editable, constrained, prismatic CAD model. During training, our method reconstructs the input geometry in the voxel space by decomposing the shape into a series of 2D profile images and 1D envelope functions. These can then be recombined in a differentiable way allowing a geometric loss function to be defined. During inference, we obtain the CAD data by first searching a database of 2D constrained sketches to find curves which approximate the profile images, then extrude them and use Boolean operations to build the final CAD model. Our method approximates the target shape more closely than other methods and outputs highly editable constrained parametric sketches which are compatible with existing CAD software. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 315,792 |
2009.06483 | Unsupervised Domain Adaptation by Uncertain Feature Alignment | Unsupervised domain adaptation (UDA) deals with the adaptation of models from a given source domain with labeled data to an unlabeled target domain. In this paper, we utilize the inherent prediction uncertainty of a model to accomplish the domain adaptation task. The uncertainty is measured by Monte-Carlo dropout and used for our proposed Uncertainty-based Filtering and Feature Alignment (UFAL) that combines an Uncertain Feature Loss (UFL) function and an Uncertainty-Based Filtering (UBF) approach for alignment of features in Euclidean space. Our method surpasses recently proposed architectures and achieves state-of-the-art results on multiple challenging datasets. Code is available on the project website. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 195,659 |
2003.06536 | The p-AAA algorithm for data driven modeling of parametric dynamical systems | The AAA algorithm has become a popular tool for data-driven rational approximation of single-variable functions, such as transfer functions of a linear dynamical system. In the setting of parametric dynamical systems appearing in many prominent applications, the underlying (transfer) function to be modeled is a multivariate function. With this in mind, we develop the AAA framework for approximating multivariate functions where the approximant is constructed in the multivariate barycentric form. The method is data-driven, in the sense that it does not require access to a full state-space model and requires only function evaluations. We discuss an extension to the case of matrix-valued functions, i.e., multi-input/multi-output dynamical systems, and provide a connection to the tangential interpolation theory. Several numerical examples illustrate the effectiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 168,147 |
2309.15803 | ANNCRIPS: Artificial Neural Networks for Cancer Research In Prediction & Survival | Prostate cancer is a prevalent malignancy among men aged 50 and older. Current diagnostic methods primarily rely on blood tests, prostate-specific antigen (PSA) levels, and digital rectal examinations (DRE). However, these methods suffer from a significant rate of false positive results. This study focuses on the development and validation of an intelligent mathematical model utilizing Artificial Neural Networks (ANNs) to enhance the early detection of prostate cancer. The primary objective of this research paper is to present a novel mathematical model designed to aid in the early detection of prostate cancer, facilitating prompt intervention by healthcare professionals. The model's implementation demonstrates promising potential in reducing the incidence of false positives, thereby improving patient outcomes. Furthermore, we envision that, with further refinement, extensive testing, and validation, this model can evolve into a robust, marketable solution for prostate cancer detection. The long-term goal is to make this solution readily available for deployment in various screening centers, hospitals, and research institutions, ultimately contributing to more effective cancer screening and patient care. | false | true | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 395,124 |
2406.05410 | MLLM-SR: Conversational Symbolic Regression base Multi-Modal Large Language Models | Formulas are the language of communication between humans and nature. Finding expressions from observed data that reflect the relationships between the variables in the data, known as the symbolic regression problem, is an important research topic in artificial intelligence. Existing symbolic regression methods directly generate expressions from the given observation data; we cannot require the algorithm to generate expressions that meet specific requirements according to known prior knowledge. For example, the expression may need to contain $\sin$ or be symmetric, and so on. Even when this is possible, it often requires very complex operations, which is very inconvenient. In this paper, based on multi-modal large language models, we propose MLLM-SR, a conversational symbolic regression method that can generate expressions meeting given requirements simply by describing those requirements with natural language instructions. Through experiments on the Nguyen dataset, we demonstrate that MLLM-SR leads the state-of-the-art baselines in fitting performance. More notably, we experimentally demonstrate that MLLM-SR can well understand the prior knowledge added to the natural language instructions. Moreover, the addition of prior knowledge can effectively guide MLLM-SR to generate correct expressions. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 462,122 |
2312.07696 | Real-time Network Intrusion Detection via Decision Transformers | Many cybersecurity problems that require real-time decision-making based on temporal observations can be abstracted as a sequence modeling problem, e.g., network intrusion detection from a sequence of arriving packets. Existing approaches like reinforcement learning may not be suitable for such cybersecurity decision problems, since the Markovian property may not necessarily hold and the underlying network states are often not observable. In this paper, we cast the problem of real-time network intrusion detection as causal sequence modeling and draw upon the power of the transformer architecture for real-time decision-making. By conditioning a causal decision transformer on past trajectories, consisting of the rewards, network packets, and detection decisions, our proposed framework will generate future detection decisions to achieve the desired return. It enables decision transformers to be applied to real-time network intrusion detection, as well as a novel tradeoff between the accuracy and timeliness of detection. The proposed solution is evaluated on public network intrusion detection datasets and outperforms several baseline algorithms using reinforcement learning and sequence modeling, in terms of detection accuracy and timeliness. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 415,019 |
2302.05829 | Tighter PAC-Bayes Bounds Through Coin-Betting | We consider the problem of estimating the mean of a sequence of random elements $f(X_1, \theta), \ldots, f(X_n, \theta)$ where $f$ is a fixed scalar function, $S=(X_1, \ldots, X_n)$ are independent random variables, and $\theta$ is a possibly $S$-dependent parameter. An example of such a problem would be to estimate the generalization error of a neural network trained on $n$ examples where $f$ is a loss function. Classically, this problem is approached through concentration inequalities holding uniformly over compact parameter sets of functions $f$, for example as in Rademacher or VC type analysis. However, in many problems, such inequalities often yield numerically vacuous estimates. Recently, the \emph{PAC-Bayes} framework has been proposed as a better alternative for this class of problems for its ability to often give numerically non-vacuous bounds. In this paper, we show that we can do even better: we show how to refine the proof strategy of the PAC-Bayes bounds and achieve \emph{even tighter} guarantees. Our approach is based on the \emph{coin-betting} framework that derives the numerically tightest known time-uniform concentration inequalities from the regret guarantees of online gambling algorithms. In particular, we derive the first PAC-Bayes concentration inequality based on the coin-betting approach that holds simultaneously for all sample sizes. We demonstrate its tightness showing that by \emph{relaxing} it we obtain a number of previous results in a closed form including Bernoulli-KL and empirical Bernstein inequalities. Finally, we propose an efficient algorithm to numerically calculate confidence sequences from our bound, which often generates nonvacuous confidence bounds even with one sample, unlike the state-of-the-art PAC-Bayes bounds. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,184 |
1811.02701 | Proceedings of the 2018 Workshop on Compositional Approaches in Physics, NLP, and Social Sciences | The ability to compose parts to form a more complex whole, and to analyze a whole as a combination of elements, is desirable across disciplines. This workshop brings together researchers applying compositional approaches to physics, NLP, cognitive science, and game theory. Within NLP, a long-standing aim is to represent how words can combine to form phrases and sentences. Within the framework of distributional semantics, words are represented as vectors in vector spaces. The categorical model of Coecke et al. [2010], inspired by quantum protocols, has provided a convincing account of compositionality in vector space models of NLP. There is furthermore a history of vector space models in cognitive science. Theories of categorization such as those developed by Nosofsky [1986] and Smith et al. [1988] utilise notions of distance between feature vectors. More recently G\"ardenfors [2004, 2014] has developed a model of concepts in which conceptual spaces provide geometric structures, and information is represented by points, vectors and regions in vector spaces. The same compositional approach has been applied to this formalism, giving conceptual spaces theory a richer model of compositionality than previously [Bolt et al., 2018]. Compositional approaches have also been applied in the study of strategic games and Nash equilibria. In contrast to classical game theory, where games are studied monolithically as one global object, compositional game theory works bottom-up by building large and complex games from smaller components. Such an approach is inherently difficult since the interaction between games has to be considered. Research into categorical compositional methods for this field has recently begun [Ghani et al., 2018]. Moreover, the interaction between the three disciplines of cognitive science, linguistics and game theory is a fertile ground for research. Game theory in cognitive science is a well-established area [Camerer, 2011]. Similarly game theoretic approaches have been applied in linguistics [J\"ager, 2008]. Lastly, the study of linguistics and cognitive science is intimately intertwined [Smolensky and Legendre, 2006, Jackendoff, 2007]. Physics supplies compositional approaches via vector spaces and categorical quantum theory, allowing the interplay between the three disciplines to be examined. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 112,666 |
2207.04380 | Connect the Dots: Tighter Discrete Approximations of Privacy Loss Distributions | The privacy loss distribution (PLD) provides a tight characterization of the privacy loss of a mechanism in the context of differential privacy (DP). Recent work has shown that PLD-based accounting allows for tighter $(\varepsilon, \delta)$-DP guarantees for many popular mechanisms compared to other known methods. A key question in PLD-based accounting is how to approximate any (potentially continuous) PLD with a PLD over any specified discrete support. We present a novel approach to this problem. Our approach supports both pessimistic estimation, which overestimates the hockey-stick divergence (i.e., $\delta$) for any value of $\varepsilon$, and optimistic estimation, which underestimates the hockey-stick divergence. Moreover, we show that our pessimistic estimate is the best possible among all pessimistic estimates. Experimental evaluation shows that our approach can work with much larger discretization intervals while keeping a similar error bound compared to previous approaches and yet give a better approximation than existing methods. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 307,181 |
1707.05972 | Drone-based Object Counting by Spatially Regularized Regional Proposal Network | Existing counting methods often adopt regression-based approaches and cannot precisely localize the target objects, which hinders further analysis (e.g., high-level understanding and fine-grained classification). In addition, most prior work mainly focuses on counting objects in static environments with fixed cameras. Motivated by the advent of unmanned flying vehicles (i.e., drones), we are interested in detecting and counting objects in such dynamic environments. We propose Layout Proposal Networks (LPNs) and spatial kernels to simultaneously count and localize target objects (e.g., cars) in videos recorded by the drone. Different from the conventional region proposal methods, we leverage the spatial layout information (e.g., cars often park regularly) and introduce these spatially regularized constraints into our network to improve the localization accuracy. To evaluate our counting method, we present a new large-scale car parking lot dataset (CARPK) that contains nearly 90,000 cars captured from different parking lots. To the best of our knowledge, it is the first and the largest drone view dataset that supports object counting, and provides the bounding box annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 77,332 |
1802.05155 | A Diffusion Approximation Theory of Momentum SGD in Nonconvex Optimization | The Momentum Stochastic Gradient Descent (MSGD) algorithm has been widely applied to many nonconvex optimization problems in machine learning, e.g., training deep neural networks and variational Bayesian inference. Despite its empirical success, there is still a lack of theoretical understanding of the convergence properties of MSGD. To fill this gap, we propose to analyze the algorithmic behavior of MSGD by diffusion approximations for nonconvex optimization problems with strict saddle points and isolated local optima. Our study shows that the momentum helps escape from saddle points, but hurts the convergence within the neighborhood of optima (without step size annealing or momentum annealing). Our theoretical discovery partially corroborates the empirical success of MSGD in training deep neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 90,388 |
2107.12594 | On the generalized Hamming weights of hyperbolic codes | A hyperbolic code is an evaluation code that improves on a Reed-Muller code: the dimension increases while the minimum distance is not penalized. We give necessary and sufficient conditions, based on the basic parameters of the Reed-Muller code, to determine whether a Reed-Muller code coincides with a hyperbolic code. Given a hyperbolic code, we find the largest Reed-Muller code contained in the hyperbolic code and the smallest Reed-Muller code containing it. We then prove that, similarly to Reed-Muller and Cartesian codes, the $r$-th generalized Hamming weight and the $r$-th footprint of a hyperbolic code coincide. Unlike the Reed-Muller and Cartesian cases, determining the $r$-th footprint of a hyperbolic code is still an open problem. We give upper and lower bounds for the $r$-th footprint of a hyperbolic code that are sometimes sharp. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 247,943 |
2305.04609 | SwinDocSegmenter: An End-to-End Unified Domain Adaptive Transformer for Document Instance Segmentation | Instance-level segmentation of documents consists in assigning a class-aware and instance-aware label to each pixel of the image. It is a key step in document parsing for document understanding. In this paper, we present a unified transformer encoder-decoder architecture for end-to-end instance segmentation of complex layouts in document images. The method adapts a contrastive training with a mixed query selection for anchor initialization in the decoder. Later on, it performs a dot product between the obtained query embeddings and the pixel embedding map (coming from the encoder) for semantic reasoning. Extensive experimentation on competitive benchmarks like PubLayNet, PRIMA, Historical Japanese (HJ), and TableBank demonstrate that our model with SwinL backbone achieves better segmentation performance than the existing state-of-the-art approaches with average precisions of \textbf{93.72}, \textbf{54.39}, \textbf{84.65} and \textbf{98.04} respectively under one billion parameters. The code is made publicly available at: \href{https://github.com/ayanban011/SwinDocSegmenter}{github.com/ayanban011/SwinDocSegmenter} | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 362,839 |
2104.11403 | Low Pass Filter for Anti-aliasing in Temporal Action Localization | In temporal action localization (TAL) methods, temporal downsampling operations are widely used to extract proposal features, but they often lead to the aliasing problem, due to lacking consideration of sampling rates. This paper aims to verify the existence of aliasing in TAL methods and investigate utilizing low pass filters to solve this problem by inhibiting the high-frequency band. However, the high-frequency band usually contains large amounts of specific information, which is important for model inference. Therefore, it is necessary to make a tradeoff between anti-aliasing and reserving high-frequency information. To acquire optimal performance, this paper learns different cutoff frequencies for different instances dynamically. This design can be plugged into most existing temporal modeling programs requiring only one additional cutoff frequency parameter. Integrating low pass filters into the downsampling operations significantly improves the detection performance and achieves comparable results on the THUMOS'14, ActivityNet 1.3, and Charades datasets. Experiments demonstrate that anti-aliasing with low pass filters in TAL is advantageous and efficient. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,898 |
2201.10792 | On the Effectiveness of Pinyin-Character Dual-Decoding for End-to-End
Mandarin Chinese ASR | End-to-end automatic speech recognition (ASR) has achieved promising results. However, most existing end-to-end ASR methods neglect the use of specific language characteristics. For Mandarin Chinese ASR tasks, there exists a mutual promotion relationship between Pinyin and Characters, as Chinese characters can be romanized by Pinyin. Based on this intuition, we first investigate types of end-to-end encoder-decoder based models in the single-input dual-output (SIDO) multi-task framework, after which a novel asynchronous decoding with fuzzy Pinyin sampling method is proposed according to the one-to-one correspondence between Pinyin and Characters. Furthermore, we propose a two-stage training strategy to make training more stable and converge faster. The results on the test sets of the AISHELL-1 dataset show that the proposed enhanced dual-decoder model without a language model improves by a large margin over strong baseline models. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 277,106
2407.19231 | Alleviating Over-Smoothing via Aggregation over Compact Manifolds | Graph neural networks (GNNs) have achieved significant success in various applications. Most GNNs learn node features by aggregating information from each node's neighbors and applying a feature transformation in each layer. However, the node features become indistinguishable after many layers, leading to performance deterioration: a significant limitation known as over-smoothing. Past work adopted various techniques for addressing this issue, such as normalization and skip-connection of layer-wise output. In our study, we found that the information aggregations in existing work are all contracted aggregations, with the intrinsic property that features will inevitably converge to the same single point after many layers. To this end, we propose the aggregation over compact manifolds method (ACM), which replaces the existing information aggregation with aggregation over compact manifolds, a special type of manifold, which avoids contracted aggregations. In this work, we theoretically analyze contracted aggregation and its properties. We also provide an extensive empirical evaluation showing that ACM can effectively alleviate over-smoothing and outperforms the state-of-the-art. The code can be found at https://github.com/DongzhuoranZhou/ACM.git. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 476,697
2206.06481 | RigNeRF: Fully Controllable Neural 3D Portraits | Volumetric neural rendering methods, such as neural radiance fields (NeRFs), have enabled photo-realistic novel view synthesis. However, in their standard form, NeRFs do not support the editing of objects, such as a human head, within a scene. In this work, we propose RigNeRF, a system that goes beyond just novel view synthesis and enables full control of head pose and facial expressions learned from a single portrait video. We model changes in head pose and facial expressions using a deformation field that is guided by a 3D morphable face model (3DMM). The 3DMM effectively acts as a prior for RigNeRF that learns to predict only residuals to the 3DMM deformations and allows us to render novel (rigid) poses and (non-rigid) expressions that were not present in the input sequence. Using only a smartphone-captured short video of a subject for training, we demonstrate the effectiveness of our method on free view synthesis of a portrait scene with explicit head pose and expression controls. The project page can be found here: http://shahrukhathar.github.io/2022/06/06/RigNeRF.html | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 302,388 |
2502.10764 | Learning to Explain Air Traffic Situation | Understanding how air traffic controllers construct a mental 'picture' of complex air traffic situations is crucial but remains a challenge due to the inherently intricate, high-dimensional interactions between aircraft, pilots, and controllers. Previous work on modeling the strategies of air traffic controllers and their mental image of traffic situations often centers on specific air traffic control tasks or pairwise interactions between aircraft, neglecting to capture the comprehensive dynamics of an air traffic situation. To address this issue, we propose a machine learning-based framework for explaining air traffic situations. Specifically, we employ a Transformer-based multi-agent trajectory model that encapsulates both the spatio-temporal movement of aircraft and social interaction between them. By deriving attention scores from the model, we can quantify the influence of individual aircraft on overall traffic dynamics. This provides explainable insights into how air traffic controllers perceive and understand the traffic situation. Trained on real-world air traffic surveillance data collected from the terminal airspace around Incheon International Airport in South Korea, our framework effectively explicates air traffic situations. This could potentially support and enhance the decision-making and situational awareness of air traffic controllers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 534,038 |
2411.07942 | Towards Low-bit Communication for Tensor Parallel LLM Inference | Tensor parallelism provides an effective way to increase server large language model (LLM) inference efficiency despite adding an additional communication cost. However, as server LLMs continue to scale in size, they will need to be distributed across more devices, magnifying the communication cost. One way to approach this problem is with quantization, but current methods for LLMs tend to avoid quantizing the features that tensor parallelism needs to communicate. Taking advantage of consistent outliers in communicated features, we introduce a quantization method that reduces communicated values on average from 16 bits to 4.2 bits while preserving nearly all of the original performance. For instance, our method maintains around 98.0% and 99.5% of Gemma 2 27B's and Llama 2 13B's original performance, respectively, averaged across all tasks we evaluated on. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 507,728 |
2011.04717 | Real-time Locational Marginal Price Forecasting Using Generative
Adversarial Network | In this paper, we propose a model-free unsupervised learning approach to forecast real-time locational marginal prices (RTLMPs) in wholesale electricity markets. By organizing system-wide hourly RTLMP data into a 3-dimensional (3D) tensor consisting of a series of time-indexed matrices, we formulate the RTLMP forecasting problem as one of generating the next matrix of forecasted RTLMPs given the historical RTLMP tensor, and propose a generative adversarial network (GAN) model to forecast RTLMPs. The proposed formulation preserves the spatio-temporal correlations among system-wide RTLMPs in the format of the historical RTLMP tensor. The proposed GAN model learns the spatio-temporal correlations using the historical RTLMP tensors and generates RTLMPs that are statistically similar to and temporally coherent with the historical RTLMP tensor. The proposed approach forecasts system-wide RTLMPs using only publicly available historical price data, without involving confidential information about the system model, such as system parameters, topology, or operating conditions. The effectiveness of the proposed approach is verified through case studies using historical RTLMP data from the Southwest Power Pool (SPP). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,658
2305.02200 | Deep Graph Representation Learning and Optimization for Influence
Maximization | Influence maximization (IM) is formulated as selecting a set of initial users from a social network to maximize the expected number of influenced users. Researchers have made great progress in designing various traditional methods, and their theoretical design and performance gain are close to a limit. In the past few years, learning-based IM methods have emerged to achieve stronger generalization ability to unknown graphs than traditional ones. However, the development of learning-based IM methods is still limited by fundamental obstacles, including 1) the difficulty of effectively solving the objective function; 2) the difficulty of characterizing the diversified underlying diffusion patterns; and 3) the difficulty of adapting the solution under various node-centrality-constrained IM variants. To cope with the above challenges, we design a novel framework DeepIM to generatively characterize the latent representation of seed sets, and we propose to learn the diversified information diffusion pattern in a data-driven and end-to-end manner. Finally, we design a novel objective function to infer optimal seed sets under flexible node-centrality-based budget constraints. Extensive analyses are conducted over both synthetic and real-world datasets to demonstrate the overall performance of DeepIM. The code and data are available at: https://github.com/triplej0079/DeepIM. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 361,949 |
1309.3842 | Estimation of intrinsic volumes from digital grey-scale images | Local algorithms are common tools for estimating intrinsic volumes from black-and-white digital images. However, these algorithms are typically biased in the design based setting, even when the resolution tends to infinity. Moreover, images recorded in practice are most often blurred grey-scale images rather than black-and-white. In this paper, an extended definition of local algorithms, applying directly to grey-scale images without thresholding, is suggested. We investigate the asymptotics of these new algorithms when the resolution tends to infinity and apply this to construct estimators for surface area and integrated mean curvature that are asymptotically unbiased in certain natural settings. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 27,050 |
1907.00236 | Streaming Quantiles Algorithms with Small Space and Update Time | Approximating quantiles and distributions over streaming data has been studied for roughly two decades now. Recently, Karnin, Lang, and Liberty proposed the first asymptotically optimal algorithm for doing so. This manuscript complements their theoretical result by providing practical variants of their algorithm with improved constants. For a given sketch size, our techniques provably reduce the upper bound on the sketch error by a factor of two. These improvements are verified experimentally. Our modified quantile sketch improves the latency as well by reducing the worst case update time from $O(1/\varepsilon)$ down to $O(\log (1/\varepsilon))$. We also suggest two algorithms for weighted item streams which offer improved asymptotic update times compared to na\"ive extensions. Finally, we provide a specialized data structure for these sketches which reduces both their memory footprints and update times. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 136,974
1702.00694 | Integrating Soft Robotics with ROS - A hybrid pick and place arm | Soft robotic systems present a variety of new opportunities for solving complex problems. The use of soft robotic grippers, for example, can simplify the complexity of tasks such as the grasping of irregular and delicate objects. Adoption of soft robotics by academia and industry, however, has been slow and this is, in part, due to the amount of hardware and software that must be developed from scratch for each use of soft system components. In this paper we detail the design, fabrication and validation of an open-source framework that we designed to lower the barrier to entry for integrating soft robotic subsystems. This framework is built on ROS (the Robot Operating System) and we use it to demonstrate a modular, soft-hard hybrid system which is capable of completing pick and place tasks. By lowering this barrier to entry we hope that system designers and researchers will find it easy to integrate soft components into their existing ROS-enabled robotic systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 67,690
2502.04758 | Differential Privacy of Quantum and Quantum-Inspired-Classical
Recommendation Algorithms | We analyze the DP (differential privacy) properties of the quantum recommendation algorithm and the quantum-inspired-classical recommendation algorithm. We discover that the quantum recommendation algorithm is a privacy curating mechanism on its own, requiring no external noise, which is different from traditional differential privacy mechanisms. In our analysis, a novel perturbation method tailored for SVD (singular value decomposition) and low-rank matrix approximation problems is introduced. Using the perturbation method and random matrix theory, we are able to derive that both the quantum and quantum-inspired-classical algorithms are $\big(\tilde{\mathcal{O}}\big(\frac 1n\big),\,\, \tilde{\mathcal{O}}\big(\frac{1}{\min\{m,n\}}\big)\big)$-DP under some reasonable restrictions, where $m$ and $n$ are numbers of users and products in the input preference database respectively. Nevertheless, a comparison shows that the quantum algorithm has better privacy preserving potential than the classical one. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 531,308 |
1407.2875 | Quantum Dynamics, Minkowski-Hilbert space, and A Quantum Stochastic
Duhamel Principle | In this paper we shall re-visit the well-known Schr\"odinger and Lindblad dynamics of quantum mechanics. However, these equations may be realized as the consequence of a more general, underlying dynamical process. In both cases we shall see that the evolution of a quantum state $P_\psi=\varrho(0)$ has the not so well-known pseudo-quadratic form $\partial_t\varrho(t)=\mathbf{V}^\star\varrho(t)\mathbf{V}$ where $\mathbf{V}$ is a vector operator in a complex Minkowski space and the pseudo-adjoint $\mathbf{V}^\star$ is induced by the Minkowski metric $\boldsymbol{\eta}$. The interesting thing about this formalism is that its derivation has very deep roots in a new understanding of the differential calculus of time. This Minkowski-Hilbert representation of quantum dynamics is called the \emph{Belavkin Formalism}; a beautiful, but not well understood theory of mathematical physics that understands that both deterministic and stochastic dynamics may be `unraveled' in a second-quantized Minkowski space. Working in such a space provided the author with the means to construct a QS (quantum stochastic) Duhamel principle and known applications to a Schr\"odinger dynamics perturbed by a continual measurement process are considered. What is not known, but presented here, is the role of the Lorentz transform in quantum measurement, and the appearance of Riemannian geometry in quantum measurement is also discussed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,571 |
1912.11221 | FDD Massive MIMO Uplink and Downlink Channel Reciprocity Properties:
Full or Partial Reciprocity? | One challenge for FDD massive MIMO communication systems is how to obtain the downlink channel state information (CSI) at the base station. Besides traditional codebook feedback through uplink pilot transmission, some channel reciprocity properties can be utilized through uplink channel estimation and channel parameter estimation algorithms. In this paper, the uplink and downlink channel reciprocity properties are analyzed. It is theoretically proved that not all multipath parameters of the FDD downlink and uplink channels are equivalent. Therefore, the so-called full reciprocity property does not hold, while the partial reciprocity property does. Moreover, a channel measurement campaign is conducted to verify our theoretical analysis. Finally, in order to support the partial reciprocity property, a revision to the standardized 5G channel model is proposed as well. With the contribution of this paper, the design of FDD massive MIMO transmission schemes can be guided in the right direction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 158,516
2005.09996 | Heterogeneous Susceptibilities in Social Influence Models | Network autocorrelation models are widely used to evaluate the impact of social influence on some variable of interest. This is a large class of models that parsimoniously accounts for how one's neighbors influence one's own behaviors or opinions by incorporating the network adjacency matrix into the joint distribution of the data. These models assume homogeneous susceptibility to social influence, however, which may be a strong assumption in many contexts. This paper proposes a hierarchical model that allows the influence parameter to be a function of individual attributes and/or of local network topological features. We derive an approximation of the posterior distribution in a general framework that is applicable to the Durbin, network effects, network disturbances, or network moving average autocorrelation models. The proposed approach can also be applied to investigating determinants of social influence in the context of egocentric network data. We apply our method to a data set collected via mobile phones in which we determine the effect of social influence on physical activity levels, as well as classroom data in which we investigate peer influence on student defiance. With this last data set, we also investigate the performance of the proposed egocentric network model. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 178,054 |
2011.04803 | Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering | Standard first-order stochastic optimization algorithms base their updates solely on the average mini-batch gradient, and it has been shown that tracking additional quantities such as the curvature can help de-sensitize common hyperparameters. Based on this intuition, we explore the use of exact per-sample Hessian-vector products and gradients to construct optimizers that are self-tuning and hyperparameter-free. Based on a dynamics model of the gradient, we derive a process which leads to a curvature-corrected, noise-adaptive online gradient estimate. The smoothness of our updates makes them more amenable to simple step size selection schemes, which we also base on our estimated quantities. We prove that our model-based procedure converges in the noisy quadratic setting. Though we do not see similar gains in deep learning tasks, we can match the performance of well-tuned optimizers and, ultimately, this is an interesting step toward constructing self-tuning optimizers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 205,696
1504.00580 | Quantum image classification using principal component analysis | We present a novel quantum algorithm for the classification of images. The algorithm is constructed using principal component analysis and von Neumann quantum measurements. In order to apply the algorithm, we present a new quantum representation of grayscale images. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 41,716
2012.03108 | Generating Synthetic Multispectral Satellite Imagery from Sentinel-2 | Multi-spectral satellite imagery provides valuable data at global scale for many environmental and socio-economic applications. Building supervised machine learning models based on this imagery, however, may require ground reference labels which are not available at global scale. Here, we propose a generative model to produce multi-resolution multi-spectral imagery based on Sentinel-2 data. The resulting synthetic images are indistinguishable from real ones by humans. This technique paves the way for future work to generate labeled synthetic imagery that can be used for data augmentation in data-scarce regions and applications. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 209,991
1712.07733 | A Unified Asymptotic Analysis of Area Spectral Efficiency in Ultradense
Cellular Networks | This paper studies the asymptotic properties of average area spectral efficiency (ASE) of a downlink cellular network in the limit of very dense base station (BS) and user densities. This asymptotic analysis relies on three assumptions: (1) interference is treated as noise; (2) the BS locations are drawn from a Poisson point process; (3) the path loss function is bounded above satisfying mild regularity conditions. We consider three possible definitions of the average ASE, all of which give units of bits per second per unit bandwidth per unit area. When there is no constraint on the minimum operational signal-to-interference-plus-noise ratio (SINR) and instantaneous full channel state information (CSI) is available at the transmitter, the average ASE is proven to saturate to a constant, which we derive in a closed form. For the other two ASE definitions, wherein either a minimum SINR is enforced or CSI is not available, the average ASE is instead shown to collapse to zero at high BS density. We provide several familiar case studies for the class of considered path loss models, and demonstrate that our results cover most previous models and results on ultradense networks as special cases. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 87,087 |
1804.08144 | Union bound for quantum information processing | In this paper, we prove a quantum union bound that is relevant when performing a sequence of binary-outcome quantum measurements on a quantum state. The quantum union bound proved here involves a tunable parameter that can be optimized, and this tunable parameter plays a similar role to a parameter involved in the Hayashi-Nagaoka inequality [IEEE Trans. Inf. Theory, 49(7):1753 (2003)], used often in quantum information theory when analyzing the error probability of a square-root measurement. An advantage of the proof delivered here is that it is elementary, relying only on basic properties of projectors, the Pythagorean theorem, and the Cauchy--Schwarz inequality. As a non-trivial application of our quantum union bound, we prove that a sequential decoding strategy for classical communication over a quantum channel achieves a lower bound on the channel's second-order coding rate. This demonstrates the advantage of our quantum union bound in the non-asymptotic regime, in which a communication channel is called a finite number of times. We expect that the bound will find a range of applications in quantum communication theory, quantum algorithms, and quantum complexity theory. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 95,698 |
2405.16551 | GPU Based Differential Evolution: New Insights and Comparative Study | Differential Evolution (DE) is a highly successful population based global optimisation algorithm, commonly used for solving numerical optimisation problems. However, as the complexity of the objective function increases, the wall-clock run-time of the algorithm suffers as many fitness function evaluations must take place to effectively explore the search space. Due to the inherently parallel nature of the DE algorithm, graphics processing units (GPU) have been used to effectively accelerate both the fitness evaluation and DE algorithm. This work reviews the main architectural choices made in the literature for GPU based DE algorithms and introduces a new GPU based numerical optimisation benchmark to evaluate and compare GPU based DE algorithms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 457,486 |
2202.10745 | Improving Systematic Generalization Through Modularity and Augmentation | Systematic generalization is the ability to combine known parts into novel meaning; an important aspect of efficient human learning, but a weakness of neural network learning. In this work, we investigate how two well-known modeling principles -- modularity and data augmentation -- affect systematic generalization of neural networks in grounded language learning. We analyze how large the vocabulary needs to be to achieve systematic generalization and how similar the augmented data needs to be to the problem at hand. Our findings show that even in the controlled setting of a synthetic benchmark, achieving systematic generalization remains very difficult. After training on an augmented dataset with almost forty times more adverbs than the original problem, a non-modular baseline is not able to systematically generalize to a novel combination of a known verb and adverb. When separating the task into cognitive processes like perception and navigation, a modular neural network is able to utilize the augmented data and generalize more systematically, achieving 70% and 40% exact match increase over state-of-the-art on two gSCAN tests that have not previously been improved. We hope that this work gives insight into the drivers of systematic generalization, and what we still need to improve for neural networks to learn more like humans do. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 281,648 |
1412.8340 | On the Smallest Eigenvalue of General correlated Gaussian Matrices | This paper investigates the behaviour of the spectrum of generally correlated Gaussian random matrices whose columns are zero-mean independent vectors but have different correlations, under the specific regime where the number of their columns and that of their rows grow to infinity at the same pace. This work is, in particular, motivated by applications from statistical signal processing and wireless communications, where this kind of matrix naturally arises. Following the approach proposed in [1], we prove that under some specific conditions, the smallest singular value of generally correlated Gaussian matrices is almost surely bounded away from zero. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 38,909
1110.0207 | Analysing complexity of XML Schemas in geospatial web services | XML Schema is the language used to define the structure of messages exchanged between OGC-based web service clients and providers. The size of these schemas has been growing with time, reaching a state that makes its understanding and effective application a hard task. A first step to cope with this situation is to provide different ways to measure the complexity of the schemas. In this regard, we present in this paper an analysis of the complexity of XML schemas in OGC web services. We use a group of metrics found in the literature and introduce new metrics to measure size and/or complexity of these schemas. The use of adequate metrics allows us to quantify the complexity, quality and other properties of the schemas, which can be very useful in different scenarios. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 12,442 |
2501.07746 | A Heterogeneous Multimodal Graph Learning Framework for Recognizing User
Emotions in Social Networks | The rapid expansion of social media platforms has provided unprecedented access to massive amounts of multimodal user-generated content. Comprehending user emotions can provide valuable insights for improving communication and understanding of human behaviors. Despite significant advancements in Affective Computing, the diverse factors influencing user emotions in social networks remain relatively understudied. Moreover, there is a notable lack of deep learning-based methods for predicting user emotions in social networks, which could be addressed by leveraging the extensive multimodal data available. This work presents a novel formulation of personalized emotion prediction in social networks based on heterogeneous graph learning. Building upon this formulation, we design HMG-Emo, a Heterogeneous Multimodal Graph Learning Framework that utilizes deep learning-based features for user emotion recognition. Additionally, we include a dynamic context fusion module in HMG-Emo that is capable of adaptively integrating the different modalities in social media data. Through extensive experiments, we demonstrate the effectiveness of HMG-Emo and verify the superiority of adopting a graph neural network-based approach, which outperforms existing baselines that use rich hand-crafted features. To the best of our knowledge, HMG-Emo is the first multimodal and deep-learning-based approach to predict personalized emotions within online social networks. Our work highlights the significance of exploiting advanced deep learning techniques for less-explored problems in Affective Computing. | false | false | false | true | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 524,493 |
2301.08245 | Booster: a Benchmark for Depth from Images of Specular and Transparent
Surfaces | Estimating depth from images nowadays yields outstanding results, both in terms of in-domain accuracy and generalization. However, we identify two main challenges that remain open in this field: dealing with non-Lambertian materials and effectively processing high-resolution images. Purposely, we propose a novel dataset that includes accurate and dense ground-truth labels at high resolution, featuring scenes containing several specular and transparent surfaces. Our acquisition pipeline leverages a novel deep space-time stereo framework, enabling easy and accurate labeling with sub-pixel precision. The dataset is composed of 606 samples collected in 85 different scenes, each sample includes both a high-resolution pair (12 Mpx) as well as an unbalanced stereo pair (Left: 12 Mpx, Right: 1.1 Mpx), typical of modern mobile devices that mount sensors with different resolutions. Additionally, we provide manually annotated material segmentation masks and 15K unlabeled samples. The dataset is composed of a train set and two test sets, the latter devoted to the evaluation of stereo and monocular depth estimation networks. Our experiments highlight the open challenges and future research directions in this field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 341,147 |
2104.11645 | Software-Defined Edge Computing: A New Architecture Paradigm to Support
IoT Data Analysis | The rapid deployment of Internet of Things (IoT) applications leads to massive data that need to be processed. These IoT applications have specific communication requirements on latency and bandwidth, and exhibit new features in their generated data, such as time-dependency. Therefore, it is desirable to reshape the current IoT architectures by exploring their inherent nature of communication and computing to support smart IoT data processing and analysis. We introduce in this paper features of IoT data, trends of IoT network architectures, some problems in IoT data analysis, and their solutions. Specifically, we argue that software-defined edge computing is a promising architecture to support the unique needs of IoT data analysis. We further present an experiment on data anomaly detection in this architecture, and a comparison between two architectures for ECG diagnosis. Results show that our method is effective and feasible. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 231,979
2407.00119 | Efficient Long-distance Latent Relation-aware Graph Neural Network for
Multi-modal Emotion Recognition in Conversations | The task of multi-modal emotion recognition in conversation (MERC) aims to analyze the genuine emotional state of each utterance based on the multi-modal information in the conversation, which is crucial for conversation understanding. Existing methods focus on using graph neural networks (GNN) to model conversational relationships and capture contextual latent semantic relationships. However, due to the complexity of GNN, existing methods cannot efficiently capture the potential dependencies between long-distance utterances, which limits the performance of MERC. In this paper, we propose an Efficient Long-distance Latent Relation-aware Graph Neural Network (ELR-GNN) for multi-modal emotion recognition in conversations. Specifically, we first use pre-extracted text, video and audio features as input to Bi-LSTM to capture contextual semantic information and obtain low-level utterance features. Then, we use low-level utterance features to construct a conversational emotion interaction graph. To efficiently capture the potential dependencies between long-distance utterances, we use the dilated generalized forward push algorithm to precompute the emotional propagation between global utterances and design an emotional relation-aware operator to capture the potential semantic associations between different utterances. Furthermore, we combine early fusion and adaptive late fusion mechanisms to fuse latent dependency information between speaker relationship information and context. Finally, we obtain high-level discourse features and feed them into an MLP for emotion prediction. Extensive experimental results show that ELR-GNN achieves state-of-the-art performance on the benchmark datasets IEMOCAP and MELD, with running times reduced by 52\% and 35\%, respectively. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 468,735
2006.07565 | Line-of-Sight MIMO for High Capacity Millimeter Wave Backhaul in FDD
Systems | Wireless backhaul is considered to be the key part of the future wireless network with dense small cell traffic and high capacity demand. In this paper, we focus on the design of a high spectral efficiency line-of-sight (LoS) multiple-input multiple-output (MIMO) system for millimeter wave (mmWave) backhaul using dual-polarized frequency division duplex (FDD). High spectral efficiency is very challenging to achieve for the system due to various physical impairments such as phase noise (PHN), timing offset (TO) as well as the poor condition number of the LoS MIMO. In this paper, we propose a holistic solution containing TO compensation, PHN estimation, precoder/decorrelator optimization of the LoS MIMO for wireless backhaul, and the interleaving of each part. We show that the proposed solution has robust performance with end-to-end spectral efficiency of 60 bits/s/Hz for 8x8 MIMO. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 181,854 |
2408.11146 | Swim till You Sink: Computing the Limit of a Game | During 2023, two interesting results were proven about the limit behavior of game dynamics: First, it was shown that there is a game for which no dynamics converges to the Nash equilibria. Second, it was shown that the sink equilibria of a game adequately capture the limit behavior of natural game dynamics. These two results have created a need and opportunity to articulate a principled computational theory of the meaning of the game that is based on game dynamics. Given any game in normal form, and any prior distribution of play, we study the problem of computing the asymptotic behavior of a class of natural dynamics called the noisy replicator dynamics as a limit distribution over the sink equilibria of the game. When the prior distribution has pure strategy support, we prove this distribution can be computed efficiently, in near-linear time to the size of the best-response graph. When the distribution can be sampled -- for example, if it is the uniform distribution over all mixed strategy profiles -- we show through experiments that the limit distribution of reasonably large games can be estimated quite accurately through sampling and simulation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 482,156 |
1608.05766 | On Nonconvex Decentralized Gradient Descent | Consensus optimization has received considerable attention in recent years. A number of decentralized algorithms have been proposed for \emph{convex} consensus optimization. However, our understanding of the behaviors of \emph{nonconvex} consensus optimization is more limited. When we lose convexity, we cannot hope that our algorithms always return global solutions, though they sometimes still do. Somewhat surprisingly, the decentralized consensus algorithms DGD and Prox-DGD retain most other properties that are known in the convex setting. In particular, when diminishing (or constant) step sizes are used, we can prove convergence to a (or a neighborhood of a) consensus stationary solution under some regularity assumptions. It is worth noting that Prox-DGD can handle nonconvex nonsmooth functions if their proximal operators can be computed. Such functions include SCAD and $\ell_q$ quasi-norms, $q\in[0,1)$. Similarly, Prox-DGD can handle a constraint to a nonconvex set with an easy projection. To establish these properties, we have to introduce a completely different line of analysis, as well as modify existing proofs that were used in the convex setting. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 60,021
2109.05702 | Covert queueing problem with a Markovian statistic | Based on the covert communication framework, we consider a covert queueing problem that has a Markovian statistic. Willie jobs arrive according to a Poisson process and require service from server Bob. Bob does not have a queue for jobs to wait and hence when the server is busy, arriving Willie jobs are lost. Willie and Bob enter a contract under which Bob should only serve Willie jobs. As part of the usage statistic, for a sequence of N consecutive jobs that arrived, Bob informs Willie whether each job was served or lost (this is the Markovian statistic). Bob is assumed to be violating the contract and admitting non-Willie (Nillie) jobs according to a Poisson process. For such a setting, we identify the hypothesis testing to be performed (given the Markovian data) by Willie to detect the presence or absence of Nillie jobs. We also characterize the upper bound on arrival rate of Nillie jobs such that the error in the hypothesis testing of Willie is arbitrarily large, ensuring covertness in admitting Nillie jobs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 254,905 |
2409.11432 | A hybrid solution for 2-UAV RAN slicing | It's possible to distribute the Internet to users via drones. However, it is then necessary to place the drones according to the positions of the users. Moreover, the 5th Generation (5G) New Radio (NR) technology is designed to accommodate a wide range of applications and industries. The NGMN 5G White Paper \cite{5gwhitepaper} groups these vertical use cases into three categories: - enhanced Mobile Broadband (eMBB) - massive Machine Type Communication (mMTC) - Ultra-Reliable Low-latency Communication (URLLC). Partitioning the physical network into multiple virtual networks appears to be the best way to provide a customised service for each application and limit operational costs. This design is well known as \textit{network slicing}. Each drone must thus slice its bandwidth between each of the 3 user classes. This whole problem (placement + bandwidth) can be defined as an optimization problem, but since it is very hard to solve efficiently, it is almost always addressed by AI in the literature. In my internship, I wanted to prove that viewing the problem as an optimization problem can still be useful, by building a hybrid solution involving AI on the one hand and optimization on the other. I use it to achieve better results than approaches that use only AI, although at the cost of slightly larger (but still reasonable) computation times. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 489,155
1412.5263 | Graph Analytics using the Vertica Relational Database | Graph analytics is becoming increasingly popular, with a deluge of new systems for graph analytics having been proposed in the past few years. These systems often start from the assumption that a new storage or query processing system is needed, in spite of graph data being often collected and stored in a relational database in the first place. In this paper, we study the Vertica relational database as a platform for graph analytics. We show that vertex-centric graph analysis can be translated to SQL queries, typically involving table scans and joins, and that modern column-oriented databases are very well suited to running such queries. Specifically, we present an experimental evaluation of the Vertica relational database system on a variety of graph analytics, including iterative analysis, a combination of graph and relational analyses, and more complex 1-hop neighborhood graph analytics, showing that it is competitive with two popular vertex-centric graph analytics systems, namely Giraph and GraphLab. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 38,469
2202.02491 | Distributed Learning With Sparsified Gradient Differences | A very large number of communications are typically required to solve distributed learning tasks, and this critically limits scalability and convergence speed in wireless communications applications. In this paper, we devise a Gradient Descent method with Sparsification and Error Correction (GD-SEC) to improve the communications efficiency in a general worker-server architecture. Motivated by a variety of wireless communications learning scenarios, GD-SEC reduces the number of bits per communication from worker to server with no degradation in the order of the convergence rate. This enables larger-scale model learning without sacrificing convergence or accuracy. At each iteration of GD-SEC, instead of directly transmitting the entire gradient vector, each worker computes the difference between its current gradient and a linear combination of its previously transmitted gradients, and then transmits the sparsified gradient difference to the server. A key feature of GD-SEC is that any given component of the gradient difference vector will not be transmitted if its magnitude is not sufficiently large. An error correction technique is used at each worker to compensate for the error resulting from sparsification. We prove that GD-SEC is guaranteed to converge for strongly convex, convex, and nonconvex optimization problems with the same order of convergence rate as GD. Furthermore, if the objective function is strongly convex, GD-SEC has a fast linear convergence rate. Numerical results not only validate the convergence rate of GD-SEC but also explore the communication bit savings it provides. Given a target accuracy, GD-SEC can significantly reduce the communications load compared to the best existing algorithms without slowing down the optimization process. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,833
2107.08567 | Structural Design Recommendations in the Early Design Phase using
Machine Learning | Structural engineering knowledge can be of significant importance to the architectural design team during the early design phase. However, architects and engineers do not typically work together during the conceptual phase; in fact, structural engineers are often called late into the process. As a result, updates in the design are more difficult and time-consuming to complete. At the same time, there is a lost opportunity for better design exploration guided by structural feedback. In general, the earlier in the design process the iteration happens, the greater the benefits in cost efficiency and informed design exploration, which can lead to higher-quality creative results. In order to facilitate an informed exploration in the early design stage, we suggest the automation of fundamental structural engineering tasks and introduce ApproxiFramer, a Machine Learning-based system for the automatic generation of structural layouts from building plan sketches in real-time. The system aims to assist architects by presenting them with feasible structural solutions during the conceptual phase so that they proceed with their design with adequate knowledge of its structural implications. In this paper, we describe the system and evaluate the performance of a proof-of-concept implementation in the domain of orthogonal, metal, rigid structures. We trained a Convolutional Neural Net to iteratively generate structural design solutions for sketch-level building plans using a synthetic dataset and achieved an average error of 2.2% in the predicted positions of the columns. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 246,765
2105.00573 | Searchable Hidden Intermediates for End-to-End Models of Decomposable
Sequence Tasks | End-to-end approaches for sequence tasks are becoming increasingly popular. Yet for complex sequence tasks, like speech translation, systems that cascade several models trained on sub-tasks have been shown to be superior, suggesting that the compositionality of cascaded systems simplifies learning and enables sophisticated search capabilities. In this work, we present an end-to-end framework that exploits compositionality to learn searchable hidden representations at intermediate stages of a sequence model using decomposed sub-tasks. These hidden intermediates can be improved using beam search to enhance the overall performance and can also incorporate external models at intermediate stages of the network to re-score or adapt towards out-of-domain data. One instance of the proposed framework is a Multi-Decoder model for speech translation that extracts the searchable hidden intermediates from a speech recognition sub-task. The model demonstrates the aforementioned benefits and outperforms the previous state-of-the-art by around +6 and +3 BLEU on the two test sets of Fisher-CallHome and by around +3 and +4 BLEU on the English-German and English-French test sets of MuST-C. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 233,259
2411.11079 | Electrostatic Force Regularization for Neural Structured Pruning | The demand for deploying deep convolutional neural networks (DCNNs) on resource-constrained devices for real-time applications remains substantial. However, existing state-of-the-art structured pruning methods often involve intricate implementations, require modifications to the original network architectures, and necessitate an extensive fine-tuning phase. To overcome these challenges, we propose a novel method that, for the first time, incorporates the concepts of charge and electrostatic force from physics into the training process of DCNNs. The magnitude of this force is directly proportional to the product of the charges of the convolution filter and the source filter, and inversely proportional to the square of the distance between them. We applied this electrostatic-like force to the convolution filters, either attracting filters with opposite charges toward non-zero weights or repelling filters with like charges toward zero weights. Consequently, filters subject to repulsive forces have their weights reduced to zero, enabling their removal, while the attractive forces preserve filters with significant weights that retain information. Unlike conventional methods, our approach is straightforward to implement, does not require any architectural modifications, and simultaneously optimizes weights and ranks filter importance, all without the need for extensive fine-tuning. We validated the efficacy of our method on modern DCNN architectures using the MNIST, CIFAR, and ImageNet datasets, achieving competitive performance compared to existing structured pruning approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,911 |
1307.1387 | Examining the Classification Accuracy of TSVMs with Feature Selection
in Comparison with the GLAD Algorithm | Gene expression data sets are used to classify and predict patient diagnostic categories. As we know, it is extremely difficult and expensive to obtain labelled gene expression examples. Moreover, conventional supervised approaches such as Support Vector Machine (SVM) algorithms cannot function properly when labelled data (training examples) are insufficient. Therefore, in this paper, we suggest Transductive Support Vector Machines (TSVMs) as semi-supervised learning algorithms, learning from both labelled and unlabelled samples to perform the classification of microarray data. To prune the superfluous genes and samples we used a feature selection method called Recursive Feature Elimination (RFE), which is supposed to enhance the output of classification and avoid the local optimization problem. We examined the classification prediction accuracy of the TSVM-RFE algorithm in comparison with the Genetic Learning Across Datasets (GLAD) algorithm, as both are semi-supervised learning methods. Comparing these two methods, we found that TSVM-RFE surpassed both an SVM using RFE and GLAD. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 25,622
2412.03307 | Contextual Data Integration for Bike-sharing Demand Prediction with
Graph Neural Networks in Degraded Weather Conditions | Demand for bike sharing is impacted by various factors, such as weather conditions, events, and the availability of other transportation modes. This impact remains elusive due to the complex interdependence of these factors and location-related variations in user behavior. It is also unclear which factors provide additional information not already contained in the historical demand. Intermodal dependencies between bike-sharing and other modes are also underexplored, and the value of this information has not been studied in degraded situations. The proposed study analyzes the impact of adding contextual data, such as weather, time embedding, and road traffic flow, to predict bike-sharing Origin-Destination (OD) flows in atypical weather situations. Our study highlights a mild relationship between the prediction quality of bike-sharing demand and road traffic flow, while the introduced time embedding allows outperforming state-of-the-art results, particularly in the case of degraded weather conditions. Including weather data as an additional input further improves our model with respect to the basic ST-ED-RMGC prediction model, reducing the prediction error by more than 20% in degraded weather conditions. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 513,909
2010.05324 | Multilingual Offensive Language Identification with Cross-lingual
Embeddings | Offensive content is pervasive in social media and a reason for concern to companies and government organizations. Several studies have been recently published investigating methods to detect the various forms of such content (e.g. hate speech, cyberbullying, and cyberaggression). The clear majority of these studies deal with English, partially because most available annotated datasets contain English data. In this paper, we take advantage of the available English data by applying cross-lingual contextual word embeddings and transfer learning to make predictions in languages with fewer resources. We project predictions on comparable data in Bengali, Hindi, and Spanish and we report results of 0.8415 F1 macro for Bengali, 0.8568 F1 macro for Hindi, and 0.7513 F1 macro for Spanish. Finally, we show that our approach compares favorably to the best systems submitted to recent shared tasks on these three languages, confirming the robustness of cross-lingual contextual embeddings and transfer learning for this task. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 200,073
1304.2749 | Evidential Reasoning in Image Understanding | In this paper, we present some results of evidential reasoning in understanding multispectral images of remote sensing systems. The Dempster-Shafer approach of combination of evidences is pursued to yield contextual classification results, which are compared with previous results of the Bayesian context free classification, contextual classifications of dynamic programming and stochastic relaxation approaches. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 23,758 |
1011.0492 | Multiscale Bone Remodelling with Spatial P Systems | Many biological phenomena are inherently multiscale, i.e. they are characterized by interactions involving different spatial and temporal scales simultaneously. Though several approaches have been proposed to provide "multilayer" models, only Complex Automata, derived from Cellular Automata, naturally embed spatial information and realize multiscaling with well-established inter-scale integration schemas. Spatial P systems, a variant of P systems in which a more geometric concept of space has been added, have several characteristics in common with Cellular Automata. We propose such a formalism as a basis to rephrase the Complex Automata multiscaling approach and, in this perspective, provide a 2-scale Spatial P system describing bone remodelling. The proposed model not only proves to be highly faithful and expressive in a multiscale scenario, but also highlights the need for a deep and formal expressiveness study involving Complex Automata, Spatial P systems and other promising multiscale approaches, such as our shape-based approach, which has already proved to be highly faithful. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 8,109
1609.09270 | Pano2CAD: Room Layout From A Single Panorama Image | This paper presents a method of estimating the geometry of a room and the 3D pose of objects from a single 360-degree panorama image. Assuming Manhattan World geometry, we formulate the task as a Bayesian inference problem in which we estimate positions and orientations of walls and objects. The method combines surface normal estimation, 2D object detection and 3D object pose estimation. Quantitative results are presented on a dataset of synthetically generated 3D rooms containing objects, as well as on a subset of hand-labeled images from the public SUN360 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 61,700 |
2211.06770 | MicroISP: Processing 32MP Photos on Mobile Devices with Deep Learning | While neural networks-based photo processing solutions can provide a better image quality compared to the traditional ISP systems, their application to mobile devices is still very limited due to their very high computational complexity. In this paper, we present a novel MicroISP model designed specifically for edge devices, taking into account their computational and memory limitations. The proposed solution is capable of processing up to 32MP photos on recent smartphones using the standard mobile ML libraries and requiring less than 1 second to perform the inference, while for FullHD images it achieves real-time performance. The architecture of the model is flexible, allowing its complexity to be adjusted to devices of different computational power. To evaluate the performance of the model, we collected a novel Fujifilm UltraISP dataset consisting of thousands of paired photos captured with a normal mobile camera sensor and a professional 102MP medium-format FujiFilm GFX100 camera. The experiments demonstrated that, despite its compact size, the MicroISP model is able to provide comparable or better visual results than the traditional mobile ISP systems, while outperforming the previously proposed efficient deep learning based solutions. Finally, this model is also compatible with the latest mobile AI accelerators, achieving good runtime and low power consumption on smartphone NPUs and APUs. The code, dataset and pre-trained models are available on the project website: https://people.ee.ethz.ch/~ihnatova/microisp.html | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 330,019
2306.00310 | Prompt Algebra for Task Composition | We investigate whether prompts learned independently for different tasks can be later combined through prompt algebra to obtain a model that supports composition of tasks. We consider Visual Language Models (VLM) with prompt tuning as our base classifier and formally define the notion of prompt algebra. We propose constrained prompt tuning to improve performance of the composite classifier. In the proposed scheme, prompts are constrained to appear in the lower dimensional subspace spanned by the basis vectors of the pre-trained vocabulary. Further regularization is added to ensure that the learned prompt is grounded correctly to the existing pre-trained vocabulary. We demonstrate the effectiveness of our method on object classification and object-attribute classification datasets. On average, our composite model obtains classification accuracy within 2.5% of the best base model. On UTZappos it improves classification accuracy over the best base model by 8.45% on average. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 369,956 |
2206.10909 | Model-Driven Deep Learning-Based MIMO-OFDM Detector: Design, Simulation,
and Experimental Results | Multiple-input multiple-output orthogonal frequency division multiplexing (MIMO-OFDM), a fundamental transmission scheme, promises high throughput and robustness against multipath fading. However, these benefits rely on the efficient detection strategy at the receiver and come at the expense of the extra bandwidth consumed by the cyclic prefix (CP). We use the iterative orthogonal approximate message passing (OAMP) algorithm in this paper as the prototype of the detector because of its remarkable potential for interference suppression. However, OAMP is computationally expensive for the matrix inversion per iteration. We replace the matrix inversion with the conjugate gradient (CG) method to reduce the complexity of OAMP. We further unfold the CG-based OAMP algorithm into a network and tune the critical parameters through deep learning (DL) to enhance detection performance. Simulation results and complexity analysis show that the proposed scheme has significant gain over other iterative detection methods and exhibits comparable performance to the state-of-the-art DL-based detector at a reduced computational cost. Furthermore, we design a highly efficient CP-free MIMO-OFDM receiver architecture to remove the CP overhead. This architecture first eliminates the intersymbol interference by buffering the previously recovered data and then detects the signal using the proposed detector. Numerical experiments demonstrate that the designed receiver offers a higher spectral efficiency than traditional receivers. Finally, over-the-air tests verify the effectiveness and robustness of the proposed scheme in realistic environments. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 304,080 |
2501.03545 | Beyond Factual Accuracy: Evaluating Coverage of Diverse Factual
Information in Long-form Text Generation | This paper presents ICAT, an evaluation framework for measuring coverage of diverse factual information in long-form text generation. ICAT breaks down a long output text into a list of atomic claims and not only verifies each claim through retrieval from a (reliable) knowledge source, but also computes the alignment between the atomic factual claims and various aspects expected to be presented in the output. We study three implementations of the ICAT framework, each with a different assumption on the availability of aspects and alignment method. By adopting data from the diversification task in the TREC Web Track and the ClueWeb corpus, we evaluate the ICAT framework. We demonstrate strong correlation with human judgments and provide comprehensive evaluation across multiple state-of-the-art LLMs. Our framework further offers interpretable and fine-grained analysis of diversity and coverage. Its modular design allows for easy adaptation to different domains and datasets, making it a valuable tool for evaluating the qualitative aspects of long-form responses produced by LLMs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 522,915 |
0708.4214 | High Rate Single-Symbol Decodable Precoded DSTBCs for Cooperative
Networks | Distributed Orthogonal Space-Time Block Codes (DOSTBCs) achieving full diversity order and single-symbol ML decodability have been introduced recently for cooperative networks, and an upper-bound on the maximal rate of such codes along with code constructions has been presented. In this report, we introduce a new class of Distributed STBCs called Semi-orthogonal Precoded Distributed Single-Symbol Decodable STBCs (S-PDSSDC) wherein the source performs co-ordinate interleaving of information symbols appropriately before transmitting it to all the relays. It is shown that DOSTBCs are a special case of S-PDSSDCs. A special class of S-PDSSDCs having diagonal covariance matrix at the destination is studied and an upper bound on the maximal rate of such codes is derived. The obtained bounds are approximately twice as large as those of the DOSTBCs. A systematic construction of S-PDSSDCs is presented when the number of relays $K \geq 4$. The constructed codes are shown to achieve the upper-bound on the rate when $K$ is of the form 0 modulo 4 or 3 modulo 4. For the rest of the values of $K$, the constructed codes are shown to have rates higher than that of DOSTBCs. It is also shown that S-PDSSDCs cannot be constructed with any form of linear processing at the relays when the source doesn't perform co-ordinate interleaving of the information symbols. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 615
2406.18575 | Research on Driver Facial Fatigue Detection Based on Yolov8 Model | In a society where traffic accidents frequently occur, fatigue driving has emerged as a grave issue. Fatigue driving detection technology, especially those based on the YOLOv8 deep learning model, has seen extensive research and application as an effective preventive measure. This paper discusses in depth the methods and technologies utilized in the YOLOv8 model to detect driver fatigue, elaborates on the current research status both domestically and internationally, and systematically introduces the processing methods and algorithm principles for various datasets. This study aims to provide a robust technical solution for preventing and detecting fatigue driving, thereby contributing significantly to reducing traffic accidents and safeguarding lives. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 468,084 |
1607.04731 | Weakly supervised object detection using pseudo-strong labels | Object detection is an important task in computer vision. A variety of methods have been proposed, but methods using weak labels still do not achieve satisfactory results. In this paper, we propose a new framework that uses a weakly supervised method's output as pseudo-strong labels to train a strongly supervised model. One weakly supervised method is treated as a black box to generate class-specific bounding boxes on the training dataset. A de-noising method is then applied to the noisy bounding boxes. The de-noised pseudo-strong labels are then used to train a strongly supervised object detection network. The whole framework is still weakly supervised because the entire process uses only image-level labels. Experimental results on PASCAL VOC 2007 demonstrate the validity of our framework: we achieve 43.4% mean average precision, compared to 39.5% for the previous best result and 34.5% for the initial method, respectively. The framework is simple and distinct, and promises to be easily applicable to other methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 58,650
2407.00896 | Channel Modeling Aided Dataset Generation for AI-Enabled CSI Feedback:
Advances, Challenges, and Solutions | The AI-enabled autoencoder has demonstrated great potential in channel state information (CSI) feedback in frequency division duplex (FDD) multiple input multiple output (MIMO) systems. However, this method completely changes the existing feedback strategies, making it impractical to deploy in recent years. To address this issue, this paper proposes a channel modeling aided data augmentation method based on a limited number of field channel data. Specifically, the user equipment (UE) extracts the primary stochastic parameters of the field channel data and transmits them to the base station (BS). The BS then updates the typical TR 38.901 model parameters with the extracted parameters. In this way, the updated channel model is used to generate the dataset. This strategy comprehensively considers the dataset collection, model generalization, model monitoring, and so on. Simulations verify that our proposed strategy can significantly improve performance compared to the benchmarks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 469,045 |
1704.04800 | Non-parametric Impedance based Stability and Controller Bandwidth
Extraction from Impedance Measurements of HVDC-connected Wind Farms | Impedance measurements have been widely used with the Nyquist plot to estimate the stability of interconnected power systems. As a black-box method for equivalent and aggregated impedance estimation, its use for identifying the bandwidths of sub-components is not a straightforward task. This paper proposes a simple method that enables identifying the specific part of the equivalent impedance (e.g. a controller's bandwidth) that has the major impact on the stability of the system. To do so, the paper analyses the stability of an interconnected system of wind farms and a high voltage dc (HVDC) transmission system. The impedance frequency responses of the wind farms and the HVDC system are measured from the ac collection point, and the proposed method identifies which controller has the major impact on the observed oscillation. A mitigation technique is proposed based on re-tuning the bandwidth of the critical controller of the interconnected converters. The suggested method can reveal the internal controller dynamics of the wind farm from the measured impedance, combined with an analytical expression of the impedance and a transfer function identity, when no information about the controllers is provided by the vendors due to confidentiality and industry secrecy. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 71,895
2406.15791 | Wireless MapReduce Arrays for Coded Distributed Computing | We consider a wireless distributed computing system based on the MapReduce framework, which consists of three phases: \textit{Map}, \textit{Shuffle}, and \textit{Reduce}. The system consists of a set of distributed nodes assigned to compute arbitrary output functions depending on a file library. The computation of the output functions is decomposed into Map and Reduce functions, and the Shuffle phase, which involves the data exchange, links the two. In our model, the Shuffle phase communication happens over a full-duplex wireless interference channel. For this setting, a coded wireless MapReduce distributed computing scheme exists in the literature, achieving optimal performance under one-shot linear schemes. However, the scheme requires the number of input files to be very large, growing exponentially with the number of nodes. We present schemes that require the number of files to be in the order of the number of nodes and achieve the same performance as the existing scheme. The schemes are obtained by designing a structure called wireless MapReduce array that succinctly represents all three phases in a single array. The wireless MapReduce arrays can also be obtained from the extended placement delivery arrays known for multi-antenna coded caching schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 466,869 |
2308.12831 | EFormer: Enhanced Transformer towards Semantic-Contour Features of
Foreground for Portraits Matting | The portrait matting task aims to extract an alpha matte with complete semantics and finely-detailed contours. In comparison to CNN-based approaches, transformers with self-attention modules have a better capacity to capture long-range dependencies and low-frequency semantic information of a portrait. However, recent research shows that the self-attention mechanism struggles with modeling high-frequency contour information and capturing fine contour details, which can lead to bias while predicting the portrait's contours. To deal with this issue, we propose EFormer to enhance the model's attention towards both the low-frequency semantic and high-frequency contour features. For the high-frequency contours, our research demonstrates that a cross-attention module between different resolutions can guide our model to allocate attention appropriately to these contour regions. Building on this, we can successfully extract the high-frequency detail information around the portrait's contours, which were previously ignored by self-attention. Based on the cross-attention module, we further build a semantic and contour detector (SCD) to accurately capture both the low-frequency semantic and high-frequency contour features. We design a contour-edge extraction branch and a semantic extraction branch to extract refined high-frequency contour features and complete low-frequency semantic information, respectively. Finally, we fuse the two kinds of features and leverage a segmentation head to generate a predicted portrait matte. Experiments on the VideoMatte240K (JPEG SD Format) and Adobe Image Matting (AIM) datasets demonstrate that EFormer outperforms previous portrait matting methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 387,684
2302.13960 | Acquisition Conditioned Oracle for Nongreedy Active Feature Acquisition | We develop novel methodology for active feature acquisition (AFA), the study of how to sequentially acquire a dynamic (on a per instance basis) subset of features that minimizes acquisition costs whilst still yielding accurate predictions. The AFA framework can be useful in a myriad of domains, including health care applications where the cost of acquiring additional features for a patient (in terms of time, money, risk, etc.) can be weighed against the expected improvement to diagnostic performance. Previous approaches for AFA have employed either: deep learning RL techniques, which have difficulty training policies in the AFA MDP due to sparse rewards and a complicated action space; deep learning surrogate generative models, which require modeling complicated multidimensional conditional distributions; or greedy policies, which fail to account for how joint feature acquisitions can be informative together for better predictions. In this work we show that we can bypass many of these challenges with a novel, nonparametric oracle based approach, which we coin the acquisition conditioned oracle (ACO). Extensive experiments show the superiority of the ACO to state-of-the-art AFA methods when acquiring features for both predictions and general decision-making. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 348,109 |
2005.07310 | Behind the Scene: Revealing the Secrets of Pre-trained
Vision-and-Language Models | Recent Transformer-based large-scale pre-trained models have revolutionized vision-and-language (V+L) research. Models such as ViLBERT, LXMERT and UNITER have significantly lifted state of the art across a wide range of V+L benchmarks with joint image-text pre-training. However, little is known about the inner mechanisms that underlie their impressive success. To reveal the secrets behind the scene of these powerful models, we present VALUE (Vision-And-Language Understanding Evaluation), a set of meticulously designed probing tasks (e.g., Visual Coreference Resolution, Visual Relation Detection, Linguistic Probing Tasks) generalizable to standard pre-trained V+L models, aiming to decipher the inner workings of multimodal pre-training (e.g., the implicit knowledge garnered in individual attention heads, the inherent cross-modal alignment learned through contextualized multimodal embeddings). Through extensive analysis of each archetypal model architecture via these probing tasks, our key observations are: (i) Pre-trained models exhibit a propensity for attending over text rather than images during inference. (ii) There exists a subset of attention heads that are tailored for capturing cross-modal interactions. (iii) The learned attention matrix in pre-trained models demonstrates patterns coherent with the latent alignment between image regions and textual words. (iv) Plotted attention patterns reveal visually-interpretable relations among image regions. (v) Pure linguistic knowledge is also effectively encoded in the attention heads. These are valuable insights serving to guide future work towards designing better model architectures and objectives for multimodal pre-training. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 177,248
2201.01388 | End-to-End Autoencoder Communications with Optimized Interference
Suppression | An end-to-end communications system based on Orthogonal Frequency Division Multiplexing (OFDM) is modeled as an autoencoder (AE) for which the transmitter (coding and modulation) and receiver (demodulation and decoding) are represented as deep neural networks (DNNs) of the encoder and decoder, respectively. This AE communications approach is shown to outperform conventional communications in terms of bit error rate (BER) under practical scenarios regarding channel and interference effects as well as training data and embedded implementation constraints. A generative adversarial network (GAN) is trained to augment the training data when there is not enough training data available. Also, the performance is evaluated in terms of the DNN model quantization and the corresponding memory requirements for embedded implementation. Then, interference training and randomized smoothing are introduced to train the AE communications to operate under unknown and dynamic interference (jamming) effects on potentially multiple OFDM symbols. Relative to conventional communications, up to 36 dB interference suppression for a channel reuse of four can be achieved by the AE communications with interference training and randomized smoothing. AE communications is also extended to the multiple-input multiple-output (MIMO) case and its BER performance gain with and without interference effects is demonstrated compared to conventional MIMO communications. | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | true | 274,233 |
2210.00489 | Unsupervised Multi-View Object Segmentation Using Radiance Field
Propagation | We present radiance field propagation (RFP), a novel approach to segmenting objects in 3D during reconstruction given only unlabeled multi-view images of a scene. RFP is derived from emerging neural radiance field-based techniques, which jointly encode semantics with appearance and geometry. The core of our method is a novel propagation strategy for individual objects' radiance fields with a bidirectional photometric loss, enabling an unsupervised partitioning of a scene into salient or meaningful regions corresponding to different object instances. To better handle complex scenes with multiple objects and occlusions, we further propose an iterative expectation-maximization algorithm to refine object masks. RFP is one of the first unsupervised approaches for tackling 3D real-scene object segmentation for neural radiance fields (NeRF) without any supervision, annotations, or other cues such as 3D bounding boxes and prior knowledge of object class. Experiments demonstrate that RFP achieves feasible segmentation results that are more accurate than previous unsupervised image/scene segmentation approaches, and are comparable to existing supervised NeRF-based methods. The segmented object representations enable individual 3D object editing operations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 320,883
2411.05960 | A method based on Generative Adversarial Networks for disentangling
physical and chemical properties of stars in astronomical spectra | Data compression techniques focused on information preservation have become essential in the modern era of big data. In this work, an encoder-decoder architecture has been designed, where adversarial training, a modification of the traditional autoencoder, is used in the context of astrophysical spectral analysis. The goal of this proposal is to obtain an intermediate representation of the astronomical stellar spectra, in which the contribution to the flux of a star due to the most influential physical properties (its surface temperature and gravity) disappears and the variance reflects only the effect of the chemical composition on the spectrum. A deep learning scheme is used with the aim of unraveling, in the latent space, the desired parameters from the rest of the information contained in the data. This work proposes a version of adversarial training that makes use of one discriminator per parameter to be disentangled, thus avoiding the exponential combination that occurs when using a single discriminator, as a result of the discretization of the values to be untangled. To test the effectiveness of the method, synthetic astronomical data are used from the APOGEE and Gaia surveys. In conjunction with the work presented, we also provide a disentangling framework (GANDALF) available to the community, which allows the replication, visualization, and extension of the method to domains of any nature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 506,906
2310.19802 | Stochastic Thermodynamics of Learning Parametric Probabilistic Models | We have formulated a family of machine learning problems as the time evolution of Parametric Probabilistic Models (PPMs), inherently rendering a thermodynamic process. Our primary motivation is to leverage the rich toolbox of thermodynamics of information to assess the information-theoretic content of learning a probabilistic model. We first introduce two information-theoretic metrics: Memorized-information (M-info) and Learned-information (L-info), which trace the flow of information during the learning process of PPMs. Then, we demonstrate that the accumulation of L-info during the learning process is associated with entropy production, and parameters serve as a heat reservoir in this process, capturing learned information in the form of M-info. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,138 |
2410.23831 | FRoundation: Are Foundation Models Ready for Face Recognition? | Foundation models are predominantly trained in an unsupervised or self-supervised manner on highly diverse and large-scale datasets, making them broadly applicable to various downstream tasks. In this work, we investigate for the first time whether such models are suitable for the specific domain of face recognition (FR). We further propose and demonstrate the adaptation of these models for FR across different levels of data availability, including synthetic data. Extensive experiments are conducted on multiple foundation models and datasets of varying scales for training and fine-tuning, with evaluation on a wide range of benchmarks. Our results indicate that, despite their versatility, pre-trained foundation models tend to underperform in FR in comparison with similar architectures trained specifically for this task. However, fine-tuning foundation models yields promising results, often surpassing models trained from scratch, particularly when training data is limited. For example, after fine-tuning only on 1K identities, DINOv2 ViT-S achieved an average verification accuracy on the LFW, CALFW, CPLFW, CFP-FP, and AgeDB30 benchmarks of 87.10%, compared to 64.70% achieved by the same model without fine-tuning, while training the same model architecture, ViT-S, from scratch on 1K identities reached 69.96%. With access to larger-scale FR training datasets, these performances reach 96.03% and 95.59% for the DINOv2 and CLIP ViT-L models, respectively. In comparison to ViT-based architectures trained from scratch for FR, fine-tuned architectures of foundation models achieve similar performance while requiring lower training computational costs and not relying on the assumption of extensive data availability. We further demonstrate the use of synthetic face data, showing improved performance over both pre-trained foundation and ViT models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 504,204
2203.15392 | Efficient Hybrid Network: Inducting Scattering Features | Recent work showed that hybrid networks, which combine predefined and learnt filters within a single architecture, are more amenable to theoretical analysis and less prone to overfitting in data-limited scenarios. However, their performance has yet to prove competitive against the conventional counterparts when sufficient amounts of training data are available. In an attempt to address this core limitation of current hybrid networks, we introduce an Efficient Hybrid Network (E-HybridNet). We show that it is the first scattering based approach that consistently outperforms its conventional counterparts on a diverse range of datasets. It is achieved with a novel inductive architecture that embeds scattering features into the network flow using Hybrid Fusion Blocks. We also demonstrate that the proposed design inherits the key property of prior hybrid networks -- an effective generalisation in data-limited scenarios. Our approach successfully combines the best of the two worlds: flexibility and power of learnt features and stability and predictability of scattering representations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 288,384 |
1902.01889 | Analyzing and Improving Representations with the Soft Nearest Neighbor
Loss | We explore and expand the $\textit{Soft Nearest Neighbor Loss}$ to measure the $\textit{entanglement}$ of class manifolds in representation space: i.e., how close pairs of points from the same class are relative to pairs of points from different classes. We demonstrate several use cases of the loss. As an analytical tool, it provides insights into the evolution of class similarity structures during learning. Surprisingly, we find that $\textit{maximizing}$ the entanglement of representations of different classes in the hidden layers is beneficial for discrimination in the final layer, possibly because it encourages representations to identify class-independent similarity structures. Maximizing the soft nearest neighbor loss in the hidden layers leads not only to improved generalization but also to better-calibrated estimates of uncertainty on outlier data. Data that is not from the training distribution can be recognized by observing that in the hidden layers, it has fewer than the normal number of neighbors from the predicted class. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,754 |
2206.07944 | Distributed Online Private Learning of Convex Nondecomposable Objectives | We deal with a general distributed constrained online learning problem with privacy over time-varying networks, where a class of nondecomposable objectives is considered. Under this setting, each node only controls a part of the global decision, and the goal of all nodes is to collaboratively minimize the global cost over a time horizon $T$ while guaranteeing the security of the transmitted information. For such problems, we first design a novel generic algorithm framework, named DPSDA, for differentially private distributed online learning using the Laplace mechanism and stochastic variants of the dual averaging method. Note that in the dual updates, all nodes of DPSDA employ noise-corrupted gradients for greater generality. Then, we propose two algorithms under this framework, named DPSDA-C and DPSDA-PS. In DPSDA-C, the nodes implement circulation-based communication in the primal updates so as to alleviate disagreements over time-varying undirected networks. In addition, for the extension to time-varying directed networks, the nodes implement broadcast-based push-sum dynamics in DPSDA-PS, which can achieve average consensus over arbitrary directed networks. Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}( \sqrt{T} )$ when the objective function is convex, which matches the best utility achievable by cutting-edge algorithms. Finally, numerical experiments on both synthetic and real-world datasets verify the effectiveness of our algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 302,944
1901.08710 | When Can Neural Networks Learn Connected Decision Regions? | Previous work has questioned the conditions under which the decision regions of a neural network are connected and further showed the implications of the corresponding theory for the problem of adversarial manipulation of classifiers. It has been proven that for a class of activation functions including leaky ReLU, neural networks having a pyramidal structure, that is, no layer has more hidden units than the input dimension, necessarily produce connected decision regions. In this paper, we advance this important result by further developing the sufficient and necessary conditions under which the decision regions of a neural network are connected. We then apply our framework to overcome the limits of existing work and further study the capacity to learn connected regions of neural networks for a much wider class of activation functions, including those widely used, namely ReLU, sigmoid, tanh, softplus, and the exponential linear function. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 119,558
2309.14660 | CoFiI2P: Coarse-to-Fine Correspondences for Image-to-Point Cloud
Registration | Image-to-point cloud (I2P) registration is a fundamental task for robots and autonomous vehicles to achieve cross-modality data fusion and localization. Current I2P registration methods primarily focus on estimating correspondences at the point or pixel level, often neglecting global alignment. As a result, I2P matching can easily converge to a local optimum if it lacks high-level guidance from global constraints. To improve the success rate and general robustness, this paper introduces CoFiI2P, a novel I2P registration network that extracts correspondences in a coarse-to-fine manner. First, the image and point cloud data are processed through a two-stream encoder-decoder network for hierarchical feature extraction. Second, a coarse-to-fine matching module is designed to leverage these features and establish robust feature correspondences. Specifically, in the coarse matching phase, a novel I2P transformer module is employed to capture both homogeneous and heterogeneous global information from the image and point cloud data. This enables the estimation of coarse super-point/super-pixel matching pairs with discriminative descriptors. In the fine matching module, point/pixel pairs are established with the guidance of super-point/super-pixel correspondences. Finally, based on the matching pairs, the transform matrix is estimated with the EPnP-RANSAC algorithm. Experiments conducted on the KITTI Odometry dataset demonstrate that CoFiI2P achieves impressive results, with a relative rotation error (RRE) of 1.14 degrees and a relative translation error (RTE) of 0.29 meters, while maintaining real-time speed. Additional experiments on the NuScenes dataset confirm our method's generalizability. The project page is available at \url{https://whu-usi3dv.github.io/CoFiI2P}. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 394,692
2303.02314 | Virtual Sparse Convolution for Multimodal 3D Object Detection | Recently, virtual/pseudo-point-based 3D object detection that seamlessly fuses RGB images and LiDAR data by depth completion has gained great attention. However, virtual points generated from an image are very dense, introducing a huge amount of redundant computation during detection. Meanwhile, noise brought by inaccurate depth completion significantly degrades detection precision. This paper proposes a fast yet effective backbone, termed VirConvNet, based on a new operator VirConv (Virtual Sparse Convolution), for virtual-point-based 3D object detection. VirConv consists of two key designs: (1) StVD (Stochastic Voxel Discard) and (2) NRConv (Noise-Resistant Submanifold Convolution). StVD alleviates the computation problem by discarding large amounts of nearby redundant voxels. NRConv tackles the noise problem by encoding voxel features in both 2D image and 3D LiDAR space. By integrating VirConv, we first develop an efficient pipeline, VirConv-L, based on an early fusion design. Then, we build a high-precision pipeline, VirConv-T, based on a transformed refinement scheme. Finally, we develop a semi-supervised pipeline, VirConv-S, based on a pseudo-label framework. On the KITTI car 3D detection test leaderboard, our VirConv-L achieves 85% AP with a fast running speed of 56 ms. Our VirConv-T and VirConv-S attain high precisions of 86.3% and 87.2% AP, and currently rank 2nd and 1st, respectively. The code is available at https://github.com/hailanyi/VirConv. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 349,301
2206.01904 | Soft Adversarial Training Can Retain Natural Accuracy | Adversarial training for neural networks has been in the limelight in recent years. The advancement in neural network architectures over the last decade has led to significant improvement in their performance, sparking interest in their deployment for real-time applications. This process initiated the need to understand the vulnerability of these models to adversarial attacks, which is instrumental in designing models that are robust against adversaries. Recent works have proposed novel techniques to counter adversaries, most often sacrificing natural accuracy. Most suggest training with an adversarial version of the inputs, constantly moving away from the original distribution. The focus of our work is to use abstract certification to extract a subset of inputs for (hence we call it 'soft') adversarial training. We propose a training framework that can retain natural accuracy without sacrificing robustness in a constrained setting. Our framework specifically targets moderately critical applications which require a reasonable balance between robustness and accuracy. The results testify to the idea of soft adversarial training for defense against adversarial attacks. Finally, we outline directions for future work to further improve this framework. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 300,660
2405.18033 | RT-GS2: Real-Time Generalizable Semantic Segmentation for 3D Gaussian
Representations of Radiance Fields | Gaussian Splatting has revolutionized the world of novel view synthesis by achieving high rendering performance in real-time. Recently, studies have focused on enriching these 3D representations with semantic information for downstream tasks. In this paper, we introduce RT-GS2, the first generalizable semantic segmentation method employing Gaussian Splatting. While existing Gaussian Splatting-based approaches rely on scene-specific training, RT-GS2 demonstrates the ability to generalize to unseen scenes. Our method adopts a new approach by first extracting view-independent 3D Gaussian features in a self-supervised manner, followed by a novel View-Dependent / View-Independent (VDVI) feature fusion to enhance semantic consistency over different views. Extensive experimentation on three different datasets showcases RT-GS2's superiority over the state-of-the-art methods in semantic segmentation quality, exemplified by an 8.01% increase in mIoU on the Replica dataset. Moreover, our method achieves real-time performance of 27.03 FPS, marking an astonishing 901 times speedup compared to existing approaches. This work represents a significant advancement in the field by introducing, to the best of our knowledge, the first real-time generalizable semantic segmentation method for 3D Gaussian representations of radiance fields. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 458,245
2004.08572 | Automatic Grading of Knee Osteoarthritis on the Kellgren-Lawrence Scale from Radiographs Using Convolutional Neural Networks | The severity of knee osteoarthritis is graded using the 5-point Kellgren-Lawrence (KL) scale where healthy knees are assigned grade 0, and the subsequent grades 1-4 represent increasing severity of the affliction. Although several methods have been proposed in recent years to develop models that can automatically predict the KL grade from a given radiograph, most models have been developed and evaluated on datasets not sourced from India. These models fail to perform well on the radiographs of Indian patients. In this paper, we propose a novel method using convolutional neural networks to automatically grade knee radiographs on the KL scale. Our method works in two connected stages: in the first stage, an object detection model segments individual knees from the rest of the image; in the second stage, a regression model automatically grades each knee separately on the KL scale. We train our model using the publicly available Osteoarthritis Initiative (OAI) dataset and demonstrate that fine-tuning the model before evaluating it on a dataset from a private hospital significantly improves the mean absolute error from 1.09 (95% CI: 1.03-1.15) to 0.28 (95% CI: 0.25-0.32). Additionally, we compare classification and regression models built for the same task and demonstrate that regression outperforms classification. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 173,104
2402.10184 | Reward Generalization in RLHF: A Topological Perspective | Existing alignment methods share a common topology of information flow, where reward information is collected from humans, modeled with preference learning, and used to tune language models. However, this shared topology has not been systematically characterized, nor have its alternatives been thoroughly explored, leaving the problems of low data efficiency and unreliable generalization unaddressed. As a solution, we introduce a theoretical framework for investigating reward generalization in reinforcement learning from human feedback (RLHF), focusing on the topology of information flow at both macro and micro levels. At the macro level, we portray the RLHF information flow as an autoencoding process over behavior distributions, formalizing the RLHF objective of distributional consistency between human preference and model behavior. At the micro level, we present induced Bayesian networks as a theory of reward generalization in RLHF, introducing fine-grained dataset topologies into generalization bounds. Combining analysis on both levels, we propose reward modeling from tree-structured preference information. It is shown to reduce reward uncertainty by up to $\Theta(\log n/\log\log n)$ times compared to baselines, where $n$ is the dataset size. Validation on three NLP tasks shows that our tree-based reward model achieves an average win rate of 65% against baseline methods, thus improving reward generalization for free via topology design. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | true | 429,857 |
2402.12101 | Design and Performance of Enhanced Spread Spectrum Aloha for Unsourced Multiple Access | We analyze the performance of enhanced spread spectrum Aloha (E-SSA) in the framework of unsourced multiple access (UMAC). The asynchronous, unframed transmission of E-SSA is modified to enable a direct comparison with framed UMAC schemes and with Polyanskiy's achievability bound. The design of E-SSA is tailored to the UMAC setting, resorting to short polar codes and the use of a timing channel to improve the energy efficiency of the protocol. We assess the impact of the preamble length and of the spreading factor on the system efficiency. The resulting scheme exhibits simplicity at the transmitter and linear complexity with respect to the number of active users at the receiver, approaching the UMAC achievability bound in close competition with the best known UMAC schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 430,715
2208.03763 | Label Semantic Knowledge Distillation for Unbiased Scene Graph Generation | The Scene Graph Generation (SGG) task aims to detect all the objects and their pairwise visual relationships in a given image. Although SGG has achieved remarkable progress over the last few years, almost all existing SGG models follow the same training paradigm: they treat both object and predicate classification in SGG as a single-label classification problem, and the ground-truths are one-hot target labels. However, this prevalent training paradigm has overlooked two characteristics of current SGG datasets: 1) For positive samples, some specific subject-object instances may have multiple reasonable predicates. 2) For negative samples, there are numerous missing annotations. Without accounting for these two characteristics, SGG models are easily confused and make wrong predictions. To this end, we propose a novel model-agnostic Label Semantic Knowledge Distillation (LS-KD) for unbiased SGG. Specifically, LS-KD dynamically generates a soft label for each subject-object instance by fusing a predicted Label Semantic Distribution (LSD) with its original one-hot target label. LSD reflects the correlations between this instance and multiple predicate categories. Meanwhile, we propose two different strategies to predict LSD: iterative self-KD and synchronous self-KD. Extensive ablations and results on three SGG tasks have attested to the superiority and generality of our proposed LS-KD, which can consistently achieve a decent trade-off in performance between different predicate categories. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,890
2501.07276 | Bridging Smart Meter Gaps: A Benchmark of Statistical, Machine Learning and Time Series Foundation Models for Data Imputation | The integrity of time series data in smart grids is often compromised by missing values due to sensor failures, transmission errors, or disruptions. Gaps in smart meter data can bias consumption analyses and hinder reliable predictions, causing technical and economic inefficiencies. As smart meter data grows in volume and complexity, conventional techniques struggle with its nonlinear and nonstationary patterns. In this context, Generative Artificial Intelligence offers promising solutions that may outperform traditional statistical methods. In this paper, we evaluate two general-purpose Large Language Models and five Time Series Foundation Models for smart meter data imputation, comparing them with conventional Machine Learning and statistical models. We introduce artificial gaps (30 minutes to one day) into an anonymized public dataset to test inference capabilities. Results show that Time Series Foundation Models, with their contextual understanding and pattern recognition, could significantly enhance imputation accuracy in certain cases. However, the trade-off between computational cost and performance gains remains a critical consideration. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 524,338
1411.5923 | Stability and disturbance attenuation for a switched Markov jump linear system | We address a class of Markov jump linear systems that are characterized by the underlying Markov process being time-inhomogeneous with a priori unknown transition probabilities. Necessary and sufficient conditions for uniform stochastic stability and uniform stochastic disturbance attenuation are reported. In both cases, conditions are expressed as a set of finite-dimensional linear matrix inequalities that can be solved efficiently. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 37,780
2411.12901 | Signformer is all you need: Towards Edge AI for Sign Language | Sign language translation, especially in the gloss-free paradigm, is confronting a dilemma of impracticality and unsustainability due to growing resource-intensive methodologies. Contemporary state-of-the-art methods (SOTAs) hinge heavily on sophisticated pretrained backbones such as Large Language Models (LLMs), embedding sources, or extensive datasets, inducing considerable parametric and computational inefficiency for sustainable use in real-world scenarios. Despite their success, following this research direction undermines the overarching mission of this domain to create substantial value that bridges hard-of-hearing and hearing populations. Departing from the prevailing trend of LLM and Natural Language Processing (NLP) studies, we pursue a profound essential change in architecture to achieve ground-up improvements without external aid from pretrained models, prior knowledge transfer, or any NLP strategies considered not-from-scratch. We introduce Signformer, a from-scratch Feather-Giant transforming the area towards Edge AI, redefining the extremities of performance and efficiency with LLM-competence and edge-deployable compactness. In this paper, we present a nature analysis of sign languages to inform our algorithmic design and deliver a scalable transformer pipeline with convolution and attention novelty. We achieve a new 2nd place on the leaderboard with a parametric reduction of 467-1807x against the finest models as of 2024 and outcompete almost every other method in a lighter configuration of 0.57 million parameters. | true | false | false | false | false | false | true | false | true | false | false | true | false | true | false | false | false | false | 509,590
2409.02647 | Learning-Based Error Detection System for Advanced Vehicle Instrument Cluster Rendering | The automotive industry is currently expanding digital display options with every new model that comes onto the market. This entails not just an expansion in dimensions, resolution, and customization choices, but also the capability to employ novel display effects like overlays while assembling the content of the display cluster. Unfortunately, this raises the need for appropriate monitoring systems that can detect rendering errors and apply appropriate countermeasures when required. Classical solutions such as Cyclic Redundancy Checks (CRC) will soon no longer be viable as any sort of alpha blending, warping or scaling of content can cause unwanted CRC violations. Therefore, we propose a novel monitoring approach to verify correctness of displayed content using telltales (e.g. warning signs) as an example. It uses a learning-based approach to separate "good" telltales, i.e. those that a human driver will understand correctly, and "corrupted" telltales, i.e. those that will not be visible or perceived correctly. As a result, it possesses inherent resilience against individual pixel errors and implicitly supports changing backgrounds, overlay or scaling effects. This is underlined by our experimental study where all "corrupted" test patterns were correctly classified, while no false alarms were triggered. | true | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 485,780