| id (string, 9–16 chars) | title (string, 4–278 chars) | abstract (string, 3–4.08k chars) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1807.06874 | An Information-theoretic Framework for the Lossy Compression of Link Streams | Graph compression is a data analysis technique that consists in replacing parts of a graph with more general structural patterns in order to reduce its description length. It notably provides interesting exploration tools for the study of real, large-scale, and complex graphs which cannot be grasped at first glance. This article proposes a framework for the compression of temporal graphs, that is, for the compression of graphs that evolve with time. This framework first builds on a simple and limited scheme, exploiting structural equivalence for the lossless compression of static graphs, then generalises it to the lossy compression of link streams, a recent formalism for the study of temporal graphs. This generalisation relies on the natural extension of (bidimensional) relational data by the addition of a third, temporal dimension. Moreover, we introduce an information-theoretic measure to quantify and control the information that is lost during compression, as well as an algebraic characterisation of the space of possible compression patterns to enhance the expressiveness of the initial compression scheme. These contributions lead to the definition of a combinatorial optimisation problem, the Lossy Multistream Compression Problem, for which we provide an exact algorithm. | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | true | 103,219 |
2104.07798 | Memory Order Decomposition of Symbolic Sequences | We introduce a general method for the study of memory in symbolic sequences based on higher-order Markov analysis. The Markov process that best represents a sequence is expressed as a mixture of matrices of minimal orders, enabling the definition of the so-called memory profile, which unambiguously reflects the true order of correlations. The method is validated by recovering the memory profiles of tunable synthetic sequences. Finally, we scan real data and showcase with practical examples how our protocol can be used to extract relevant stochastic properties of symbolic sequences. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 230,548 |
2310.19653 | A Note on Generalization in Variational Autoencoders: How Effective Is Synthetic Data & Overparameterization? | Variational autoencoders (VAEs) are deep probabilistic models that are used in scientific applications; however, they are prone to overfitting, which hurts their generalization. Many works try to mitigate this problem from the probabilistic-methods perspective via new inference techniques or training procedures. In this paper, we approach the problem instead from the deep learning perspective by investigating the effectiveness of using synthetic data and overparameterization for improving generalization performance. Our motivation comes from (1) the recent discussion on whether the increasing amount of publicly accessible synthetic data will improve or hurt currently trained generative models; and (2) the modern deep learning insight that overparameterization improves generalization. Our investigation shows how both training on samples from a pre-trained diffusion model and using more parameters at certain layers are able to effectively mitigate overfitting in VAEs, therefore improving their generalization, amortized inference, and robustness performance. Our study provides timely insights in the current era of synthetic data and scaling laws. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 404,071 |
2410.22488 | Privacy-Preserving Dynamic Assortment Selection | With the growing demand for personalized assortment recommendations, concerns over data privacy have intensified, highlighting the urgent need for effective privacy-preserving strategies. This paper presents a novel framework for privacy-preserving dynamic assortment selection using the multinomial logit (MNL) bandits model. Our approach employs a perturbed upper confidence bound method, integrating calibrated noise into user utility estimates to balance between exploration and exploitation while ensuring robust privacy protection. We rigorously prove that our policy satisfies Joint Differential Privacy (JDP), which better suits dynamic environments than traditional differential privacy, effectively mitigating inference attack risks. This analysis is built upon a novel objective perturbation technique tailored for MNL bandits, which is also of independent interest. Theoretically, we derive a near-optimal regret bound of $\tilde{O}(\sqrt{T})$ for our policy and explicitly quantify how privacy protection impacts regret. Through extensive simulations and an application to the Expedia hotel dataset, we demonstrate substantial performance enhancements over the benchmark method. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 503,649 |
2106.01718 | Fast improvement of TEM image with low-dose electrons by deep learning | Low-electron-dose observation is indispensable for observing various samples using a transmission electron microscope; consequently, image processing has been used to improve transmission electron microscopy (TEM) images. To apply such image processing to in situ observations, here we apply a convolutional neural network to TEM imaging. Using a dataset that includes short-exposure images and long-exposure images, we develop a pipeline for processing short-exposure images, based on end-to-end training. The quality of images acquired with a total dose of approximately 5 e- per pixel becomes comparable to that of images acquired with a total dose of approximately 1000 e- per pixel. Because the conversion time is approximately 8 ms, in situ observation at 125 fps is possible. This imaging technique enables in situ observation of electron-beam-sensitive specimens. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 238,604 |
2011.07120 | Streaming Attention-Based Models with Augmented Memory for End-to-End Speech Recognition | Attention-based models have been gaining popularity recently for their strong performance demonstrated in fields such as machine translation and automatic speech recognition. One major challenge of attention-based models is the need for access to the full sequence and the quadratically growing computational cost with respect to the sequence length. These characteristics pose challenges, especially for low-latency scenarios, where the system is often required to be streaming. In this paper, we build a compact and streaming speech recognition system on top of the end-to-end neural transducer architecture with attention-based modules augmented with convolution. The proposed system equips the end-to-end models with the streaming capability and reduces the large footprint of the streaming attention-based model using augmented memory. On the LibriSpeech dataset, our proposed system achieves word error rates of 2.7% on test-clean and 5.8% on test-other, to the best of our knowledge the lowest among streaming approaches reported so far. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 206,441 |
2202.06983 | Evolvability Degeneration in Multi-Objective Genetic Programming for Symbolic Regression | Genetic programming (GP) is one of the best approaches today to discover symbolic regression models. To find models that trade off accuracy and complexity, the non-dominated sorting genetic algorithm II (NSGA-II) is widely used. Unfortunately, it has been shown that NSGA-II can be inefficient: in early generations, low-complexity models over-replicate and take over most of the population. Consequently, studies have proposed different approaches to promote diversity. Here, we study the root of this problem, in order to design a superior approach. We find that the over-replication of low-complexity models is due to a lack of evolvability, i.e., the inability to produce offspring with improved accuracy. We therefore extend NSGA-II to track, over time, the evolvability of models of different levels of complexity. With this information, we limit how many models of each complexity level are allowed to survive the generation. We compare this new version of NSGA-II, evoNSGA-II, with seven existing multi-objective GP approaches on ten widely-used data sets, and find that evoNSGA-II is equal or superior to these approaches in almost all comparisons. Furthermore, our results confirm that evoNSGA-II behaves as intended: models that are more evolvable form the majority of the population. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 280,384 |
2502.09985 | On Volume Minimization in Conformal Regression | We study the question of volume optimality in split conformal regression, a topic still poorly understood in comparison to coverage control. Using the fact that the calibration step can be seen as an empirical volume minimization problem, we first derive a finite-sample upper-bound on the excess volume loss of the interval returned by the classical split method. This important quantity measures the difference in length between the interval obtained with the split method and the shortest oracle prediction interval. Then, we introduce EffOrt, a methodology that modifies the learning step so that the base prediction function is selected in order to minimize the length of the returned intervals. In particular, our theoretical analysis of the excess volume loss of the prediction sets produced by EffOrt reveals the links between the learning and calibration steps, and notably the impact of the choice of the function class of the base predictor. We also introduce Ad-EffOrt, an extension of the previous method, which produces intervals whose size adapts to the value of the covariate. Finally, we evaluate the empirical performance and the robustness of our methodologies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,691 |
2407.03755 | A Computer Vision Approach to Estimate the Localized Sea State | This research presents a novel application of computer vision (CV) and deep learning methods for real-time sea state recognition, aiming to contribute to improving the operational safety and energy efficiency of seagoing vessels, key factors in meeting the legislative carbon reduction targets. Our work focuses on utilizing sea images in operational envelopes captured by a single stationary camera mounted on the ship bridge. The collected images are used to train a deep learning model to automatically recognize the state of the sea based on the Beaufort scale. To recognize the sea state, we used 4 state-of-the-art deep neural networks with different characteristics that proved useful in various computer vision tasks: Resnet-101, NASNet, MobileNet_v2, and Transformer ViT-b32. Furthermore, we have defined a unique large-scale dataset, collected over a broad range of sea conditions from an ocean-going vessel prepared for machine learning. We used the transfer learning approach to fine-tune the models on our dataset. The obtained results demonstrate the potential for this approach to complement traditional methods, particularly where in-situ measurements are unfeasible or interpolated weather buoy data is insufficiently accurate. This study sets the groundwork for further development of sea state classification models to address recognized gaps in maritime research and enable safer and more efficient maritime operations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,275 |
2406.00272 | Temporally Consistent Object Editing in Videos using Extended Attention | Image generation and editing have seen a great deal of advancements with the rise of large-scale diffusion models that allow user control of different modalities such as text, mask, depth maps, etc. However, controlled editing of videos still lags behind. Prior work in this area has focused on using 2D diffusion models to globally change the style of an existing video. On the other hand, in many practical applications, editing localized parts of the video is critical. In this work, we propose a method to edit videos using a pre-trained inpainting image diffusion model. We systematically redesign the forward path of the model by replacing the self-attention modules with an extended version of attention modules that creates frame-level dependencies. In this way, we ensure that the edited information will be consistent across all the video frames no matter what the shape and position of the masked area is. We qualitatively compare our results with state-of-the-art in terms of accuracy on several video editing tasks like object retargeting, object replacement, and object removal tasks. Simulations demonstrate the superior performance of the proposed strategy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,772 |
1405.1279 | Generalized friendship paradox in networks with tunable degree-attribute correlation | One of the interesting phenomena due to topological heterogeneities in complex networks is the friendship paradox: your friends have on average more friends than you do. Recently, this paradox has been generalized for arbitrary node attributes, called the generalized friendship paradox (GFP). The origin of the GFP at the network level has been shown to be rooted in positive correlations between degrees and attributes. However, how the GFP holds for individual nodes needs to be understood in more detail. For this, we first analyze a solvable model to characterize the paradox holding probability of nodes for the uncorrelated case. Then we numerically study the correlated model of networks with tunable degree-degree and degree-attribute correlations. In contrast to the network level, we find at the individual level that the relevance of degree-attribute correlation to the paradox holding probability may depend on whether the network is assortative or disassortative. These findings help us to understand the interplay between topological structure and node attributes in complex networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 32,854 |
2306.13094 | RXs Directions based Codebook Solution for Passive RIS Beamforming | Recently, reconfigurable intelligent surfaces (RIS) have been widely deployed to overcome blockage issues and widen coverage, enabling superior-performance 6G networks. Mainly, systems use the RIS as an assistant to redirect the transmitter (TX) incident signal towards the receiver (RX) by configuring the RIS elements' amplitudes and phase shifts in a passive beamforming (PBF) process. Channel estimation (CE) based PBF schemes achieve optimal performance, but they need high overhead and time consumption, especially with a high number of RIS elements. Codebook (CB) based PBF solutions can be alternatives to overcome these issues by only searching through a limited set of reflection patterns (RPs) and determining the optimal one based on a predefined metric. However, they consume high power and time depending on the used CB size. In this work, we propose a direction based PBF (D-PBF) scheme, where we aim to map the RXs' directions to the codebook RPs and store this information in an updated database (DB). Hence, if a match between a coming RX and a particular RP exists, the proposed scheme will directly select this RP to configure the RIS elements; otherwise, it memorizes this codeword for future searching. Finally, if the matching fails, searching through the memorized RPs will be done to find the optimal one, and the DB is then updated accordingly. After a time period, which depends on the CB size, the DB will converge, and the D-PBF scheme will need no searching to select the optimal RP. Hence, the proposed scheme needs much lower overhead, power, and time compared to the CE and conventional CB based solutions, while obtaining acceptable performance in terms of effective rate. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 375,164 |
2405.05015 | Concrete Dense Network for Long-Sequence Time Series Clustering | Time series clustering is fundamental in data analysis for discovering temporal patterns. Despite recent advancements, learning cluster-friendly representations is still challenging, particularly with long and complex time series. Deep temporal clustering methods have been trying to integrate the canonical k-means into end-to-end training of neural networks but fall back on surrogate losses due to the non-differentiability of the hard cluster assignment, yielding sub-optimal solutions. In addition, the autoregressive strategy used in the state-of-the-art RNNs is subject to error accumulation and slow training, while recent research findings have revealed that Transformers are less effective due to time points lacking semantic meaning, to the permutation invariance of attention that discards the chronological order and high computation cost. In light of these observations, we present LoSTer which is a novel dense autoencoder architecture for the long-sequence time series clustering problem (LSTC) capable of optimizing the k-means objective via the Gumbel-softmax reparameterization trick and designed specifically for accurate and fast clustering of long time series. Extensive experiments on numerous benchmark datasets and two real-world applications prove the effectiveness of LoSTer over state-of-the-art RNNs and Transformer-based deep clustering methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 452,765 |
2405.20800 | Shape Constraints in Symbolic Regression using Penalized Least Squares | We study the addition of shape constraints (SC) and their consideration during the parameter identification step of symbolic regression (SR). SC serve as a means to introduce prior knowledge about the shape of the otherwise unknown model function into SR. Unlike previous works that have explored SC in SR, we propose minimizing SC violations during parameter identification using gradient-based numerical optimization. We test three algorithm variants to evaluate their performance in identifying three symbolic expressions from synthetically generated data sets. This paper examines two benchmark scenarios: one with varying noise levels and another with reduced amounts of training data. The results indicate that incorporating SC into the expression search is particularly beneficial when data is scarce. Compared to using SC only in the selection process, our approach of minimizing violations during parameter identification shows a statistically significant benefit in some of our test cases, without being significantly worse in any instance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 459,528 |
1908.03283 | An Insect-scale Self-sufficient Rolling Microrobot | We design an insect-sized rolling microrobot driven by continuously rotating wheels. It measures 18mm$\times$8mm$\times$8mm. There are 2 versions of the robot - a 96mg laser-powered one and a 130mg supercapacitor powered one. The robot can move at 27mm/s (1.5 body lengths per second) with wheels rotating at 300$^\circ$/s, while consuming an average power of 2.5mW. Neither version has any electrical wires coming out of it, and the supercapacitor powered robot is also self-sufficient, able to roll freely for 8 seconds after a single charge. Low-voltage electromagnetic actuators (1V-3V) along with a novel double-ratcheting mechanism enable the operation of this device. It is, to the best of our knowledge, the lightest and fastest self-sufficient rolling microrobot reported yet. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 141,207 |
1802.00981 | Contextual Bandit with Adaptive Feature Extraction | We consider an online decision making setting known as contextual bandit problem, and propose an approach for improving contextual bandit performance by using an adaptive feature extraction (representation learning) based on online clustering. Our approach starts with an off-line pre-training on unlabeled history of contexts (which can be exploited by our approach, but not by the standard contextual bandit), followed by an online selection and adaptation of encoders. Specifically, given an input sample (context), the proposed approach selects the most appropriate encoding function to extract a feature vector which becomes an input for a contextual bandit, and updates both the bandit and the encoding function based on the context and on the feedback (reward). Our experiments on a variety of datasets, and both in stationary and non-stationary environments of several kinds demonstrate clear advantages of the proposed adaptive representation learning over the standard contextual bandit based on "raw" input contexts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 89,516 |
2312.08132 | Ultra Low Complexity Deep Learning Based Noise Suppression | This paper introduces an innovative method for reducing the computational complexity of deep neural networks in real-time speech enhancement on resource-constrained devices. The proposed approach utilizes a two-stage processing framework, employing channelwise feature reorientation to reduce the computational load of convolutional operations. By combining this with a modified power law compression technique for enhanced perceptual quality, this approach achieves noise suppression performance comparable to state-of-the-art methods with significantly less computational requirements. Notably, our algorithm exhibits 3 to 4 times less computational complexity and memory usage than prior state-of-the-art approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,206 |
1905.08900 | Enhancing Domain Word Embedding via Latent Semantic Imputation | We present a novel method named Latent Semantic Imputation (LSI) to transfer external knowledge into semantic space for enhancing word embedding. The method integrates graph theory to extract the latent manifold structure of the entities in the affinity space and leverages non-negative least squares with standard simplex constraints and power iteration method to derive spectral embeddings. It provides an effective and efficient approach to combining entity representations defined in different Euclidean spaces. Specifically, our approach generates and imputes reliable embedding vectors for low-frequency words in the semantic space and benefits downstream language tasks that depend on word embedding. We conduct comprehensive experiments on a carefully designed classification problem and language modeling and demonstrate the superiority of the enhanced embedding via LSI over several well-known benchmark embeddings. We also confirm the consistency of the results under different parameter settings of our method. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 131,601 |
1908.04265 | Recursion, Probability, Convolution and Classification for Computations | The main motivation of this work was practical: to offer computationally and theoretically scalable ways of structuring large classes of computation. It started from attempts to optimize R code for machine learning/artificial intelligence algorithms on huge data sets that, due to their size, should be handled in an incremental (online) fashion. Our targets are large classes of relational (attribute based), mathematical (index based) or graph computations. We wanted to use powerful computation representations that emerged in AI (artificial intelligence)/ML (machine learning), such as BNs (Bayesian networks) and CNNs (convolution neural networks). For the classes of computation addressed by us, and for our HPC (high performance computing) needs, the current solutions for translating computations into such representations need to be extended. Our results show that the classes of computation targeted by us can be tree-structured, and a probability distribution (defining a DBN, i.e. Dynamic Bayesian Network) associated with them. Moreover, this DBN may be viewed as a recursive CNN (Convolution Neural Network). Within this tree-like structure, classification into classes of bounded size (the bound being parameterizable) may be performed. These results are at the core of very powerful, yet highly practical, algorithms for restructuring and parallelizing computations. The mathematical background required for an in-depth presentation exposing the full generality of our approach is the subject of a subsequent paper. In this paper, we work in a limited (but important) framework that can be understood with rudiments of linear algebra and graph theory. The focus is on applicability; most of this paper discusses the usefulness of our approach for solving hard compilation problems related to automatic parallelism. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 141,433 |
1703.04930 | On the Support Recovery of Jointly Sparse Gaussian Sources using Sparse Bayesian Learning | In this work, we provide non-asymptotic, probabilistic guarantees for successful recovery of the common nonzero support of jointly sparse Gaussian sources in the multiple measurement vector (MMV) problem. The support recovery problem is formulated as the marginalized maximum likelihood (or type-II ML) estimation of the variance hyperparameters of a joint sparsity inducing Gaussian prior on the source signals. We derive conditions under which the resulting nonconvex constrained optimization perfectly recovers the nonzero support of a joint-sparse Gaussian source ensemble with arbitrarily high probability. The support error probability decays exponentially with the number of MMVs at a rate that depends on the smallest restricted singular value and the nonnegative null space property of the self Khatri-Rao product of the sensing matrix. Our analysis confirms that nonzero supports of size as high as O($m^2$) are recoverable from $m$ measurements per sparse vector. Our derived sufficient conditions for support consistency of the proposed constrained type-II ML solution also guarantee the support consistency of any global solution of the multiple sparse Bayesian learning (M-SBL) optimization whose nonzero coefficients lie inside a bounded interval. For the case of noiseless measurements, we further show that a single MMV is sufficient for perfect recovery of the $k$-sparse support by M-SBL, provided all subsets of $k + 1$ columns of the sensing matrix are linearly independent. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 69,992 |
2107.07653 | TAPEX: Table Pre-training via Learning a Neural SQL Executor | Recent progress in language model pre-training has achieved a great success via leveraging large-scale unstructured textual data. However, it is still a challenge to apply pre-training on structured tabular data due to the absence of large-scale high-quality tabular data. In this paper, we propose TAPEX to show that table pre-training can be achieved by learning a neural SQL executor over a synthetic corpus, which is obtained by automatically synthesizing executable SQL queries and their execution outputs. TAPEX addresses the data scarcity challenge via guiding the language model to mimic a SQL executor on the diverse, large-scale and high-quality synthetic corpus. We evaluate TAPEX on four benchmark datasets. Experimental results demonstrate that TAPEX outperforms previous table pre-training approaches by a large margin and achieves new state-of-the-art results on all of them. This includes the improvements on the weakly-supervised WikiSQL denotation accuracy to 89.5% (+2.3%), the WikiTableQuestions denotation accuracy to 57.5% (+4.8%), the SQA denotation accuracy to 74.5% (+3.5%), and the TabFact accuracy to 84.2% (+3.2%). To our knowledge, this is the first work to exploit table pre-training via synthetic executable programs and to achieve new state-of-the-art results on various downstream tasks. Our code can be found at https://github.com/microsoft/Table-Pretraining. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 246,482 |
2106.15314 | The cityseer Python package for pedestrian-scale network-based urban analysis | cityseer-api is a Python package consisting of computational tools for fine-grained street-network and land-use analysis, helpful in assessing the morphological precursors to vibrant neighbourhoods. It is underpinned by network-based methods developed specifically for urban analysis at the pedestrian scale. cityseer-api computes a variety of node and segment-based network centrality methods, land-use accessibility and mixed-use measures, and statistical aggregations. Accessibilities and aggregations are computed dynamically over the street-network while taking walking distance thresholds and the direction of approach into account, and can optionally incorporate spatial impedances and network decomposition to increase spatial precision. The use of Python facilitates compatibility with popular computational tools for network manipulation (NetworkX), geospatial topology (shapely), geospatial data state management (GeoPandas), and the NumPy stack of scientific packages. The provision of robust network cleaning tools aids the use of OpenStreetMap data for network analysis. Underlying loop-intensive algorithms are implemented in Numba JIT compiled code so that the methods scale efficiently to larger cities and regions. Online documentation is available from https://cityseer.benchmarkurbanism.com, and the Github repository is available at https://github.com/benchmark-urbanism/cityseer. Example notebooks are available at https://cityseer.benchmarkurbanism.com/examples/. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 243,704 |
2106.03352 | The Power of Exploiter: Provable Multi-Agent RL in Large State Spaces | Modern reinforcement learning (RL) commonly engages practical problems with large state spaces, where function approximation must be deployed to approximate either the value function or the policy. While recent progress in RL theory addresses a rich set of RL problems with general function approximation, such successes are mostly restricted to the single-agent setting. It remains elusive how to extend these results to multi-agent RL, especially due to the new challenges arising from its game-theoretical nature. This paper considers two-player zero-sum Markov Games (MGs). We propose a new algorithm that can provably find the Nash equilibrium policy using a polynomial number of samples, for any MG with low multi-agent Bellman-Eluder dimension -- a new complexity measure adapted from its single-agent version (Jin et al., 2021). A key component of our new algorithm is the exploiter, which facilitates the learning of the main player by deliberately exploiting her weakness. Our theoretical framework is generic and applies to a wide range of models, including but not limited to tabular MGs, MGs with linear or kernel function approximation, and MGs with rich observations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 239,283
2210.15230 | How well can Text-to-Image Generative Models understand Ethical Natural
Language Interventions? | Text-to-image generative models have achieved unprecedented success in generating high-quality images based on natural language descriptions. However, it is shown that these models tend to favor specific social groups when prompted with neutral text descriptions (e.g., 'a photo of a lawyer'). Following Zhao et al. (2021), we study the effect on the diversity of the generated images when adding an ethical intervention that supports equitable judgment (e.g., 'if all individuals can be a lawyer irrespective of their gender') in the input prompts. To this end, we introduce an Ethical NaTural Language Interventions in Text-to-Image GENeration (ENTIGEN) benchmark dataset to evaluate the change in image generations conditional on ethical interventions across three social axes -- gender, skin color, and culture. Through the ENTIGEN framework, we find that the generations from minDALL.E, DALL.E-mini and Stable Diffusion cover diverse social groups while preserving the image quality. Preliminary studies indicate that a large change in the model predictions is triggered by certain phrases such as 'irrespective of gender' in the context of gender bias in the ethical interventions. We release code and annotated data at https://github.com/Hritikbansal/entigen_emnlp. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | true | 326,868
1302.4557 | Extracting Three Dimensional Surface Model of Human Kidney from the
Visible Human Data Set using Free Software | A three dimensional digital model of a representative human kidney is needed for a surgical simulator that is capable of simulating a laparoscopic surgery involving the kidney. Buying a three dimensional computer model of a representative human kidney, or reconstructing a human kidney from an image sequence using commercial software, both involve (sometimes a significant amount of) money. In this paper, the author has shown that one can obtain a three dimensional surface model of a human kidney by making use of images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Both the images from the Visible Human Data Set and the software packages used here cost nothing. Hence, the practice of extracting the geometry of a representative human kidney, as illustrated in the present work, could be a free alternative to the use of expensive commercial software or to the purchase of a digital model. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,164
1906.02399 | SparseSense: Human Activity Recognition from Highly Sparse Sensor
Data-streams Using Set-based Neural Networks | Batteryless, or so-called passive, wearables are providing new and innovative methods for human activity recognition (HAR), especially in healthcare applications for older people. Passive sensors are low cost, lightweight, unobtrusive and desirably disposable; attractive attributes for healthcare applications in hospitals and nursing homes. Despite the compelling propositions for sensing applications, the data streams from these sensors are characterised by high sparsity---the time intervals between sensor readings are irregular, while the number of readings per unit time is often limited. In this paper, we rigorously explore the problem of learning activity recognition models from temporally sparse data. We describe how to learn directly from sparse data using a deep learning paradigm in an end-to-end manner. We demonstrate significant classification performance improvements on real-world passive sensor datasets from older people over the state-of-the-art deep learning human activity recognition models. Further, we provide insights into the model's behaviour through complementary experiments on a benchmark dataset and visualisation of the learned activity feature spaces. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 134,039
2501.01062 | Fides: Scalable Censorship-Resistant DAG Consensus via Trusted
Components | Recently, consensus protocols based on Directed Acyclic Graph (DAG) have gained significant attention due to their potential to build robust blockchain systems, particularly in asynchronous networks. In this paper, we propose Fides, an asynchronous DAG-based BFT consensus protocol that leverages Trusted Execution Environments (TEEs) to tackle three major scalability and security challenges faced by existing protocols: (i) the need for a larger quorum size (i.e., at least 3x larger) to tolerate Byzantine replicas, (ii) high communication costs and reliance on expensive cryptographic primitives (i.e., global common coin) to reach agreement in asynchronous networks, and (iii) poor censorship resilience undermining the liveness guarantee. Specifically, Fides adopts four trusted components-Reliable Broadcast, Vertex Validation, Common Coin, and Transaction Disclosure-within TEEs. Incorporating these components enables Fides to achieve linear message complexity, guaranteed censorship resilience, 2x larger quorum size, and lightweight common coin usage. Besides, abstracting these essential components rather than porting the entire protocol into TEE can significantly reduce the Trusted Computing Base (TCB). Experimental evaluations of Fides in local and geo-distributed networks demonstrate its superior performance compared to established state-of-the-art protocols such as Tusk, RCC, HotStuff, and PBFT. The results indicate that Fides achieves a throughput of 400k transactions per second in a geo-distributed network and 810k transactions per second in a local network. Our analysis further explores the protocol's overhead, highlighting its suitability and effectiveness for practical deployment in real-world blockchain systems. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 521,911 |
1703.00377 | Gradient Boosting on Stochastic Data Streams | Boosting is a popular ensemble algorithm that generates more powerful learners by linearly combining base models from a simpler hypothesis class. In this work, we investigate the problem of adapting batch gradient boosting for minimizing convex loss functions to the online setting, where the loss at each iteration is i.i.d. sampled from an unknown distribution. To generalize from batch to online, we first introduce the definition of an online weak learning edge, with which, for strongly convex and smooth loss functions, we present an algorithm, Streaming Gradient Boosting (SGB), with exponential shrinkage guarantees in the number of weak learners. We further present an adaptation of SGB to optimize non-smooth loss functions, for which we derive an O(ln N/N) convergence rate. We also show that our analysis extends to the adversarial online learning setting under the stronger assumption that the online weak learning edge holds in the adversarial setting. We finally demonstrate experimental results showing that, in practice, our algorithms can achieve results competitive with classic gradient boosting while using less computation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 69,147
2102.09700 | AI-SARAH: Adaptive and Implicit Stochastic Recursive Gradient Methods | We present AI-SARAH, a practical variant of SARAH. As a variant of SARAH, this algorithm employs the stochastic recursive gradient yet adjusts step-size based on local geometry. AI-SARAH implicitly computes step-size and efficiently estimates local Lipschitz smoothness of stochastic functions. It is fully adaptive, tune-free, straightforward to implement, and computationally efficient. We provide technical insight and intuitive illustrations on its design and convergence. We conduct extensive empirical analysis and demonstrate its strong performance compared with its classical counterparts and other state-of-the-art first-order methods in solving convex machine learning problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 220,860 |
2108.02566 | Missingness Augmentation: A General Approach for Improving Generative
Imputation Models | Missing data imputation is a fundamental problem in data analysis, and many studies have been conducted to improve its performance by exploring model structures and learning procedures. However, data augmentation, as a simple yet effective method, has not received enough attention in this area. In this paper, we propose a novel data augmentation method called Missingness Augmentation (MisA) for generative imputation models. Our approach dynamically produces incomplete samples at each epoch by utilizing the generator's output, constraining the augmented samples using a simple reconstruction loss, and combining this loss with the original loss to form the final optimization objective. As a general augmentation technique, MisA can be easily integrated into generative imputation frameworks, providing a simple yet effective way to enhance their performance. Experimental results demonstrate that MisA significantly improves the performance of many recently proposed generative imputation models on a variety of tabular and image datasets. The code is available at \url{https://github.com/WYu-Feng/Missingness-Augmentation}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 249,371 |
2007.11469 | Deep Models and Shortwave Infrared Information to Detect Face
Presentation Attacks | This paper addresses the problem of face presentation attack detection using different image modalities. In particular, the usage of short wave infrared (SWIR) imaging is considered. Face presentation attack detection is performed using recent models based on Convolutional Neural Networks, using only carefully selected SWIR image differences as input. Conducted experiments show superior performance over similar models acting on either color images or on a combination of different modalities (visible, NIR, thermal and depth), as well as over an SVM-based classifier acting on SWIR image differences. Experiments have been carried out on a new public and freely available database containing a wide variety of attacks. Video sequences have been recorded with several sensors, resulting in 14 different streams in the visible, NIR, SWIR and thermal spectra, as well as depth data. The best proposed approach is able to almost perfectly detect all impersonation attacks while ensuring low bonafide classification errors. On the other hand, the obtained results show that obfuscation attacks are more difficult to detect. We hope that the proposed database will foster research on this challenging problem. Finally, all the code and instructions to reproduce the presented experiments are made available to the research community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 188,563
1609.01499 | Depth Estimation Through a Generative Model of Light Field Synthesis | Light field photography captures rich structural information that may facilitate a number of traditional image processing and computer vision tasks. A crucial ingredient in such endeavors is accurate depth recovery. We present a novel framework that allows the recovery of a high quality continuous depth map from light field data. To this end we propose a generative model of a light field that is fully parametrized by its corresponding depth map. The model allows for the integration of powerful regularization techniques such as a non-local means prior, facilitating accurate depth map estimation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 60,606 |
2102.13144 | The Magic of Superposition: A Survey on Simultaneous Transmission Based
Wireless Systems | In conventional communication systems, any interference between two communicating points is regarded as unwanted noise since it distorts the received signals. On the other hand, allowing simultaneous transmission and intentionally accepting the interference of signals, and even benefiting from it, have been considered for a range of wireless applications. As prominent examples, non-orthogonal multiple access (NOMA), joint source-channel coding, and computation codes are designed to exploit this scenario. They have also inspired many other fundamental works, from network coding to consensus algorithms. In particular, federated learning is an emerging technology that can be applied to distributed machine learning networks by allowing simultaneous transmission. Although various simultaneous transmission applications exist independently in the literature, their main contributions are all based on the same principle: the superposition property. In this survey, we aim to emphasize the connections between these studies and provide a guide for the readers on the wireless communication techniques that benefit from the superposition of signals. We classify the existing literature depending on its purpose and application area and present its contributions. The survey shows that simultaneous transmission can bring scalability, security, low latency, low complexity and energy efficiency to certain distributed wireless scenarios, which are inevitable with emerging Internet of Things (IoT) applications. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 221,960
2003.05570 | Smart Home Energy Management System for Power System Resiliency | The need for resiliency of electricity supply is increasing due to increasing frequency of natural disasters---such as hurricanes---that disrupt supply from the power grid. Rooftop solar photovoltaic (PV) panels together with batteries can provide resiliency in many scenarios. Without intelligent and automated decision making that can trade off conflicting requirements, a large PV system and a large battery is needed to provide meaningful resiliency. By using forecast of solar generation and household demand, an intelligent decision maker can operate the equipment (battery and critical loads) to ensure that the critical loads are serviced to the maximum duration possible. With the aid of such an intelligent control system, a smaller (and thus lower cost) system can service the primary loads for the same duration that a much larger system will be needed to service otherwise. In this paper we propose such an intelligent control system. A model predictive control (MPC) architecture is used that uses available measurements and forecasts to make optimal decisions for batteries and critical loads in real time. The optimization problem is formulated as a MILP (mixed integer linear program) due to the on/off decisions for the loads. Performance is compared with a non-intelligent baseline controller, for a PV-battery system chosen carefully for a single family house in Florida. Simulations are conducted for a one week period during hurricane Irma in 2017. Simulations show that the cost of the PV+battery system to provide a certain resiliency performance, duration the primary load can be serviced successfully, can be halved by the proposed control system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 167,886 |
1610.03932 | The Curvature-Augmented Closest Point Method with Vesicle
Inextensibility Application | The Closest Point method, initially developed by Ruuth and Merriman, allows for the numerical solution of surface partial differential equations without the need for a parameterization of the surface itself. Surface quantities are embedded into the surrounding domain by assigning each value at a given spatial location to the corresponding value at the closest point on the surface. This embedding allows for surface derivatives to be replaced by their Cartesian counterparts (e.g. $\nabla_s = \nabla$). This equivalence is only valid on the surface, and thus, interpolation is used to enforce what is known as the side condition away from the surface. To improve upon the method, this work derives an operator embedding that incorporates curvature information, making it valid in a neighborhood of the surface. With this, direct enforcement of the side condition is no longer needed. Comparisons in $\mathbb{R}^2$ and $\mathbb{R}^3$ show that the resulting Curvature-Augmented Closest Point method has better accuracy and requires less memory, through increased matrix sparsity, than the Closest Point method, while maintaining similar matrix condition numbers. To demonstrate the utility of the method in a physical application, simulations of inextensible, bi-lipid vesicles evolving toward equilibrium shapes are also included. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 62,316 |
2107.10834 | Query2Label: A Simple Transformer Way to Multi-Label Classification | This paper presents a simple and effective approach to solving the multi-label classification problem. The proposed approach leverages Transformer decoders to query the existence of a class label. The use of Transformer is rooted in the need of extracting local discriminative features adaptively for different labels, which is a strongly desired property due to the existence of multiple objects in one image. The built-in cross-attention module in the Transformer decoder offers an effective way to use label embeddings as queries to probe and pool class-related features from a feature map computed by a vision backbone for subsequent binary classifications. Compared with prior works, the new framework is simple, using standard Transformers and vision backbones, and effective, consistently outperforming all previous works on five multi-label classification data sets, including MS-COCO, PASCAL VOC, NUS-WIDE, and Visual Genome. Particularly, we establish $91.3\%$ mAP on MS-COCO. We hope its compact structure, simple implementation, and superior performance serve as a strong baseline for multi-label classification tasks and future studies. The code will be available soon at https://github.com/SlongLiu/query2labels. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 247,408 |
1701.02477 | Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic
Speech Recognition | Multi-task learning (MTL) involves the simultaneous training of two or more related tasks over shared representations. In this work, we apply MTL to audio-visual automatic speech recognition (AV-ASR). Our primary task is to learn a mapping between audio-visual fused features and frame labels obtained from an acoustic GMM/HMM model. This is combined with an auxiliary task which maps visual features to frame labels obtained from a separate visual GMM/HMM model. The MTL model is tested at various levels of babble noise and the results are compared with a baseline hybrid DNN-HMM AV-ASR model. Our results indicate that MTL is especially useful at higher levels of noise. Compared to the baseline, up to 7\% relative improvement in WER is reported at -3 dB SNR. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 66,557
2310.03964 | A Learnable Counter-condition Analysis Framework for Functional
Connectivity-based Neurological Disorder Diagnosis | To understand the biological characteristics of neurological disorders with functional connectivity (FC), recent studies have widely utilized deep learning-based models to identify the disease and conducted post-hoc analyses via explainable models to discover disease-related biomarkers. Most existing frameworks consist of three stages, namely, feature selection, feature extraction for classification, and analysis, where each stage is implemented separately. However, if the results at each stage lack reliability, this can cause misdiagnosis and incorrect analysis in subsequent stages. In this study, we propose a novel unified framework that systematically integrates diagnosis (i.e., feature selection and feature extraction) and explanation. Notably, we devised an adaptive attention network as a feature selection approach to identify individual-specific disease-related connections. We also propose a functional network relational encoder that summarizes the global topological properties of FC by learning the inter-network relations without pre-defined edges between functional networks. Last but not least, our framework provides a novel explanatory power for neuroscientific interpretation, also termed counter-condition analysis. We simulated the FC that reverses the diagnostic information (i.e., counter-condition FC): converting a normal brain to be abnormal and vice versa. We validated the effectiveness of our framework by using two large resting-state functional magnetic resonance imaging (fMRI) datasets, Autism Brain Imaging Data Exchange (ABIDE) and REST-meta-MDD, and demonstrated that our framework outperforms other competing methods for disease identification. Furthermore, we analyzed the disease-related neurological patterns based on counter-condition analysis. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 397,489
1804.01117 | Visual Object Categorization Based on Hierarchical Shape Motifs Learned
From Noisy Point Cloud Decompositions | Object shape is a key cue that contributes to the semantic understanding of objects. In this work we focus on the categorization of real-world object point clouds to particular shape types. Therein surface description and representation of object shape structure have significant influence on shape categorization accuracy, when dealing with real-world scenes featuring noisy, partial and occluded object observations. An unsupervised hierarchical learning procedure is utilized here to symbolically describe surface characteristics on multiple semantic levels. Furthermore, a constellation model is proposed that hierarchically decomposes objects. The decompositions are described as constellations of symbols (shape motifs) in a gradual order, hence reflecting shape structure from local to global, i.e., from parts over groups of parts to entire objects. The combination of this multi-level description of surfaces and the hierarchical decomposition of shapes leads to a representation which allows to conceptualize shapes. An object discrimination has been observed in experiments with seven categories featuring instances with sensor noise, occlusions as well as inter-category and intra-category similarities. Experiments include the evaluation of the proposed description and shape decomposition approach, and comparisons to Fast Point Feature Histograms, a Vocabulary Tree and a neural network-based Deep Learning method. Furthermore, experiments are conducted with alternative datasets which analyze the generalization capability of the proposed approach. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 94,175 |
1809.06582 | Symbolic Tensor Neural Networks for Digital Media - from Tensor
Processing via BNF Graph Rules to CREAMS Applications | This tutorial material on Convolutional Neural Networks (CNN) and its applications in digital media research is based on the concept of Symbolic Tensor Neural Networks. The set of STNN expressions is specified in Backus-Naur Form (BNF) which is annotated by constraints typical for labeled acyclic directed graphs (DAG). The BNF induction begins from a collection of neural unit symbols with extra (up to five) decoration fields (including tensor depth and sharing fields). The inductive rules provide not only the general graph structure but also the specific shortcuts for residual blocks of units. A syntactic mechanism for network fragments modularization is introduced via user defined units and their instances. Moreover, the dual BNF rules are specified in order to generate the Dual Symbolic Tensor Neural Network (DSTNN). The joined interpretation of STNN and DSTNN provides the correct flow of gradient tensors, back propagated at the training stage. The proposed symbolic representation of CNNs is illustrated for six generic digital media applications (CREAMS): Compression, Recognition, Embedding, Annotation, 3D Modeling for human-computer interfacing, and data Security based on digital media objects. In order to make the CNN description and its gradient flow complete, for all presented applications, the symbolic representations of mathematically defined loss/gain functions and gradient flow equations for all used core units, are given. The tutorial is to convince the reader that STNN is not only a convenient symbolic notation for public presentations of CNN based solutions for CREAMS problems but also that it is a design blueprint with a potential for automatic generation of application source code. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 108,094 |
2105.10227 | Random Hash Code Generation for Cancelable Fingerprint Templates using
Vector Permutation and Shift-order Process | Cancelable biometric techniques have been used to prevent the compromise of biometric data by generating and using their corresponding cancelable templates for user authentication. However, the non-invertible distance preserving transformation methods employed in various schemes are often vulnerable to information leakage since matching is performed in the transformed domain. In this paper, we propose a non-invertible distance preserving scheme based on vector permutation and a shift-order process. First, the dimension of the feature vectors is reduced using kernelized principal component analysis (KPCA) prior to randomly permuting the extracted vector features. A shift-order process is then applied to the generated features in order to achieve non-invertibility and combat similarity-based attacks. The generated hash codes are resilient to different security and privacy attacks whilst fulfilling the major revocability and unlinkability requirements. Experimental evaluation conducted on 6 datasets of FVC2002 and FVC2004 shows that the proposed scheme achieves higher accuracy than other existing state-of-the-art schemes. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 236,320
2208.10813 | Unsupervised Question Answering via Answer Diversifying | Unsupervised question answering is an attractive task due to its independence from labeled data. Previous works usually make use of heuristic rules as well as pre-trained models to construct data and train QA models. However, most of these works regard the named entity (NE) as the only answer type, which ignores the high diversity of answers in the real world. To tackle this problem, we propose a novel unsupervised method by diversifying answers, named DiverseQA. Specifically, the proposed method is composed of three modules: data construction, data augmentation and a denoising filter. Firstly, the data construction module extends the extracted named entity into a longer sentence constituent as the new answer span to construct a QA dataset with diverse answers. Secondly, the data augmentation module adopts an answer-type-dependent data augmentation process via adversarial training at the embedding level. Thirdly, the denoising filter module is designed to alleviate the noise in the constructed data. Extensive experiments show that the proposed method outperforms previous unsupervised models on five benchmark datasets, including SQuADv1.1, NewsQA, TriviaQA, BioASQ, and DuoRC. Besides, the proposed method shows strong performance in the few-shot learning setting. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 314,214
1804.04128 | Coloring with Words: Guiding Image Colorization Through Text-based
Palette Generation | This paper proposes a novel approach to generate multiple color palettes that reflect the semantics of input text and then colorize a given grayscale image according to the generated color palette. In contrast to existing approaches, our model can understand rich text, whether it is a single word, a phrase, or a sentence, and generate multiple possible palettes from it. For this task, we introduce our manually curated dataset called Palette-and-Text (PAT). Our proposed model called Text2Colors consists of two conditional generative adversarial networks: the text-to-palette generation networks and the palette-based colorization networks. The former captures the semantics of the text input and produce relevant color palettes. The latter colorizes a grayscale image using the generated color palette. Our evaluation results show that people preferred our generated palettes over ground truth palettes and that our model can effectively reflect the given palette when colorizing an image. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,765 |
1910.05453 | vq-wav2vec: Self-Supervised Learning of Discrete Speech Representations | We propose vq-wav2vec to learn discrete representations of audio segments through a wav2vec-style self-supervised context prediction task. The algorithm uses either a gumbel softmax or online k-means clustering to quantize the dense representations. Discretization enables the direct application of algorithms from the NLP community which require discrete inputs. Experiments show that BERT pre-training achieves a new state of the art on TIMIT phoneme classification and WSJ speech recognition. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 149,068 |
2006.16309 | Adversarial Learning for Debiasing Knowledge Graph Embeddings | Knowledge Graphs (KG) are gaining increasing attention in both academia and industry. Despite their diverse benefits, recent research has identified social and cultural biases embedded in the representations learned from KGs. Such biases can have detrimental consequences for different populations and minority groups as applications of KG begin to intersect and interact with social spheres. This paper aims at identifying and mitigating such biases in Knowledge Graph (KG) embeddings. As a first step, we explore popularity bias -- the relationship between node popularity and link prediction accuracy. In case of node2vec graph embeddings, we find that prediction accuracy of the embedding is negatively correlated with the degree of the node. However, in case of knowledge-graph embeddings (KGE), we observe an opposite trend. As a second step, we explore gender bias in KGE, and a careful examination of popular KGE algorithms suggests that a sensitive attribute such as the gender of a person can be predicted from the embedding. This implies that such biases in popular KGs are captured by the structural properties of the embedding. As a preliminary solution to debiasing KGs, we introduce a novel framework to filter out the sensitive attribute information from the KG embeddings, which we call FAN (Filtering Adversarial Network). We also suggest the applicability of FAN for debiasing other network embeddings which could be explored in future work. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,762 |
2412.20403 | A Novel Supervisory Control Algorithm to Avoid Deadlock in a
Manufacturing System Based on Petri Net in Presence of Resource Failure | It is well established that resource failure, including robots and machines, in a manufacturing system can result in deadlocks. This issue not only hampers the system's performance but can also inflict significant damage on the manufacturing process. In this paper, we present a new algorithm, developed through modeling of a manufacturing system using a Petri net, that ensures the liveness of the net in the event of such a failure. To detect possible failures, we first design a recovery subnet that is integrated into the resource. Next, we analyze the effects of failures on each state of the network to identify forbidden states. Finally, we propose an algorithm that optimally adds control places and establishes new constant vectors within the network, enabling effective management of remaining resources across different parts of the net. The proposed algorithm has been implemented in a system featuring three manufacturing lines, demonstrating its error-free operation while ensuring key properties such as boundedness, liveness, and performance continuity within the net. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 521,212 |
1909.03350 | An Algorithm for Multi-Attribute Diverse Matching | Bipartite b-matching, where agents on one side of a market are matched to one or more agents or items on the other, is a classical model that is used in myriad application areas such as healthcare, advertising, education, and general resource allocation. Traditionally, the primary goal of such models is to maximize a linear function of the constituent matches (e.g., linear social welfare maximization) subject to some constraints. Recent work has studied a new goal of balancing whole-match diversity and economic efficiency, where the objective is instead a monotone submodular function over the matching. Basic versions of this problem are solvable in polynomial time. In this work, we prove that the problem of simultaneously maximizing diversity along several features (e.g., country of citizenship, gender, skills) is NP-hard. To address this problem, we develop the first combinatorial algorithm that constructs provably-optimal diverse b-matchings in pseudo-polynomial time. We also provide a Mixed-Integer Quadratic formulation for the same problem and show that our method guarantees optimal solutions and takes less computation time for a reviewer assignment application. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 144,445 |
1010.4726 | Information Maximization Fails to Maximize Expected Utility in a Simple
Foraging Model | Information theory has explained the organization of many biological phenomena, from the physiology of sensory receptive fields to the variability of certain DNA sequence ensembles. Some scholars have proposed that information should provide the central explanatory principle in biology, in the sense that any behavioral strategy that is optimal for an organism's survival must necessarily involve efficient information processing. We challenge this view by providing a counterexample. We present an analytically tractable model for a particular instance of a perception-action loop: a creature searching for a food source confined to a one-dimensional ring world. The model incorporates the statistical structure of the creature's world, the effects of the creature's actions on that structure, and the creature's strategic decision process. The model takes the form of a Markov process on an infinite dimensional state space. To analyze it we construct an exact coarse graining that reduces the model to a Markov process on a finite number of "information states". This technique allows us to make quantitative comparisons between the performance of an information-theoretically optimal strategy with other candidate strategies on a food gathering task. We find that: 1. Information optimal search does not necessarily optimize utility (expected food gain). 2. The rank ordering of search strategies by information performance does not predict their ordering by expected food obtained. 3. The relative advantage of different strategies depends on the statistical structure of the environment, in particular the variability of motion of the source. We conclude that there is no simple relationship between information and utility. Behavioral optimality does not imply information efficiency, nor is there a simple tradeoff between gaining information about a food source versus obtaining the food itself. 
| false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,990 |
2103.11226 | Demystifying the Effects of Non-Independence in Federated Learning | Federated Learning (FL) enables statistical models to be built on user-generated data without compromising data security and user privacy. For this reason, FL is well suited for on-device learning from mobile devices where data is abundant and highly privatized. Constrained by the temporal availability of mobile devices, only a subset of devices is accessible to participate in the iterative protocol consisting of training and aggregation. In this study, we take a step toward better understanding the effect of non-independent data distributions arising from block-cyclic sampling. By conducting extensive experiments on visual classification, we measure the effects of block-cyclic sampling (both standalone and in combination with non-balanced block distributions). Specifically, we measure the alterations induced by block-cyclic sampling from the perspective of accuracy, fairness, and convergence rate. Experimental results indicate robustness to cycling over a two-block structure, e.g., due to time zones. In contrast, drawing data samples dependently from a multi-block structure significantly degrades the performance and rate of convergence by up to 26%. Moreover, we find that this performance degeneration is further aggravated by unbalanced block distributions to a point that can no longer be adequately compensated by higher communication and more frequent synchronization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 225,721 |
1301.7363 | Empirical Analysis of Predictive Algorithms for Collaborative Filtering | Collaborative filtering or recommender systems use a database about user preferences to predict additional topics or products a new user might like. In this paper we describe several algorithms designed for this task, including techniques based on correlation coefficients, vector-based similarity calculations, and statistical Bayesian methods. We compare the predictive accuracy of the various methods in a set of representative problem domains. We use two basic classes of evaluation metrics. The first characterizes accuracy over a set of individual predictions in terms of average absolute deviation. The second estimates the utility of a ranked list of suggested items. This metric uses an estimate of the probability that a user will see a recommendation in an ordered list. Experiments were run for datasets associated with 3 application areas, 4 experimental protocols, and the 2 evaluation metrics for the various algorithms. Results indicate that for a wide range of conditions, Bayesian networks with decision trees at each node and correlation methods outperform Bayesian-clustering and vector-similarity methods. Between correlation and Bayesian networks, the preferred method depends on the nature of the dataset, nature of the application (ranked versus one-by-one presentation), and the availability of votes with which to make predictions. Other considerations include the size of database, speed of predictions, and learning time. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 21,597 |
2111.12046 | Distributed energy control in electric energy systems | The power interactions of any component in electric energy systems with the rest of the system happen naturally, as governed by the energy conservation principles. There may, however, occur instances when the rate at which power gets generated by one component through local energy conversion is not exactly the same as that absorbed by rest of the system. This is when instabilities get induced. To model and control such instabilities, this paper generalizes the notion of interaction variable used to characterize diverse system components in a unified manner. The same variable captures aggregate system-wide effects and sets reference points for multi-layered distributed output feedback control. It has a physical interpretation of instantaneous power and generalized reactive power. The higher layer design utilizes the interactive energy state-space model to derive intermediate reactive power control, which becomes a control command to the lower layer physical model. This command is implemented using either Feedback Linearizing Control (FBLC) or Sliding Mode Control (SMC), for which sufficient stability conditions are stated. This paper claims that the proposed design is fundamental to aligning dynamic interactions between components for stability and feasibility. Without loss of generality, we utilize a simple RLC circuit with a controllable voltage source for illustrations, which is a simplified representation of any controllable component in microgrids. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 267,850 |
2501.17903 | Free Agent in Agent-Based Mixture-of-Experts Generative AI Framework | Multi-agent systems commonly distribute tasks among specialized, autonomous agents, yet they often lack mechanisms to replace or reassign underperforming agents in real time. Inspired by the free-agency model of Major League Baseball, the Reinforcement Learning Free Agent (RLFA) algorithm introduces a reward-based mechanism to detect and remove agents exhibiting persistent underperformance and seamlessly insert more capable ones. Each agent internally uses a mixture-of-experts (MoE) approach, delegating incoming tasks to specialized sub-models under the guidance of a gating function. A primary use case is fraud detection, where RLFA promptly swaps out an agent whose detection accuracy dips below a preset threshold. A new agent is tested in a probationary mode, and upon demonstrating superior performance, fully replaces the underperformer. This dynamic, free-agency cycle ensures sustained accuracy, quicker adaptation to emerging threats, and minimal disruption to ongoing operations. By continually refreshing its roster of agents, the system fosters ongoing improvements and more resilient collaboration in multi-agent Generative AI environments. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 528,509 |
2409.16950 | Dynamic Obstacle Avoidance through Uncertainty-Based Adaptive Planning
with Diffusion | By framing reinforcement learning as a sequence modeling problem, recent work has enabled the use of generative models, such as diffusion models, for planning. While these models are effective in predicting long-horizon state trajectories in deterministic environments, they face challenges in dynamic settings with moving obstacles. Effective collision avoidance demands continuous monitoring and adaptive decision-making. While replanning at every timestep could ensure safety, it introduces substantial computational overhead due to the repetitive prediction of overlapping state sequences -- a process that is particularly costly with diffusion models, known for their intensive iterative sampling procedure. We propose an adaptive generative planning approach that dynamically adjusts replanning frequency based on the uncertainty of action predictions. Our method minimizes the need for frequent, computationally expensive, and redundant replanning while maintaining robust collision avoidance performance. In experiments, we obtain a 13.5% increase in the mean trajectory length and a 12.7% increase in mean reward over long-horizon planning, indicating a reduction in collision rates and an improved ability to navigate the environment safely. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 491,591 |
2311.16171 | Multi-Agent Learning of Efficient Fulfilment and Routing Strategies in
E-Commerce | This paper presents an integrated algorithmic framework for minimising product delivery costs in e-commerce (known as the cost-to-serve or C2S). One of the major challenges in e-commerce is the large volume of spatio-temporally diverse orders from multiple customers, each of which has to be fulfilled from one of several warehouses using a fleet of vehicles. This results in two levels of decision-making: (i) selection of a fulfillment node for each order (including the option of deferral to a future time), and then (ii) routing of vehicles (each of which can carry multiple orders originating from the same warehouse). We propose an approach that combines graph neural networks and reinforcement learning to train the node selection and vehicle routing agents. We include real-world constraints such as warehouse inventory capacity, vehicle characteristics such as travel times, service times, carrying capacity, and customer constraints including time windows for delivery. The complexity of this problem arises from the fact that outcomes (rewards) are driven both by the fulfillment node mapping as well as the routing algorithms, and are spatio-temporally distributed. Our experiments show that this algorithmic pipeline outperforms pure heuristic policies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 410,807 |
1811.04943 | Modeling and Performance of Uplink Cache-Enabled Massive MIMO
Heterogeneous Networks | A significant burden is imposed on wireless networks by the uploading of user-generated content to the Internet by means of applications such as social media. To cope with this mobile data tsunami, we develop a novel MIMO network architecture with randomly located base stations (BSs) equipped with a large number of antennas and employing cache-enabled \textit{uplink} transmission. In particular, we formulate a scenario, where the users upload their content to their strongest BSs, which are Poisson point process (PPP) distributed. In addition, the BSs, exploiting the benefits of massive MIMO, upload their contents to the core network by means of a finite-rate backhaul. After introducing the caching policies, in which we adopt the modified von Mises distribution as the popularity distribution function, we derive the outage probability and the average delivery rate by taking advantage of tools from the deterministic equivalent (DE) and stochastic geometry analyses. Numerical results investigate the realistic performance gains of the proposed heterogeneous cache-enabled uplink on the network in terms of cardinal operating parameters. For example, insights regarding the BSs storage size are exposed. Moreover, the impacts of the key parameters such as the file popularity distribution and the target bitrate are investigated. Specifically, the outage probability decreases if the storage size is increased, while the average delivery rate increases. In addition, the concentration parameter, defining the number of files stored at the intermediate nodes (popularity), directly affects the proposed metrics. A higher target rate results in higher outage because fewer users obey this constraint. Also, we demonstrate that a denser network decreases the outage and increases the delivery rate. Hence, the introduction of caching at the uplink of the system design ameliorates the network performance.
| false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 113,207 |
1703.04399 | Frequency Synchronization for Uplink Massive MIMO Systems | In this paper, we propose a frequency synchronization scheme for multiuser orthogonal frequency division multiplexing (OFDM) uplink with a large-scale uniform linear array (ULA) at base station (BS) by exploiting the angle information of users. Considering that the incident signal at BS from each user can be restricted within a certain angular spread, the proposed scheme could perform carrier frequency offset (CFO) estimation for each user individually through a \textit{joint spatial-frequency alignment} procedure and can be completed efficiently with the aid of the fast Fourier transform (FFT). A multi-branch receive beamforming is further designed to yield an equivalent single user transmission model for which the conventional single-user channel estimation and data detection can be carried out. To make the study complete, the theoretical performance analysis of the CFO estimation is also conducted. We further develop a user grouping scheme to deal with the unexpected scenarios in which some users may not be well separated in the spatial domain. Finally, various numerical results are provided to verify the proposed studies. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 69,897 |
2502.01913 | Composite Gaussian Processes Flows for Learning Discontinuous Multimodal
Policies | Learning control policies for real-world robotic tasks often involve challenges such as multimodality, local discontinuities, and the need for computational efficiency. These challenges arise from the complexity of robotic environments, where multiple solutions may coexist. To address these issues, we propose Composite Gaussian Processes Flows (CGP-Flows), a novel semi-parametric model for robotic policy. CGP-Flows integrate Overlapping Mixtures of Gaussian Processes (OMGPs) with the Continuous Normalizing Flows (CNFs), enabling them to model complex policies addressing multimodality and local discontinuities. This hybrid approach retains the computational efficiency of OMGPs while incorporating the flexibility of CNFs. Experiments conducted in both simulated and real-world robotic tasks demonstrate that CGP-Flows significantly improve performance in modeling control policies. In a simulation task, we confirmed that CGP-Flows had a higher success rate compared to the baseline method, and the success rate of CGP-Flows was significantly different from the success rates of the other baselines in chi-square tests. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 530,086 |
2308.04661 | Unified Matrix Factorization with Dynamic Multi-view Clustering | Matrix factorization (MF) is a classical collaborative filtering algorithm for recommender systems. It decomposes the user-item interaction matrix into a product of low-dimensional user representation matrix and item representation matrix. In typical recommendation scenarios, the user-item interaction paradigm is usually a two-stage process and requires static clustering analysis of the obtained user and item representations. The above process, however, is time- and computation-intensive, making it difficult to apply in real time to e-commerce or Internet of Things environments with billions of users and trillions of items. To address this, we propose a unified matrix factorization method based on dynamic multi-view clustering (MFDMC) that employs an end-to-end training paradigm. Specifically, in each view, a user/item representation is regarded as a weighted projection of all clusters. The representation of each cluster is learnable, enabling the dynamic discarding of bad clusters. Furthermore, we employ multi-view clustering to represent multiple roles of users/items, effectively utilizing the representation space and improving the interpretability of the user/item representations for downstream tasks. Extensive experiments show that our proposed MFDMC achieves state-of-the-art performance on real-world recommendation datasets. Additionally, comprehensive visualization and ablation studies interpretably confirm that our method provides meaningful representations for downstream tasks of users/items. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 384,498 |
2310.14492 | Robotic Arm Manipulation to Perform Rock Skipping in Simulation | Rock skipping is a highly dynamic and relatively complex task that can easily be performed by humans. This project aims to bring rock skipping into a robotic setting, utilizing the lessons we learned in Robotic Manipulation. Specifically, this project implements a system consisting of a robotic arm and dynamic environment to perform rock skipping in simulation. By varying important parameters such as release velocity, we hope to use our system to gain insight into the most important factors for maximizing the total number of skips. In addition, by implementing the system in simulation, we have a more rigorous and precise testing approach over these varied test parameters. However, this project experienced some limitations due to gripping inefficiencies and problems with release height trajectories which is further discussed in our report. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 401,877 |
1806.08658 | Privacy-Preserving Identification via Layered Sparse Code Design:
Distributed Servers and Multiple Access Authorization | We propose a new computationally efficient privacy-preserving identification framework based on layered sparse coding. The key idea of the proposed framework is a sparsifying transform learning with ambiguization, which consists of a trained linear map, a component-wise nonlinearity and a privacy amplification. We introduce a practical identification framework, which consists of two phases: public and private identification. The public untrusted server provides the fast search service based on the sparse privacy protected codebook stored at its side. The private trusted server or the local client application performs the refined accurate similarity search using the results of the public search and the layered sparse codebooks stored at its side. The private search is performed in the decoded domain and also the accuracy of private search is chosen based on the authorization level of the client. The efficiency of the proposed method is in computational complexity of encoding, decoding, "encryption" (ambiguization) and "decryption" (purification) as well as storage complexity of the codebooks. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | true | true | 101,193 |
2003.05822 | Topological Effects on Attacks Against Vertex Classification | Vertex classification is vulnerable to perturbations of both graph topology and vertex attributes, as shown in recent research. As in other machine learning domains, concerns about robustness to adversarial manipulation can prevent potential users from adopting proposed methods when the consequence of action is very high. This paper considers two topological characteristics of graphs and explores the way these features affect the amount the adversary must perturb the graph in order to be successful. We show that, if certain vertices are included in the training set, it is possible to substantially increase an adversary's required perturbation budget. On four citation datasets, we demonstrate that if the training set includes high degree vertices or vertices that ensure all unlabeled nodes have neighbors in the training set, the adversary's budget often increases by a substantial factor---often a factor of 2 or more---over random training for the Nettack poisoning attack. Even for especially easy targets (those that are misclassified after just one or two perturbations), the degradation of performance is much slower, assigning much lower probabilities to the incorrect classes. In addition, we demonstrate that this robustness either persists when recently proposed defenses are applied, or is competitive with the resulting performance improvement for the defender. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 167,957 |
1904.07475 | Learning Pyramid-Context Encoder Network for High-Quality Image
Inpainting | High-quality image inpainting requires filling missing regions in a damaged image with plausible content. Existing works either fill the regions by copying image patches or generating semantically-coherent patches from region context, while neglecting the fact that both visual and semantic plausibility are highly demanded. In this paper, we propose a Pyramid-context ENcoder Network (PEN-Net) for image inpainting by deep generative models. The PEN-Net is built upon a U-Net structure, which can restore an image by encoding contextual semantics from full resolution input, and decoding the learned semantic features back into images. Specifically, we propose a pyramid-context encoder, which progressively learns region affinity by attention from a high-level semantic feature map and transfers the learned attention to the previous low-level feature map. As the missing content can be filled by attention transfer from deep to shallow in a pyramid fashion, both visual and semantic coherence for image inpainting can be ensured. We further propose a multi-scale decoder with deeply-supervised pyramid losses and an adversarial loss. Such a design not only results in fast convergence in training, but also more realistic results in testing. Extensive experiments on various datasets show the superior performance of the proposed network. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 127,808 |
math/0212212 | Coverage control for mobile sensing networks | This paper presents control and coordination algorithms for groups of vehicles. The focus is on autonomous vehicle networks performing distributed sensing tasks where each vehicle plays the role of a mobile tunable sensor. The paper proposes gradient descent algorithms for a class of utility functions which encode optimal coverage and sensing policies. The resulting closed-loop behavior is adaptive, distributed, asynchronous, and verifiably correct. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,643 |
1709.05559 | Nonnegative HMM for Babble Noise Derived from Speech HMM: Application to
Speech Enhancement | Deriving a good model for multitalker babble noise can facilitate different speech processing algorithms, e.g. noise reduction, to reduce the so-called cocktail party difficulty. In the available systems, the fact that the babble waveform is generated as a sum of N different speech waveforms is not exploited explicitly. In this paper, first we develop a gamma hidden Markov model for power spectra of the speech signal, and then formulate it as a sparse nonnegative matrix factorization (NMF). Second, the sparse NMF is extended by relaxing the sparsity constraint, and a novel model for babble noise (gamma nonnegative HMM) is proposed in which the babble basis matrix is the same as the speech basis matrix, and only the activation factors (weights) of the basis vectors are different for the two signals over time. Finally, a noise reduction algorithm is proposed using the derived speech and babble models. All of the stationary model parameters are estimated using the expectation-maximization (EM) algorithm, whereas the time-varying parameters, i.e. the gain parameters of speech and babble signals, are estimated using a recursive EM algorithm. The objective and subjective listening evaluations show that the proposed babble model and the final noise reduction algorithm significantly outperform the conventional methods. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 80,905 |
2010.03801 | On functions with the maximal number of bent components | A function $F:\mathbb{F}_2^n\rightarrow \mathbb{F}_2^n$, $n=2m$, can have at most $2^n-2^m$ bent component functions. Trivial examples are obtained as $F(x) = (f_1(x),\ldots,f_m(x),a_1(x),\ldots, a_m(x))$, where $\tilde{F}(x)=(f_1(x),\ldots,f_m(x))$ is a vectorial bent function from $\mathbb{F}_2^n$ to $\mathbb{F}_2^m$, and $a_i$, $1\le i\le m$, are affine Boolean functions. A class of nontrivial examples is given in univariate form with the functions $F(x) = x^{2^r}{\rm Tr^n_m}(\Lambda(x))$, where $\Lambda$ is a linearized permutation of $\mathbb{F}_{2^m}$. In the first part of this article it is shown that plateaued functions with $2^n-2^m$ bent components can have nonlinearity at most $2^{n-1}-2^{\lfloor\frac{n+m}{2}\rfloor}$, a bound which is attained by the example $x^{2^r}{\rm Tr^n_m}(x)$, $1\le r<m$ (Pott et al. 2018). This partially solves Question 5 in Pott et al. 2018. We then analyse the functions of the form $x^{2^r}{\rm Tr^n_m}(\Lambda(x))$. We show that for odd $m$, only $x^{2^r}{\rm Tr^n_m}(x)$, $1\le r<m$, has maximal nonlinearity, whereas there are more of them for even $m$, of which we present one more infinite class explicitly. In detail, we investigate Walsh spectrum, differential spectrum and their relations for the functions $x^{2^r}{\rm Tr^n_m}(\Lambda(x))$. Our results indicate that this class contains many nontrivial EA-equivalence classes of functions with the maximal number of bent components, if $m$ is even, several with maximal possible nonlinearity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 199,535 |
2311.17593 | LanGWM: Language Grounded World Model | Recent advances in deep reinforcement learning have showcased its potential in tackling complex tasks. However, experiments on visual control tasks have revealed that state-of-the-art reinforcement learning models struggle with out-of-distribution generalization. Conversely, expressing higher-level concepts and global contexts is relatively easy using language. Building upon recent success of the large language models, our main objective is to improve the state abstraction technique in reinforcement learning by leveraging language for robust action selection. Specifically, we focus on learning language-grounded visual features to enhance the world model learning, a model-based reinforcement learning technique. To enforce our hypothesis explicitly, we mask out the bounding boxes of a few objects in the image observation and provide the text prompt as descriptions for these masked objects. Subsequently, we predict the masked objects along with the surrounding regions as pixel reconstruction, similar to the transformer-based masked autoencoder approach. Our proposed LanGWM: Language Grounded World Model achieves state-of-the-art performance in out-of-distribution test at the 100K interaction steps benchmarks of iGibson point navigation tasks. Furthermore, our proposed technique of explicit language-grounded visual representation learning has the potential to improve models for human-robot interaction because our extracted visual features are language grounded. | false | false | false | false | true | false | true | true | true | false | false | true | false | false | false | false | false | false | 411,352 |
2009.12410 | Fundamental limitations to no-jerk gearshifts of multi-speed transmission architectures in electric vehicles | Multi-speed transmissions can enhance the performance and reduce the overall cost of an electric vehicle, but they also introduce a challenge: avoiding gearshift jerk, which may sometimes prove to be impossible in the presence of motor and clutch saturation. In this article, we introduce three theorems that explicitly define the fundamental limitations to no-jerk gearshifts resulting from motor or actuator saturation. We compare gearshifts that consist of transferring transmission torque from one friction clutch to another, to the case in which one of the clutches is a one-way clutch. We show that systems with a one-way clutch are more prone to motor saturation, causing gearshift jerk to be more often inevitable. We also study the influence of planetary gearsets on the gearshift dynamical trajectories, and expose the impact on the no-jerk limitations. This work offers tools to compare transmission architectures during the conceptual design phase of a new electric vehicle. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 197,412
2205.10706 | GL-RG: Global-Local Representation Granularity for Video Captioning | Video captioning is a challenging task as it needs to accurately transform visual understanding into natural language description. To date, state-of-the-art methods inadequately model global-local representation across video frames for caption generation, leaving plenty of room for improvement. In this work, we approach the video captioning task from a new perspective and propose a GL-RG framework for video captioning, namely a \textbf{G}lobal-\textbf{L}ocal \textbf{R}epresentation \textbf{G}ranularity. Our GL-RG demonstrates three advantages over the prior efforts: 1) we explicitly exploit extensive visual representations from different video ranges to improve linguistic expression; 2) we devise a novel global-local encoder to produce rich semantic vocabulary to obtain a descriptive granularity of video contents across frames; 3) we develop an incremental training strategy which organizes model learning in an incremental fashion to incur an optimal captioning behavior. Experimental results on the challenging MSR-VTT and MSVD datasets show that our GL-RG outperforms recent state-of-the-art methods by a significant margin. Code is available at \url{https://github.com/ylqi/GL-RG}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 297,825
2012.11810 | Progressive One-shot Human Parsing | Prior human parsing models are limited to parsing humans into classes pre-defined in the training data, which is not flexible to generalize to unseen classes, e.g., new clothing in fashion analysis. In this paper, we propose a new problem named one-shot human parsing (OSHP) that requires parsing a human into an open set of reference classes defined by any single reference example. During training, only base classes defined in the training set are exposed, which can overlap with part of reference classes. In this paper, we devise a novel Progressive One-shot Parsing network (POPNet) to address two critical challenges, i.e., testing bias and small sizes. POPNet consists of two collaborative metric learning modules named Attention Guidance Module and Nearest Centroid Module, which can learn representative prototypes for base classes and quickly transfer the ability to unseen classes during testing, thereby reducing testing bias. Moreover, POPNet adopts a progressive human parsing framework that can incorporate the learned knowledge of parent classes at the coarse granularity to help recognize the descendant classes at the fine granularity, thereby handling the small sizes issue. Experiments on the ATR-OS benchmark tailored for OSHP demonstrate POPNet outperforms other representative one-shot segmentation models by large margins and establishes a strong baseline. Source code can be found at https://github.com/Charleshhy/One-shot-Human-Parsing. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 212,743
1911.07474 | Deep and Dense Sarcasm Detection | Recent work in automated sarcasm detection has placed a heavy focus on context and meta-data. Whilst certain utterances indeed require background knowledge and commonsense reasoning, previous works have only explored shallow models for capturing the lexical, syntactic and semantic cues present within a text. In this paper, we propose a deep 56-layer network, implemented with dense connectivity to model the isolated utterance and extract richer features therein. We compare our approach against recent state-of-the-art architectures which make considerable use of extrinsic information, and demonstrate competitive results whilst using only the local features of the text. Further, we provide an analysis of the dependency of prior convolution outputs in generating the final feature maps. Finally, a case study is presented, supporting that our approach accurately classifies additional uses of clear sarcasm, which a standard CNN misclassifies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 153,864
2205.06009 | Falsesum: Generating Document-level NLI Examples for Recognizing Factual Inconsistency in Summarization | Neural abstractive summarization models are prone to generate summaries which are factually inconsistent with their source documents. Previous work has introduced the task of recognizing such factual inconsistency as a downstream application of natural language inference (NLI). However, state-of-the-art NLI models perform poorly in this context due to their inability to generalize to the target task. In this work, we show that NLI models can be effective for this task when the training data is augmented with high-quality task-oriented examples. We introduce Falsesum, a data generation pipeline leveraging a controllable text generation model to perturb human-annotated summaries, introducing varying types of factual inconsistencies. Unlike previously introduced document-level NLI datasets, our generated dataset contains examples that are diverse and inconsistent yet plausible. We show that models trained on a Falsesum-augmented NLI dataset improve the state-of-the-art performance across four benchmarks for detecting factual inconsistency in summarization. The code to obtain the dataset is available online at https://github.com/joshbambrick/Falsesum | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 296,118
2304.00892 | Asservissement visuel 3D direct dans le domaine spectral | This paper presents a direct 3D visual servo scheme for the automatic alignment of point clouds (respectively, objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that works by estimating the global transformation between a reference point cloud and a target point cloud using harmonic domain data analysis. A 3D discrete Fourier transform (DFT) in $\mathbb{R}^3$ is used for translation estimation and real spherical harmonics in $SO(3)$ are used for rotation estimation. This approach allows us to derive a decoupled visual servo controller with 6 degrees of freedom. We then show how this approach can be used as a controller for a robotic arm to perform a positioning task. Unlike existing 3D visual servo methods, our method works well with partial point clouds and in cases of large initial transformations between the initial and desired position. Additionally, using spectral data (instead of spatial data) for the transformation estimation makes our method robust to sensor-induced noise and partial occlusions. Our method has been successfully validated experimentally on point clouds obtained with a depth camera mounted on a robotic arm. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 355,845 |
cs/0504085 | Capacity per Unit Energy of Fading Channels with a Peak Constraint | A discrete-time single-user scalar channel with temporally correlated Rayleigh fading is analyzed. There is no side information at the transmitter or the receiver. A simple expression is given for the capacity per unit energy, in the presence of a peak constraint. The simple formula of Verdu for capacity per unit cost is adapted to a channel with memory, and is used in the proof. In addition to bounding the capacity of a channel with correlated fading, the result gives some insight into the relationship between the correlation in the fading process and the channel capacity. The results are extended to a channel with side information, showing that the capacity per unit energy is one nat per Joule, independently of the peak power constraint. A continuous-time version of the model is also considered. The capacity per unit energy subject to a peak constraint (but no bandwidth constraint) is given by an expression similar to that for discrete time, and is evaluated for Gauss-Markov and Clarke fading channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 538,689 |
1701.02660 | Towards parallelizable sampling-based Nonlinear Model Predictive Control | This paper proposes a new sampling-based nonlinear model predictive control (MPC) algorithm, with a bound on complexity quadratic in the prediction horizon N and linear in the number of samples. The idea of the proposed algorithm is to use the sequence of predicted inputs from the previous time step as a warm start, and to iteratively update this sequence by changing its elements one by one, starting from the last predicted input and ending with the first predicted input. This strategy, which resembles the dynamic programming principle, allows for parallelization up to a certain level and yields a suboptimal nonlinear MPC algorithm with guaranteed recursive feasibility, stability and improved cost function at every iteration, which is suitable for real-time implementation. The complexity of the algorithm per each time step in the prediction horizon depends only on the horizon, the number of samples and parallel threads, and it is independent of the measured system state. Comparisons with the fmincon nonlinear optimization solver on benchmark examples indicate that as the simulation time progresses, the proposed algorithm converges rapidly to the "optimal" solution, even when using a small number of samples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 66,582 |
2301.01514 | PENDANTSS: PEnalized Norm-ratios Disentangling Additive Noise, Trend and Sparse Spikes | Denoising, detrending, deconvolution: usual restoration tasks, traditionally decoupled. Coupled formulations entail complex ill-posed inverse problems. We propose PENDANTSS for joint trend removal and blind deconvolution of sparse peak-like signals. It blends a parsimonious prior with the hypothesis that smooth trend and noise can somewhat be separated by low-pass filtering. We combine the generalized quasi-norm ratio SOOT/SPOQ sparse penalties $\ell_p/\ell_q$ with the BEADS ternary assisted source separation algorithm. This results in a both convergent and efficient tool, with a novel Trust-Region block alternating variable metric forward-backward approach. It outperforms comparable methods, when applied to typically peaked analytical chemistry signals. Reproducible code is provided. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 339,260
2306.13968 | Fusing Multimodal Signals on Hyper-complex Space for Extreme Abstractive Text Summarization (TL;DR) of Scientific Contents | The realm of scientific text summarization has experienced remarkable progress due to the availability of annotated brief summaries and ample data. However, the utilization of multiple input modalities, such as videos and audio, has yet to be thoroughly explored. At present, scientific multimodal-input-based text summarization systems tend to employ longer target summaries like abstracts, leading to an underwhelming performance in the task of text summarization. In this paper, we deal with a novel task of extreme abstractive text summarization (aka TL;DR generation) by leveraging multiple input modalities. To this end, we introduce mTLDR, a first-of-its-kind dataset for the aforementioned task, comprising videos, audio, and text, along with both author-composed summaries and expert-annotated summaries. The mTLDR dataset accompanies a total of 4,182 instances collected from various academic conference proceedings, such as ICLR, ACL, and CVPR. Subsequently, we present mTLDRgen, an encoder-decoder-based model that employs a novel dual-fused hyper-complex Transformer combined with a Wasserstein Riemannian Encoder Transformer, to dexterously capture the intricacies between different modalities in a hyper-complex latent geometric space. The hyper-complex Transformer captures the intrinsic properties between the modalities, while the Wasserstein Riemannian Encoder Transformer captures the latent structure of the modalities in the latent space geometry, thereby enabling the model to produce diverse sentences. mTLDRgen outperforms 20 baselines on mTLDR as well as another non-scientific dataset (How2) across three Rouge-based evaluation measures. Furthermore, based on the qualitative metrics, BERTScore and FEQA, and human evaluations, we demonstrate that the summaries generated by mTLDRgen are fluent and congruent to the original source material. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 375,480
0805.0909 | SANA - Security Analysis in Internet Traffic through Artificial Immune Systems | Attacks by viruses, worms, hackers, etc. are a network security problem in many organisations. Current intrusion detection systems have significant disadvantages, e.g. the need for plenty of computational power or a local installation. Therefore, we introduce a novel framework for network security called SANA. SANA contains an artificial immune system with artificial cells which perform certain tasks in order to support existing systems to better secure the network against intrusions. The advantages of SANA are that it is efficient, adaptive, autonomous, and massively distributed. In this article, we describe the architecture of the artificial immune system and the functionality of its components. We briefly explain the implementation and discuss results. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | 1,728
2010.12523 | Neural Passage Retrieval with Improved Negative Contrast | In this paper we explore the effects of negative sampling in dual encoder models used to retrieve passages for automatic question answering. We explore four negative sampling strategies that complement the straightforward random sampling of negatives, typically used to train dual encoder models. Out of the four strategies, three are based on retrieval and one on heuristics. Our retrieval-based strategies are based on the semantic similarity and the lexical overlap between questions and passages. We train the dual encoder models in two stages: pre-training with synthetic data and fine-tuning with domain-specific data. We apply negative sampling to both stages. The approach is evaluated in two passage retrieval tasks. Even though it is not evident that there is one single sampling strategy that works best in all the tasks, it is clear that our strategies contribute to improving the contrast between the response and all the other passages. Furthermore, mixing the negatives from different strategies achieves performance on par with the best performing strategy in all tasks. Our results establish a new state-of-the-art level of performance on two of the open-domain question answering datasets that we evaluated. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 202,727
1201.3592 | Characterizing Interdisciplinarity of Researchers and Research Topics Using Web Search Engines | Researchers' networks have been subject to active modeling and analysis. Earlier literature mostly focused on citation or co-authorship networks reconstructed from annotated scientific publication databases, which have several limitations. Recently, general-purpose web search engines have also been utilized to collect information about social networks. Here we reconstructed, using web search engines, a network representing the relatedness of researchers to their peers as well as to various research topics. Relatedness between researchers and research topics was characterized by visibility boost, i.e., the increase of a researcher's visibility gained by focusing on a particular topic. It was observed that researchers who had high visibility boosts by the same research topic tended to be close to each other in their network. We calculated correlations between visibility boosts by research topics and researchers' interdisciplinarity at the individual level (diversity of topics related to the researcher) and at the social level (his/her centrality in the researchers' network). We found that visibility boosts by certain research topics were positively correlated with researchers' individual-level interdisciplinarity despite their negative correlations with the general popularity of researchers. It was also found that visibility boosts by network-related topics had positive correlations with researchers' social-level interdisciplinarity. Research topics' correlations with researchers' individual- and social-level interdisciplinarities were found to be nearly independent from each other. These findings suggest that the notion of "interdisciplinarity" of a researcher should be understood as a multi-dimensional concept that should be evaluated using multiple assessment means. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 13,859
2106.01893 | Worst-Case Pointing Performance Analysis for Large Flexible Spacecraft | This paper presents a tool, PELIB, developed in the MATLAB/SIMULINK environment to perform pointing performance analysis based on European pointing standards. PELIB is designed as an extension of the Satellite Dynamics Toolbox (SDT), which derives the Linear Fractional Transformation (LFT) models of flexible space structures. The addition of PELIB will allow the users of SDT to perform pointing performance analysis of real mission scenarios in the same environment used for control synthesis. PELIB also offers the possibility to take into account uncertainties in the system. This feature represents an enhancement to the current verification tools available in the European space industry community by providing the worst-case pointing budget. The capabilities of PELIB were demonstrated in a case study involving a spacecraft model with two flexible solar arrays. Several error sources, as well as uncertain parameters, were included in this model. The nominal performance has been investigated using PELIB and compared with the current European reference tool. The worst-case performance is also investigated with the new feature of PELIB to obtain the worst-case performance budget. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 238,654
1304.5634 | A Survey on Multi-view Learning | In recent years, a great many methods of learning from multi-view data by considering the diversity of different views have been proposed. These views may be obtained from multiple sources or different feature subsets. In trying to organize and highlight similarities and differences between the variety of multi-view learning approaches, we review a number of representative multi-view learning algorithms in different areas and classify them into three groups: 1) co-training, 2) multiple kernel learning, and 3) subspace learning. Notably, co-training style algorithms train alternately to maximize the mutual agreement on two distinct views of the data; multiple kernel learning algorithms exploit kernels that naturally correspond to different views and combine kernels either linearly or non-linearly to improve learning performance; and subspace learning algorithms aim to obtain a latent subspace shared by multiple views by assuming that the input views are generated from this latent subspace. Though there is significant variance in the approaches to integrating multiple views to improve learning performance, they mainly exploit either the consensus principle or the complementary principle to ensure the success of multi-view learning. Since accessing multiple views is the foundation of multi-view learning, besides the study of learning a model from multiple views, it is also valuable to study how to construct multiple views and how to evaluate these views. Overall, by exploring the consistency and complementary properties of different views, multi-view learning is rendered more effective, more promising, and has better generalization ability than single-view learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 24,102
2401.00230 | Transformer Multivariate Forecasting: Less is More? | In the domain of multivariate forecasting, transformer models stand out as powerful apparatus, displaying exceptional capabilities in handling messy datasets from real-world contexts. However, the inherent complexity of these datasets, characterized by numerous variables and lengthy temporal sequences, poses challenges, including increased noise and extended model runtime. This paper focuses on reducing redundant information to elevate forecasting accuracy while optimizing runtime efficiency. We propose a novel transformer forecasting framework enhanced by Principal Component Analysis (PCA) to tackle this challenge. The framework is evaluated by five state-of-the-art (SOTA) models and four diverse real-world datasets. Our experimental results demonstrate the framework's ability to minimize prediction errors across all models and datasets while significantly reducing runtime. From the model perspective, one of the PCA-enhanced models: PCA+Crossformer, reduces mean square errors (MSE) by 33.3% and decreases runtime by 49.2% on average. From the dataset perspective, the framework delivers 14.3% MSE and 76.6% runtime reduction on Electricity datasets, as well as 4.8% MSE and 86.9% runtime reduction on Traffic datasets. This study aims to advance various SOTA models and enhance transformer-based time series forecasting for intricate data. Code is available at: https://github.com/jingjing-unilu/PCA_Transformer. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 418,921 |
1703.09651 | Structural Damage Identification Using Artificial Neural Network and Synthetic data | This paper presents a real-time vibration-based identification technique using measured frequency response functions (FRFs) under random vibration loading. Artificial Neural Networks (ANNs) are trained to map damage fingerprints to damage characteristic parameters. The principal component statistical analysis (PCA) technique was used to tackle the problem of high dimensionality and high noise of data, which is common for industrial structures. The present study considers crack, rivet hole expansion, and redundant uniform mass as damages on the structure. Frequency response function data, after being reduced in size using PCA, are fed to individual neural networks to localize and predict the severity of damage on the structure. The system of ANNs is trained with both numerical and experimental model data to make the system reliable and robust. The methodology is applied to a numerical model of a stiffened panel structure, where damages are confined close to the stiffener. The results showed that, in all the cases considered, it is possible to localize and predict the severity of the damage occurrence with very good accuracy and reliability. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 70,777
1810.12536 | Forest Tree Detection and Segmentation using High Resolution Airborne LiDAR | This paper presents an autonomous approach to tree detection and segmentation in high resolution airborne LiDAR that utilises state-of-the-art region-based CNN and 3D-CNN deep learning algorithms. If the number of training examples for a site is low, it is shown to be beneficial to transfer a segmentation network learnt from a different site with more training data and fine-tune it. The algorithm was validated using airborne laser scanning over two different commercial pine plantations. The results show that the proposed approach performs favourably in comparison to other methods for tree detection and segmentation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 111,792
2303.03679 | MAST: Masked Augmentation Subspace Training for Generalizable Self-Supervised Priors | Recent Self-Supervised Learning (SSL) methods are able to learn feature representations that are invariant to different data augmentations, which can then be transferred to downstream tasks of interest. However, different downstream tasks require different invariances for their best performance, so the optimal choice of augmentations for SSL depends on the target task. In this paper, we aim to learn self-supervised features that generalize well across a variety of downstream tasks (e.g., object classification, detection and instance segmentation) without knowing any task information beforehand. We do so by Masked Augmentation Subspace Training (or MAST) to encode in the single feature space the priors from different data augmentations in a factorized way. Specifically, we disentangle the feature space into separate subspaces, each induced by a learnable mask that selects relevant feature dimensions to model invariance to a specific augmentation. We show the success of MAST in jointly capturing generalizable priors from different augmentations, using both unique and shared features across the subspaces. We further show that MAST benefits from uncertainty modeling to reweight ambiguous samples from strong augmentations that may cause similarity mismatch in each subspace. Experiments demonstrate that MAST consistently improves generalization on various downstream tasks, while being task-agnostic and efficient during SSL. We also provide interesting insights about how different augmentations are related and how uncertainty reflects learning difficulty. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 349,814
2112.00568 | Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning | Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks. Existing face anti-spoofing datasets lack diversity due to the insufficient identity and insignificant variance, which limits the generalization ability of FAS model. In this paper, we propose Dual Spoof Disentanglement Generation (DSDG) framework to tackle this challenge by "anti-spoofing via generation". Depending on the interpretable factorized latent disentanglement in Variational Autoencoder (VAE), DSDG learns a joint distribution of the identity representation and the spoofing pattern representation in the latent space. Then, large-scale paired live and spoofing images can be generated from random noise to boost the diversity of the training set. However, some generated face images are partially distorted due to the inherent defect of VAE. Such noisy samples are hard to predict precise depth values, thus may obstruct the widely-used depth supervised optimization. To tackle this issue, we further introduce a lightweight Depth Uncertainty Module (DUM), which alleviates the adverse effects of noisy samples by depth uncertainty learning. DUM is developed without extra-dependency, thus can be flexibly integrated with any depth supervised network for face anti-spoofing. We evaluate the effectiveness of the proposed method on five popular benchmarks and achieve state-of-the-art results under both intra- and inter- test settings. The codes are available at https://github.com/JDAI-CV/FaceX-Zoo/tree/main/addition_module/DSDG. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 269,176
2501.01986 | FrameFusion: Combining Similarity and Importance for Video Token Reduction on Large Visual Language Models | The increasing demand to process long and high-resolution videos significantly burdens Large Vision-Language Models (LVLMs) due to the enormous number of visual tokens. Existing token reduction methods primarily focus on importance-based token pruning, which overlooks the redundancy caused by frame resemblance and repetitive visual elements. In this paper, we analyze the high vision token similarities in LVLMs. We reveal that token similarity distribution condenses as layers deepen while maintaining ranking consistency. Leveraging the unique properties of similarity over importance, we introduce FrameFusion, a novel approach that combines similarity-based merging with importance-based pruning for better token reduction in LVLMs. FrameFusion identifies and merges similar tokens before pruning, opening up a new perspective for token reduction. We evaluate FrameFusion on diverse LVLMs, including Llava-Video-{7B,32B,72B}, and MiniCPM-V-8B, on video understanding, question-answering, and retrieval benchmarks. Experiments show that FrameFusion reduces vision tokens by 70$\%$, achieving 3.4-4.4x LLM speedups and 1.6-1.9x end-to-end speedups, with an average performance impact of less than 3$\%$. Our code is available at https://github.com/thu-nics/FrameFusion. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 522,288
2004.11273 | Ensemble Generative Cleaning with Feedback Loops for Defending Adversarial Attacks | Effective defense of deep neural networks against adversarial attacks remains a challenging problem, especially under powerful white-box attacks. In this paper, we develop a new method called ensemble generative cleaning with feedback loops (EGC-FL) for effective defense of deep neural networks. The proposed EGC-FL method is based on two central ideas. First, we introduce a transformed deadzone layer into the defense network, which consists of an orthonormal transform and a deadzone-based activation function, to destroy the sophisticated noise pattern of adversarial attacks. Second, by constructing a generative cleaning network with a feedback loop, we are able to generate an ensemble of diverse estimations of the original clean image. We then learn a network to fuse this set of diverse estimations together to restore the original image. Our extensive experimental results demonstrate that our approach improves the state-of-art by large margins in both white-box and black-box attacks. It significantly improves the classification accuracy for white-box PGD attacks upon the second best method by more than 29% on the SVHN dataset and more than 39% on the challenging CIFAR-10 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 173,867
0711.3964 | Iterative Filtering for a Dynamical Reputation System | The paper introduces a novel iterative method that assigns a reputation to n + m items: n raters and m objects. Each rater evaluates a subset of objects leading to a n x m rating matrix with a certain sparsity pattern. From this rating matrix we give a nonlinear formula to define the reputation of raters and objects. We also provide an iterative algorithm that superlinearly converges to the unique vector of reputations and this for any rating matrix. In contrast to classical outliers detection, no evaluation is discarded in this method but each one is taken into account with different weights for the reputation of the objects. The complexity of one iteration step is linear in the number of evaluations, making our algorithm efficient for large data set. Experiments show good robustness of the reputation of the objects against cheaters and spammers and good detection properties of cheaters and spammers. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 956 |
2309.09653 | Towards Model Co-evolution Across Self-Adaptation Steps for Combined Safety and Security Analysis | Self-adaptive systems offer several attack surfaces due to the communication via different channels and the different sensors required to observe the environment. Often, attacks cause safety to be compromised as well, making it necessary to consider these two aspects together. Furthermore, the approaches currently used for safety and security analysis do not sufficiently take into account the intermediate steps of an adaptation. Current work in this area ignores the fact that a self-adaptive system also reveals possible vulnerabilities (even if only temporarily) during the adaptation. To address this issue, we propose a modeling approach that takes into account the different relevant aspects of a system, its adaptation process, as well as safety hazards and security attacks. We present several models that describe different aspects of a self-adaptive system and we outline our idea of how these models can then be combined into an Attack-Fault Tree. This allows modeling aspects of the system on different levels of abstraction and co-evolve the models using transformations according to the adaptation of the system. Finally, analyses can then be performed as usual on the resulting Attack-Fault Tree. | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | true | 392,692
2108.01344 | Adaptive Affinity Loss and Erroneous Pseudo-Label Refinement for Weakly Supervised Semantic Segmentation | Semantic segmentation has been continuously investigated in the last ten years, and majority of the established technologies are based on supervised models. In recent years, image-level weakly supervised semantic segmentation (WSSS), including single- and multi-stage process, has attracted large attention due to data labeling efficiency. In this paper, we propose to embed affinity learning of multi-stage approaches in a single-stage model. To be specific, we introduce an adaptive affinity loss to thoroughly learn the local pairwise affinity. As such, a deep neural network is used to deliver comprehensive semantic information in the training phase, whilst improving the performance of the final prediction module. On the other hand, considering the existence of errors in the pseudo labels, we propose a novel label reassign loss to mitigate over-fitting. Extensive experiments are conducted on the PASCAL VOC 2012 dataset to evaluate the effectiveness of our proposed approach that outperforms other standard single-stage methods and achieves comparable performance against several multi-stage methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,001
1602.08237 | Machine Agency in Human-Machine Networks; Impacts and Trust Implications | We live in an emerging hyper-connected era in which people are in contact and interacting with an increasing number of other people and devices. Increasingly, modern IT systems form networks of humans and machines that interact with one another. As machines take a more active role in such networks, they exert an increasing level of influence on other participants. We review the existing literature on agency and propose a definition of agency that is practical for describing the capabilities and impact human and machine actors may have in a human-machine network. On this basis, we discuss and demonstrate the impact and trust implications for machine actors in human-machine networks for emergency decision support, healthcare and future smart homes. We maintain that machine agency not only facilitates human to machine trust, but also interpersonal trust; and that trust must develop to be able to seize the full potential of future technology. | true | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 52,620
2012.05015 | Fusion of rain radar images and wind forecasts in a deep learning model applied to rain nowcasting | Short- or mid-term rainfall forecasting is a major task with several environmental applications such as agricultural management or flood risk monitoring. Existing data-driven approaches, especially deep learning models, have shown significant skill at this task, using only rainfall radar images as inputs. In order to determine whether using other meteorological parameters such as wind would improve forecasts, we trained a deep learning model on a fusion of rainfall radar images and wind velocity produced by a weather forecast model. The network was compared to a similar architecture trained only on radar data, to a basic persistence model and to an approach based on optical flow. Our network outperforms by 8% the F1-score calculated for the optical flow on moderate and higher rain events for forecasts at a horizon time of 30 min. Furthermore, it outperforms by 7% the same architecture trained using only rainfall radar images. Merging rain and wind data has also proven to stabilize the training process and enabled significant improvement especially on the difficult-to-predict high precipitation rainfalls. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,654
2310.20326 | Erato: Automatizing Poetry Evaluation | We present Erato, a framework designed to facilitate the automated evaluation of poetry, including that generated by poetry generation systems. Our framework employs a diverse set of features, and we offer a brief overview of Erato's capabilities and its potential for expansion. Using Erato, we compare and contrast human-authored poetry with automatically-generated poetry, demonstrating its effectiveness in identifying key differences. Our implementation code and software are freely available under the GNU GPLv3 license. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 404,345 |
2008.13626 | Example-based Color Transfer with Gaussian Mixture Modeling | Color transfer, which plays a key role in image editing, has attracted noticeable attention recently. It has remained a challenge to date due to various issues such as time-consuming manual adjustments and prior segmentation issues. In this paper, we propose to model color transfer under a probability framework and cast it as a parameter estimation problem. In particular, we relate the transferred image with the example image under the Gaussian Mixture Model (GMM) and regard the transferred image color as the GMM centroids. We employ the Expectation-Maximization (EM) algorithm (E-step and M-step) for optimization. To better preserve gradient information, we introduce a Laplacian based regularization term to the objective function at the M-step which is solved by deriving a gradient descent algorithm. Given the input of a source image and an example image, our method is able to generate continuous color transfer results with increasing EM iterations. Various experiments show that our approach generally outperforms other competitive color transfer methods, both visually and quantitatively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 193,900 |
2210.12026 | Ontology Development is Consensus Creation, Not (Merely) Representation | Ontology development methodologies emphasise knowledge gathering from domain experts and documentary resources, and knowledge representation using an ontology language such as OWL or FOL. However, working ontologists are often surprised by how challenging and slow it can be to develop ontologies. Here, with a particular emphasis on the sorts of ontologies that are content-heavy and intended to be shared across a community of users (reference ontologies), we propose that a significant and heretofore under-emphasised contributor of challenges during ontology development is the need to create, or bring about, consensus in the face of disagreement. For this reason reference ontology development cannot be automated, at least within the limitations of existing AI approaches. Further, for the same reason ontologists are required to have specific social-negotiating skills which are currently lacking in most technical curricula. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 325,551 |
2104.05743 | Practical Defences Against Model Inversion Attacks for Split Neural Networks | We describe a threat model under which a split network-based federated learning system is susceptible to a model inversion attack by a malicious computational server. We demonstrate that the attack can be successfully performed with limited knowledge of the data distribution by the attacker. We propose a simple additive noise method to defend against model inversion, finding that the method can significantly reduce attack efficacy at an acceptable accuracy trade-off on MNIST. Furthermore, we show that NoPeekNN, an existing defensive method, protects different information from exposure, suggesting that a combined defence is necessary to fully protect private user data. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 229,829
2002.04254 | Minimax optimal goodness-of-fit testing for densities and multinomials under a local differential privacy constraint | Finding anonymization mechanisms to protect personal data is at the heart of recent machine learning research. Here, we consider the consequences of local differential privacy constraints on goodness-of-fit testing, i.e. the statistical problem assessing whether sample points are generated from a fixed density $f_0$, or not. The observations are kept hidden and replaced by a stochastic transformation satisfying the local differential privacy constraint. In this setting, we propose a testing procedure which is based on an estimation of the quadratic distance between the density $f$ of the unobserved samples and $f_0$. We establish an upper bound on the separation distance associated with this test, and a matching lower bound on the minimax separation rates of testing under non-interactive privacy in the case that $f_0$ is uniform, in discrete and continuous settings. To the best of our knowledge, we provide the first minimax optimal test and associated private transformation under a local differential privacy constraint over Besov balls in the continuous setting, quantifying the price to pay for data privacy. We also present a test that is adaptive to the smoothness parameter of the unknown density and remains minimax optimal up to a logarithmic factor. Finally, we note that our results can be translated to the discrete case, where the treatment of probability vectors is shown to be equivalent to that of piecewise constant densities in our setting. That is why we work with a unified setting for both the continuous and the discrete cases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 163,560
2001.02407 | SPACE: Unsupervised Object-Oriented Scene Representation via Spatial Attention and Decomposition | The ability to decompose complex multi-object scenes into meaningful abstractions like objects is fundamental to achieve higher-level cognition. Previous approaches for unsupervised object-oriented scene representation learning are either based on spatial-attention or scene-mixture approaches and limited in scalability which is a main obstacle towards modeling real-world scenes. In this paper, we propose a generative latent variable model, called SPACE, that provides a unified probabilistic modeling framework that combines the best of spatial-attention and scene-mixture approaches. SPACE can explicitly provide factorized object representations for foreground objects while also decomposing background segments of complex morphology. Previous models are good at either of these, but not both. SPACE also resolves the scalability problems of previous methods by incorporating parallel spatial-attention and thus is applicable to scenes with a large number of objects without performance degradations. We show through experiments on Atari and 3D-Rooms that SPACE achieves the above properties consistently in comparison to SPAIR, IODINE, and GENESIS. Results of our experiments can be found on our project website: https://sites.google.com/view/space-project-page | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 159,733
2311.04243 | Toward Planet-Wide Traffic Camera Calibration | Despite the widespread deployment of outdoor cameras, their potential for automated analysis remains largely untapped due, in part, to calibration challenges. The absence of precise camera calibration data, including intrinsic and extrinsic parameters, hinders accurate real-world distance measurements from captured videos. To address this, we present a scalable framework that utilizes street-level imagery to reconstruct a metric 3D model, facilitating precise calibration of in-the-wild traffic cameras. Notably, our framework achieves 3D scene reconstruction and accurate localization of over 100 global traffic cameras and is scalable to any camera with sufficient street-level imagery. For evaluation, we introduce a dataset of 20 fully calibrated traffic cameras, demonstrating our method's significant enhancements over existing automatic calibration techniques. Furthermore, we highlight our approach's utility in traffic analysis by extracting insights via 3D vehicle reconstruction and speed measurement, thereby opening up the potential of using outdoor cameras for automated analysis. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 406,155 |