| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1905.01965 | Arabic Text Diacritization Using Deep Neural Networks | Diacritization of Arabic text is an interesting yet challenging problem with various applications, ranging from speech synthesis to helping students learn Arabic. Like many other tasks in Arabic language processing, the limited effort invested in this problem and the lack of available (open-source) resources hinder progress towards solving it. This work provides a critical review of the currently existing systems, measures and resources for Arabic text diacritization. Moreover, it introduces a much-needed free-for-all cleaned dataset that can be easily used to benchmark any work on Arabic diacritization. Extracted from the Tashkeela Corpus, the dataset consists of 55K lines containing about 2.3M words. After constructing the dataset, existing tools and systems are tested on it. The results of the experiments show that the neural Shakkala system significantly outperforms traditional rule-based approaches and other closed-source tools, with a Diacritic Error Rate (DER) of 2.88% compared with 13.78%, which is the best DER for a non-neural approach (obtained by the Mishkal tool). | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 129,854 |
2102.04223 | Multi-level Distance Regularization for Deep Metric Learning | We propose a novel distance-based regularization method for deep metric learning called Multi-level Distance Regularization (MDR). MDR explicitly disturbs the learning procedure by regularizing pairwise distances between embedding vectors into multiple levels that represent degrees of similarity between a pair. In the training stage, the model is trained with both MDR and an existing deep metric learning loss function simultaneously; the two losses interfere with each other's objectives, which makes the learning process harder. Moreover, MDR prevents some examples from being ignored or overly influential in the learning process. These properties allow the parameters of the embedding network to settle on a local optimum with better generalization. Without bells and whistles, MDR with a simple Triplet loss achieves state-of-the-art performance on various benchmark datasets: CUB-200-2011, Cars-196, Stanford Online Products, and In-Shop Clothes Retrieval. We perform extensive ablation studies on its behavior to show the effectiveness of MDR. By easily adopting our MDR, previous approaches can be improved in performance and generalization ability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 219,033 |
2302.02363 | Quantized-Constraint Concatenation and the Covering Radius of Constrained Systems | We introduce a novel framework for implementing error-correction in constrained systems. The main idea of our scheme, called Quantized-Constraint Concatenation (QCC), is to employ a process of embedding the codewords of an error-correcting code in a constrained system as a (noisy, irreversible) quantization process. This is in contrast to traditional methods, such as concatenation and reverse concatenation, where the encoding into the constrained system is reversible. The possible number of channel errors QCC is capable of correcting is linear in the block length $n$, improving upon the $O(\sqrt{n})$ possible with the state-of-the-art known schemes. For a given constrained system, the performance of QCC depends on a new fundamental parameter of the constrained system - its covering radius. Motivated by QCC, we study the covering radius of constrained systems in both combinatorial and probabilistic settings. We reveal an intriguing characterization of the covering radius of a constrained system using ergodic theory. We use this equivalent characterization in order to establish efficiently computable upper bounds on the covering radius. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 343,974 |
2006.08718 | Analytic Manifold Learning: Unifying and Evaluating Representations for Continuous Control | We address the problem of learning reusable state representations from streaming high-dimensional observations. This is important for areas like Reinforcement Learning (RL), which yields non-stationary data distributions during training. We make two key contributions. First, we propose an evaluation suite that measures alignment between latent and true low-dimensional states. We benchmark several widely used unsupervised learning approaches. This uncovers the strengths and limitations of existing approaches that impose additional constraints/objectives on the latent space. Our second contribution is a unifying mathematical formulation for learning latent relations. We learn analytic relations on source domains, then use these relations to help structure the latent space when learning on target domains. This formulation enables a more general, flexible and principled way of shaping the latent space. It formalizes the notion of learning independent relations, without imposing restrictive simplifying assumptions or requiring domain-specific information. We present mathematical properties, concrete algorithms for implementation and experimental validation of successful learning and transfer of latent relations. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 182,274 |
2411.14865 | Benchmarking the Robustness of Optical Flow Estimation to Corruptions | Optical flow estimation is extensively used in autonomous driving and video editing. While existing models demonstrate state-of-the-art performance across various benchmarks, the robustness of these methods has been infrequently investigated. Despite some research focusing on the robustness of optical flow models against adversarial attacks, there has been a lack of studies investigating their robustness to common corruptions. Taking into account the unique temporal characteristics of optical flow, we introduce 7 temporal corruptions specifically designed for benchmarking the robustness of optical flow models, in addition to 17 classical single-image corruptions, in which an advanced PSF blur simulation method is applied. Two robustness benchmarks, KITTI-FC and GoPro-FC, are subsequently established as the first corruption robustness benchmarks for optical flow estimation, with Out-Of-Domain (OOD) and In-Domain (ID) settings to facilitate comprehensive studies. Robustness metrics, Corruption Robustness Error (CRE), Corruption Robustness Error ratio (CREr), and Relative Corruption Robustness Error (RCRE), are further introduced to quantify optical flow estimation robustness. 29 model variants from 15 optical flow methods are evaluated, yielding 10 intriguing observations, such as 1) the absolute robustness of the model is heavily dependent on the estimation performance; 2) corruptions that diminish local information are more serious than those that reduce visual effects. We also give suggestions for the design and application of optical flow models. We anticipate that our benchmark will serve as a foundational resource for advancing research in robust optical flow estimation. The benchmarks and source code will be released at https://github.com/ZhonghuaYi/optical_flow_robustness_benchmark. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 510,350 |
1302.0406 | Generalization Guarantees for a Binary Classification Framework for Two-Stage Multiple Kernel Learning | We present generalization bounds for the TS-MKL framework for two stage multiple kernel learning. We also present bounds for sparse kernel learning formulations within the TS-MKL framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 21,719 |
1905.08059 | Towards a Flexible Deep Learning Method for Automatic Detection of Clinically Relevant Multi-Modal Events in the Polysomnogram | Much attention has been given to automatic sleep staging algorithms in past years, but the detection of discrete events in sleep studies is also crucial for precise characterization of sleep patterns and possible diagnosis of sleep disorders. We propose here a deep learning model for automatic detection and annotation of arousals and leg movements. Both of these are commonly seen during normal sleep, while an excessive amount of either is linked to disrupted sleep patterns, excessive daytime sleepiness impacting quality of life, and various sleep disorders. Our model was trained on 1,485 subjects and tested on 1,000 separate recordings of sleep. We tested two different experimental setups and found optimal arousal detection was attained by including a recurrent neural network module in our default model with a dynamic default event window (F1 = 0.75), while optimal leg movement detection was attained using a static event window (F1 = 0.65). Our work shows promise while still allowing for improvements. Specifically, future research will explore the proposed model as a general-purpose sleep analysis model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 131,390 |
1902.00249 | ProteinNet: a standardized data set for machine learning of protein structure | Rapid progress in deep learning has spurred its application to bioinformatics problems including protein structure prediction and design. In classic machine learning problems like computer vision, progress has been driven by standardized data sets that facilitate fair assessment of new methods and lower the barrier to entry for non-domain experts. While data sets of protein sequence and structure exist, they lack certain components critical for machine learning, including high-quality multiple sequence alignments and insulated training / validation splits that account for deep but only weakly detectable homology across protein space. We have created the ProteinNet series of data sets to provide a standardized mechanism for training and assessing data-driven models of protein sequence-structure relationships. ProteinNet integrates sequence, structure, and evolutionary information in programmatically accessible file formats tailored for machine learning frameworks. Multiple sequence alignments of all structurally characterized proteins were created using substantial high-performance computing resources. Standardized data splits were also generated to emulate the difficulty of past CASP (Critical Assessment of protein Structure Prediction) experiments by resetting protein sequence and structure space to the historical states that preceded six prior CASPs. Utilizing sensitive evolution-based distance metrics to segregate distantly related proteins, we have additionally created validation sets distinct from the official CASP sets that faithfully mimic their difficulty. ProteinNet thus represents a comprehensive and accessible resource for training and assessing machine-learned models of protein structure. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,364 |
2208.08338 | SO(3)-Pose: SO(3)-Equivariance Learning for 6D Object Pose Estimation | 6D pose estimation of rigid objects from RGB-D images is crucial for object grasping and manipulation in robotics. Although the RGB channels and the depth (D) channel are often complementary, providing respectively the appearance and geometry information, it is still non-trivial to fully benefit from the two cross-modal data. We make the simple yet new observation that when an object rotates, its semantic label is invariant to the pose while its keypoint offset direction varies with the pose. To this end, we present SO(3)-Pose, a new representation learning network to explore SO(3)-equivariant and SO(3)-invariant features from the depth channel for pose estimation. The SO(3)-invariant features facilitate learning more distinctive representations for segmenting objects with similar appearance from RGB channels. The SO(3)-equivariant features communicate with RGB features to deduce the (missed) geometry for detecting keypoints of an object with a reflective surface from the depth channel. Unlike most existing pose estimation methods, our SO(3)-Pose not only implements information communication between the RGB and depth channels, but also naturally absorbs SO(3)-equivariance geometry knowledge from depth images, leading to better appearance and geometry representation learning. Comprehensive experiments show that our method achieves state-of-the-art performance on three benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 313,342 |
2203.08465 | Building AI Innovation Labs together with Companies | In the future, most companies will be confronted with the topic of Artificial Intelligence (AI) and will have to decide on their strategy in this regard. Currently, a lot of companies are thinking about whether and how AI and the usage of data will impact their business model and what potential use cases could look like. One of the biggest challenges lies in coming up with innovative solution ideas with a clear business value. This requires business competencies on the one hand and technical competencies in AI and data analytics on the other hand. In this article, we present the concept of AI innovation labs and demonstrate a comprehensive framework, from coming up with the right ideas to incrementally implementing and evaluating them regarding their business value and their feasibility based on a company's capabilities. The concept is the result of nine years of working on data-driven innovations with companies from various domains. Furthermore, we share some lessons learned from its practical applications. Even though a lot of technical publications can be found in the literature regarding the development of AI models, and many consultancy companies provide corresponding services for building AI innovations, we found very few publications sharing details about what an end-to-end framework could look like. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 285,803 |
2305.17037 | Distributionally Robust Linear Quadratic Control | Linear-Quadratic-Gaussian (LQG) control is a fundamental control paradigm that is studied in various fields such as engineering, computer science, economics, and neuroscience. It involves controlling a system with linear dynamics and imperfect observations, subject to additive noise, with the goal of minimizing a quadratic cost function for the state and control variables. In this work, we consider a generalization of the discrete-time, finite-horizon LQG problem, where the noise distributions are unknown and belong to Wasserstein ambiguity sets centered at nominal (Gaussian) distributions. The objective is to minimize a worst-case cost across all distributions in the ambiguity set, including non-Gaussian distributions. Despite the added complexity, we prove that a control policy that is linear in the observations is optimal for this problem, as in the classic LQG problem. We propose a numerical solution method that efficiently characterizes this optimal control policy. Our method uses the Frank-Wolfe algorithm to identify the least-favorable distributions within the Wasserstein ambiguity sets and computes the controller's optimal policy using Kalman filter estimation under these distributions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 368,386 |
2110.01147 | On the Interplay Between Sparsity, Naturalness, Intelligibility, and Prosody in Speech Synthesis | Are end-to-end text-to-speech (TTS) models over-parametrized? To what extent can these models be pruned, and what happens to their synthesis capabilities? This work serves as a starting point to explore pruning both spectrogram prediction networks and vocoders. We thoroughly investigate the tradeoffs between sparsity and its subsequent effects on synthetic speech. Additionally, we explored several aspects of TTS pruning: amount of finetuning data versus sparsity, TTS-Augmentation to utilize unspoken text, and combining knowledge distillation and pruning. Our findings suggest that not only are end-to-end TTS models highly prunable, but also, perhaps surprisingly, pruned TTS models can produce synthetic speech with equal or higher naturalness and intelligibility, with similar prosody. All of our experiments are conducted on publicly available models, and findings in this work are backed by large-scale subjective tests and objective measures. Code and 200 pruned models are made available to facilitate future research on efficiency in TTS. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 258,660 |
2110.08946 | Deep Tactile Experience: Estimating Tactile Sensor Output from Depth Sensor Data | Tactile sensing is inherently contact based. To use tactile data, robots need to make contact with the surface of an object. This is inefficient in applications where an agent needs to make a decision between multiple alternatives that depend on the physical properties of the contact location. We propose a method to get tactile data in a non-invasive manner. The proposed method estimates the output of a tactile sensor from the depth data of the surface of the object based on past experiences. An experience dataset is built by allowing the robot to interact with various objects, collecting tactile data and the corresponding object surface depth data. We use the experience dataset to train a neural network to estimate the tactile output from depth data alone. We use GelSight tactile sensors, an image-based sensor, to generate images that capture detailed surface features at the contact location. We train a network with a dataset containing 578 tactile-image to depth-map correspondences. Given a depth map of the surface of an object, the network outputs an estimate of the response of the tactile sensor, should it make contact with the object. We evaluate the method with the structural similarity index measure (SSIM), a similarity metric between two images commonly used in the image processing community. We present experimental results that show the proposed method outperforms, with statistical significance, a baseline that uses random images, obtaining SSIM scores of 0.84 +/- 0.0056 and 0.80 +/- 0.0036, respectively. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 261,619 |
2410.20268 | Centaur: a foundation model of human cognition | Establishing a unified theory of cognition has been a major goal of psychology. While there have been previous attempts to instantiate such theories by building computational models, we currently do not have one model that captures the human mind in its entirety. Here we introduce Centaur, a computational model that can predict and simulate human behavior in any experiment expressible in natural language. We derived Centaur by finetuning a state-of-the-art language model on a novel, large-scale data set called Psych-101. Psych-101 reaches an unprecedented scale, covering trial-by-trial data from over 60,000 participants performing over 10,000,000 choices in 160 experiments. Centaur not only captures the behavior of held-out participants better than existing cognitive models, but also generalizes to new cover stories, structural task modifications, and entirely new domains. Furthermore, we find that the model's internal representations become more aligned with human neural activity after finetuning. Taken together, Centaur is the first real candidate for a unified model of human cognition. We anticipate that it will have a disruptive impact on the cognitive sciences, challenging the existing paradigm for developing computational models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 502,730 |
2211.15410 | Private Multi-Winner Voting for Machine Learning | Private multi-winner voting is the task of revealing $k$-hot binary vectors satisfying a bounded differential privacy (DP) guarantee. This task has been understudied in machine learning literature despite its prevalence in many domains such as healthcare. We propose three new DP multi-winner mechanisms: Binary, $\tau$, and Powerset voting. Binary voting operates independently per label through composition. $\tau$ voting bounds votes optimally in their $\ell_2$ norm for tight data-independent guarantees. Powerset voting operates over the entire binary vector by viewing the possible outcomes as a power set. Our theoretical and empirical analysis shows that Binary voting can be a competitive mechanism on many tasks unless there are strong correlations between labels, in which case Powerset voting outperforms it. We use our mechanisms to enable privacy-preserving multi-label learning in the central setting by extending the canonical single-label technique: PATE. We find that our techniques outperform current state-of-the-art approaches on large, real-world healthcare data and standard multi-label benchmarks. We further enable multi-label confidential and private collaborative (CaPC) learning and show that model performance can be significantly improved in the multi-site setting. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 333,249 |
2404.01790 | Super-Resolution Analysis for Landfill Waste Classification | Illegal landfills are a critical issue due to their environmental, economic, and public health impacts. This study leverages aerial imagery for environmental crime monitoring. While advances in artificial intelligence and computer vision hold promise, the challenge lies in training models with high-resolution literature datasets and adapting them to open-access low-resolution images. Considering the substantial quality differences and limited annotation, this research explores the adaptability of models across these domains. Motivated by the necessity for a comprehensive evaluation of waste detection algorithms, it advocates cross-domain classification and super-resolution enhancement to analyze the impact of different image resolutions on waste classification as an evaluation to combat the proliferation of illegal landfills. We observed performance improvements by enhancing image quality but noted an influence on model sensitivity, necessitating careful threshold fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 443,584 |
2412.13096 | Incremental Online Learning of Randomized Neural Network with Forward Regularization | Online learning of deep neural networks suffers from challenges such as hysteretic non-incremental updating, increasing memory usage, past retrospective retraining, and catastrophic forgetting. To alleviate these drawbacks and achieve progressive immediate decision-making, we propose a novel Incremental Online Learning (IOL) process of Randomized Neural Networks (Randomized NN), a framework facilitating continuous improvements to Randomized NN performance in restrictive online scenarios. Within the framework, we further introduce IOL with ridge regularization (-R) and IOL with forward regularization (-F). -R generates stepwise incremental updates without retrospective retraining and avoids catastrophic forgetting. Moreover, we substituted -R with -F as it enhanced precognition learning ability using semi-supervision and realized better online regrets to offline global experts compared to -R during IOL. The algorithms of IOL for Randomized NN with -R/-F on non-stationary batch streams were derived respectively, featuring recursive weight updates and variable learning rates. Additionally, we conducted a detailed analysis and theoretically derived relative cumulative regret bounds of the Randomized NN learners with -R/-F in IOL under adversarial assumptions using a novel methodology, and presented several corollaries, from which we observed the superiority of employing -F in IOL in terms of online learning acceleration and regret bounds. Finally, our proposed methods were rigorously examined across regression and classification tasks on diverse datasets, which distinctly validated the efficacy of the IOL frameworks of Randomized NN and the advantages of forward regularization. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 518,158 |
1303.5613 | Network Detection Theory and Performance | Network detection is an important capability in many areas of applied research in which data can be represented as a graph of entities and relationships. Oftentimes the object of interest is a relatively small subgraph in an enormous, potentially uninteresting background. This aspect characterizes network detection as a "big data" problem. Graph partitioning and network discovery have been major research areas over the last ten years, driven by interest in internet search, cyber security, social networks, and criminal or terrorist activities. The specific problem of network discovery is addressed as a special case of graph partitioning in which membership in a small subgraph of interest must be determined. Algebraic graph theory is used as the basis to analyze and compare different network detection methods. A new Bayesian network detection framework is introduced that partitions the graph based on prior information and direct observations. The new approach, called space-time threat propagation, is proved to maximize the probability of detection and is therefore optimum in the Neyman-Pearson sense. This optimality criterion is compared to spectral community detection approaches which divide the global graph into subsets or communities with optimal connectivity properties. We also explore a new generative stochastic model for covert networks and analyze using receiver operating characteristics the detection performance of both classes of optimal detection techniques. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 23,140 |
2311.16492 | VLPrompt: Vision-Language Prompting for Panoptic Scene Graph Generation | Panoptic Scene Graph Generation (PSG) aims at achieving a comprehensive image understanding by simultaneously segmenting objects and predicting relations among objects. However, the long-tail problem among relations leads to unsatisfactory results in real-world applications. Prior methods predominantly rely on vision information or utilize limited language information, such as object or relation names, thereby overlooking the utility of language information. Leveraging the recent progress in Large Language Models (LLMs), we propose to use language information to assist relation prediction, particularly for rare relations. To this end, we propose the Vision-Language Prompting (VLPrompt) model, which acquires vision information from images and language information from LLMs. Then, through a prompter network based on an attention mechanism, it achieves precise relation prediction. Our extensive experiments show that VLPrompt significantly outperforms previous state-of-the-art methods on the PSG dataset, proving the effectiveness of incorporating language information and alleviating the long-tail problem of relations. Code is available at \url{https://github.com/franciszzj/TP-SIS}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 410,917 |
1711.03705 | Online Deep Learning: Learning Deep Neural Networks on the Fly | Deep Neural Networks (DNNs) are typically trained by backpropagation in a batch learning setting, which requires the entire training data to be made available prior to the learning task. This is not scalable for many real-world scenarios where new data arrives sequentially in a stream form. We aim to address an open challenge of "Online Deep Learning" (ODL) for learning DNNs on the fly in an online setting. Unlike traditional online learning that often optimizes some convex objective function with respect to a shallow model (e.g., a linear/kernel-based hypothesis), ODL is significantly more challenging since the optimization of the DNN objective function is non-convex, and regular backpropagation does not work well in practice, especially for online learning settings. In this paper, we present a new online deep learning framework that attempts to tackle the challenges by learning DNN models of adaptive depth from a sequence of training data in an online learning setting. In particular, we propose a novel Hedge Backpropagation (HBP) method for online updating the parameters of DNN effectively, and validate the efficacy of our method on large-scale data sets, including both stationary and concept drifting scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 84,265 |
1901.05135 | Deep Supervised Hashing leveraging Quadratic Spherical Mutual Information for Content-based Image Retrieval | Several deep supervised hashing techniques have been proposed to allow for efficiently querying large image databases. However, deep supervised image hashing techniques are developed, to a great extent, heuristically often leading to suboptimal results. Contrary to this, we propose an efficient deep supervised hashing algorithm that optimizes the learned codes using an information-theoretic measure, the Quadratic Mutual Information (QMI). The proposed method is adapted to the needs of large-scale hashing and information retrieval leading to a novel information-theoretic measure, the Quadratic Spherical Mutual Information (QSMI). Apart from demonstrating the effectiveness of the proposed method under different scenarios and outperforming existing state-of-the-art image hashing techniques, this paper provides a structured way to model the process of information retrieval and develop novel methods adapted to the needs of each application. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 118,735 |
2108.05574 | Implicit Sparse Regularization: The Impact of Depth and Early Stopping | In this paper, we study the implicit bias of gradient descent for sparse regression. We extend results on regression with quadratic parametrization, which amounts to depth-2 diagonal linear networks, to more general depth-N networks, under more realistic settings of noise and correlated designs. We show that early stopping is crucial for gradient descent to converge to a sparse model, a phenomenon that we call implicit sparse regularization. This result is in sharp contrast to known results for noiseless and uncorrelated-design cases. We characterize the impact of depth and early stopping and show that for a general depth parameter N, gradient descent with early stopping achieves minimax optimal sparse recovery with sufficiently small initialization and step size. In particular, we show that increasing depth enlarges the scale of working initialization and the early-stopping window so that this implicit sparse regularization effect is more likely to take place. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 250,342 |
2308.07136 | Pairing interacting protein sequences using masked language modeling | Predicting which proteins interact together from amino-acid sequences is an important task. We develop a method to pair interacting protein sequences which leverages the power of protein language models trained on multiple sequence alignments, such as MSA Transformer and the EvoFormer module of AlphaFold. We formulate the problem of pairing interacting partners among the paralogs of two protein families in a differentiable way. We introduce a method called DiffPALM that solves it by exploiting the ability of MSA Transformer to fill in masked amino acids in multiple sequence alignments using the surrounding context. MSA Transformer encodes coevolution between functionally or structurally coupled amino acids. We show that it captures inter-chain coevolution, while it was trained on single-chain data, which means that it can be used out-of-distribution. Relying on MSA Transformer without fine-tuning, DiffPALM outperforms existing coevolution-based pairing methods on difficult benchmarks of shallow multiple sequence alignments extracted from ubiquitous prokaryotic protein datasets. It also outperforms an alternative method based on a state-of-the-art protein language model trained on single sequences. Paired alignments of interacting protein sequences are a crucial ingredient of supervised deep learning methods to predict the three-dimensional structure of protein complexes. DiffPALM substantially improves the structure prediction of some eukaryotic protein complexes by AlphaFold-Multimer, without significantly deteriorating any of those we tested. It also achieves performance competitive with orthology-based pairing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 385,405
2404.19315 | Modeling Orthographic Variation in Occitan's Dialects | Effectively normalizing textual data poses a considerable challenge, especially for low-resource languages lacking standardized writing systems. In this study, we fine-tuned a multilingual model with data from several Occitan dialects and conducted a series of experiments to assess the model's representations of these dialects. For evaluation purposes, we compiled a parallel lexicon encompassing four Occitan dialects. Intrinsic evaluations of the model's embeddings revealed that surface similarity between the dialects strengthened representations. When the model was further fine-tuned for part-of-speech tagging and Universal Dependency parsing, its performance was robust to dialectical variation, even when trained solely on part-of-speech data from a single dialect. Our findings suggest that large multilingual models minimize the need for spelling normalization during pre-processing. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 450,594 |
2502.04556 | TruthFlow: Truthful LLM Generation via Representation Flow Correction | Large language models (LLMs) are known to struggle with consistently generating truthful responses. While various representation intervention techniques have been proposed, these methods typically apply a universal representation correction vector to all input queries, limiting their effectiveness against diverse queries in practice. In this study, we introduce TruthFlow, a novel method that leverages the Flow Matching technique for query-specific truthful representation correction. Specifically, TruthFlow first uses a flow model to learn query-specific correction vectors that transition representations from hallucinated to truthful states. Then, during inference, the trained flow model generates these correction vectors to enhance the truthfulness of LLM outputs. Experimental results demonstrate that TruthFlow significantly improves performance on open-ended generation tasks across various advanced LLMs evaluated on TruthfulQA. Moreover, the trained TruthFlow model exhibits strong transferability, performing effectively on other unseen hallucination benchmarks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 531,213 |
1503.04864 | GeomRDF: A Geodata Converter with a Fine-Grained Structured Representation of Geometry in the Web | In recent years, with the advent of the web of data, a growing number of national mapping agencies tend to publish their geospatial data as Linked Data. However, differences between traditional GIS data models and Linked Data model can make the publication process more complicated. Besides, it may require the setting of several parameters and some expertise in semantic web technologies. In addition, the use of standards like GeoSPARQL (or ad hoc predicates) is mandatory to perform spatial queries on published geospatial data. In this paper, we present GeomRDF, a tool that helps users to convert spatial data from traditional GIS formats to RDF model easily. It generates geometries represented as GeoSPARQL WKT literal but also as structured geometries that can be exploited by using only the RDF query language, SPARQL. GeomRDF was implemented as a module in the RDF publication platform Datalift. A validation of GeomRDF has been realized against the French administrative units dataset (provided by IGN France). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 41,187
1112.1333 | Reaching an Optimal Consensus: Dynamical Systems that Compute Intersections of Convex Sets | In this paper, multi-agent systems minimizing a sum of objective functions, where each component is only known to a particular node, is considered for continuous-time dynamics with time-varying interconnection topologies. Assuming that each node can observe a convex solution set of its optimization component, and the intersection of all such sets is nonempty, the considered optimization problem is converted to an intersection computation problem. By a simple distributed control rule, the considered multi-agent system with continuous-time dynamics achieves not only a consensus, but also an optimal agreement within the optimal solution set of the overall optimization objective. Directed and bidirectional communications are studied, respectively, and connectivity conditions are given to ensure a global optimal consensus. In this way, the corresponding intersection computation problem is solved by the proposed decentralized continuous-time algorithm. We establish several important properties of the distance functions with respect to the global optimal solution set and a class of invariant sets with the help of convex and non-smooth analysis. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 13,340
2406.16972 | An Efficient NAS-based Approach for Handling Imbalanced Datasets | Class imbalance is a common issue in real-world data distributions, negatively impacting the training of accurate classifiers. Traditional approaches to mitigate this problem fall into three main categories: class re-balancing, information transfer, and representation learning. This paper introduces a novel approach to enhance performance on long-tailed datasets by optimizing the backbone architecture through neural architecture search (NAS). Our research shows that an architecture's accuracy on a balanced dataset does not reliably predict its performance on imbalanced datasets. This necessitates a complete NAS run on long-tailed datasets, which can be computationally expensive. To address this computational challenge, we focus on existing work, called IMB-NAS, which proposes efficiently adapting a NAS super-network trained on a balanced source dataset to an imbalanced target dataset. A detailed description of the fundamental techniques for IMB-NAS is provided in this paper, including NAS and architecture transfer. Among various adaptation strategies, we find that the most effective approach is to retrain the linear classification head with reweighted loss while keeping the backbone NAS super-network trained on the balanced source dataset frozen. Finally, we conducted a series of experiments on the imbalanced CIFAR dataset for performance evaluation. Our conclusions are the same as those proposed in the IMB-NAS paper. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 467,372 |
2303.06817 | Transformation-Invariant Network for Few-Shot Object Detection in Remote Sensing Images | Object detection in remote sensing images relies on a large amount of labeled data for training. However, the increasing number of new categories and class imbalance make exhaustive annotation impractical. Few-shot object detection (FSOD) addresses this issue by leveraging meta-learning on seen base classes and fine-tuning on novel classes with limited labeled samples. Nonetheless, the substantial scale and orientation variations of objects in remote sensing images pose significant challenges to existing few-shot object detection methods. To overcome these challenges, we propose integrating a feature pyramid network and utilizing prototype features to enhance query features, thereby improving existing FSOD methods. We refer to this modified FSOD approach as a Strong Baseline, which has demonstrated significant performance improvements compared to the original baselines. Furthermore, we tackle the issue of spatial misalignment caused by orientation variations between the query and support images by introducing a Transformation-Invariant Network (TINet). TINet ensures geometric invariance and explicitly aligns the features of the query and support branches, resulting in additional performance gains while maintaining the same inference speed as the Strong Baseline. Extensive experiments on three widely used remote sensing object detection datasets, i.e., NWPU VHR-10.v2, DIOR, and HRRSD demonstrated the effectiveness of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 350,996
2101.02659 | A review of robotics taxonomies in terms of form and structure | Identifying and categorizing specific robot tasks, behaviors, and resources is an essential precursor to reproducing and evaluating robotics experiments across laboratories and platforms. Without some means of capturing how one environment, platform, or behavior differs from another, we cannot begin to establish the performance impact of these changes or predict a robot's performance in a novel environment. As a first step towards experimental reproducibility, existing taxonomies in the field of robotics are reviewed and common patterns of structure and form extracted, identifying both the properties they share with traditional taxonomies and the necessary structural elements that draw from other classification and categorization systems. The diversity of taxonomy subjects and subsequent difficulty in harmonization of conceptual underpinnings is noted. Robotics taxonomies are shown to be deeply fragmented in structure and form and to require notation that can support complex relationships. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 214,697 |
2101.09526 | A Dual-arm Robot that Autonomously Lifts Up and Tumbles Heavy Plates Using Crane Pulley Blocks | This paper develops a planner that plans the action sequences and motion for a dual-arm robot to lift up and flip heavy plates using crane pulley blocks. The problem is motivated by the low payload of modern collaborative robots. Instead of directly manipulating heavy plates that collaborative robots cannot afford, the paper develops a planner for collaborative robots to operate crane pulley blocks. The planner assumes a target plate is pre-attached to the crane hook. It optimizes dual-arm action sequences and plans the robot's dual-arm motion that pulls the rope of the crane pulley blocks to lift up the plate. The crane pulley blocks reduce the payload that each robotic arm needs to bear. When the plate is lifted up to a satisfying pose, the planner plans a pushing motion for one of the robot arms to tumble over the plate while considering force and moment constraints. The article presents the technical details of the planner and several experiments and analysis carried out using a dual-arm robot made by two Universal Robots UR3 arms. The influence of various parameters and optimization goals are investigated and compared in depth. The results show that the proposed planner is flexible and efficient. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 216,626
2011.01861 | DeL-haTE: A Deep Learning Tunable Ensemble for Hate Speech Detection | Online hate speech on social media has become a fast-growing problem in recent times. Nefarious groups have developed large content delivery networks across several mainstream (Twitter and Facebook) and fringe (Gab, 4chan, 8chan, etc.) outlets to deliver cascades of hate messages directed both at individuals and communities. Thus addressing these issues has become a top priority for large-scale social media outlets. Three key challenges in automated detection and classification of hateful content are the lack of clearly labeled data, evolving vocabulary and lexicon - hashtags, emojis, etc. - and the lack of baseline models for fringe outlets such as Gab. In this work, we propose a novel framework with three major contributions. (a) We engineer an ensemble of deep learning models that combines the strengths of state-of-the-art approaches, (b) we incorporate a tuning factor into this framework that leverages transfer learning to conduct automated hate speech classification on unlabeled datasets, like Gab, and (c) we develop a weak supervised learning methodology that allows our framework to train on unlabeled data. Our ensemble models achieve an 83% hate recall on the HON dataset, surpassing the performance of the state-of-the-art deep models. We demonstrate that weak supervised training in combination with classifier tuning significantly increases model performance on unlabeled data from Gab, achieving a hate recall of 67%. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 204,737
2306.15971 | Reconstructing the Hemodynamic Response Function via a Bimodal Transformer | The relationship between blood flow and neuronal activity is widely recognized, with blood flow frequently serving as a surrogate for neuronal activity in fMRI studies. At the microscopic level, neuronal activity has been shown to influence blood flow in nearby blood vessels. This study introduces the first predictive model that addresses this issue directly at the explicit neuronal population level. Using in vivo recordings in awake mice, we employ a novel spatiotemporal bimodal transformer architecture to infer current blood flow based on both historical blood flow and ongoing spontaneous neuronal activity. Our findings indicate that incorporating neuronal activity significantly enhances the model's ability to predict blood flow values. Through analysis of the model's behavior, we propose hypotheses regarding the largely unexplored nature of the hemodynamic response to neuronal activity. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 376,230
1907.05164 | Disease classification of macular Optical Coherence Tomography scans using deep learning software: validation on independent, multi-centre data | Purpose: To evaluate Pegasus-OCT, a clinical decision support software for the identification of features of retinal disease from macula OCT scans, across heterogenous populations involving varying patient demographics, device manufacturers, acquisition sites and operators. Methods: 5,588 normal and anomalous macular OCT volumes (162,721 B-scans), acquired at independent centres in five countries, were processed using the software. Results were evaluated against ground truth provided by the dataset owners. Results: Pegasus-OCT performed with AUROCs of at least 98% for all datasets in the detection of general macular anomalies. For scans of sufficient quality, the AUROCs for general AMD and DME detection were found to be at least 99% and 98%, respectively. Conclusions: The ability of a clinical decision support system to cater for different populations is key to its adoption. Pegasus-OCT was shown to be able to detect AMD, DME and general anomalies in OCT volumes acquired across multiple independent sites with high performance. Its use thus offers substantial promise, with the potential to alleviate the burden of growing demand in eye care services caused by retinal disease. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 138,289
2104.01569 | Conversational Question Answering over Knowledge Graphs with Transformer and Graph Attention Networks | This paper addresses the task of (complex) conversational question answering over a knowledge graph. For this task, we propose LASAGNE (muLti-task semAntic parSing with trAnsformer and Graph atteNtion nEtworks). It is the first approach, which employs a transformer architecture extended with Graph Attention Networks for multi-task neural semantic parsing. LASAGNE uses a transformer model for generating the base logical forms, while the Graph Attention model is used to exploit correlations between (entity) types and predicates to produce node representations. LASAGNE also includes a novel entity recognition module which detects, links, and ranks all relevant entities in the question context. We evaluate LASAGNE on a standard dataset for complex sequential question answering, on which it outperforms existing baseline averages on all question types. Specifically, we show that LASAGNE improves the F1-score on eight out of ten question types; in some cases, the increase in F1-score is more than 20% compared to the state of the art. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 228,400
2101.09973 | Approximating Probability Distributions by ReLU Networks | How many neurons are needed to approximate a target probability distribution using a neural network with a given input distribution and approximation error? This paper examines this question for the case when the input distribution is uniform, and the target distribution belongs to the class of histogram distributions. We obtain a new upper bound on the number of required neurons, which is strictly better than previously existing upper bounds. The key ingredient in this improvement is an efficient construction of the neural nets representing piecewise linear functions. We also obtain a lower bound on the minimum number of neurons needed to approximate the histogram distributions. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 216,774 |
2307.16693 | AisLSM: Revolutionizing the Compaction with Asynchronous I/Os for LSM-tree | The log-structured merge tree (LSM-tree) is widely employed to build key-value (KV) stores. LSM-tree organizes multiple levels in memory and on disk. The compaction of LSM-tree, which is used to redeploy KV pairs between on-disk levels in the form of SST files, severely stalls its foreground service. We overhaul and analyze the procedure of compaction. Writing and persisting files with fsyncs for compacted KV pairs are time-consuming and, more important, occur synchronously on the critical path of compaction. The user-space compaction thread of LSM-tree stays waiting for completion signals from a kernel-space thread that is processing file write and fsync I/Os. We accordingly design a new LSM-tree variant named AisLSM with an asynchronous I/O model. In short, AisLSM conducts asynchronous writes and fsyncs for SST files generated in a compaction and overlaps CPU computations with disk I/Os for consecutive compactions. AisLSM tracks the generation dependency between input and output files for each compaction and utilizes a deferred check-up strategy to ensure the durability of compacted KV pairs. We prototype AisLSM with RocksDB and io_uring. Experiments show that AisLSM boosts the performance of RocksDB by up to 2.14x, without losing data accessibility and consistency. It also outperforms state-of-the-art LSM-tree variants with significantly higher throughput and lower tail latency. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 382,699
2402.09154 | Attacking Large Language Models with Projected Gradient Descent | Current LLM alignment methods are readily broken through specifically crafted adversarial prompts. While crafting adversarial prompts using discrete optimization is highly effective, such attacks typically use more than 100,000 LLM calls. This high computational cost makes them unsuitable for, e.g., quantitative analyses and adversarial training. To remedy this, we revisit Projected Gradient Descent (PGD) on the continuously relaxed input prompt. Although previous attempts with ordinary gradient-based attacks largely failed, we show that carefully controlling the error introduced by the continuous relaxation tremendously boosts their efficacy. Our PGD for LLMs is up to one order of magnitude faster than state-of-the-art discrete optimization to achieve the same devastating attack results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,393 |
2203.00649 | Lodestar: An Integrated Embedded Real-Time Control Engine | In this work we present Lodestar, an integrated engine for rapid real-time control system development. Using a functional block diagram paradigm, Lodestar allows for complex multi-disciplinary control software design, while automatically resolving execution order, circular data-dependencies, and networking. In particular, Lodestar presents a unified set of control, signal processing, and computer vision routines to users, which may be interfaced with external hardware and software packages using interoperable user-defined wrappers. Lodestar allows for user-defined block diagrams to be directly executed, or for them to be translated to overhead-free source code for integration in other programs. We demonstrate how our framework departs from approaches used in state-of-the-art simulation frameworks to enable real-time performance, and compare its capabilities to existing solutions in the realm of control software. To demonstrate the utility of Lodestar in real-time control systems design, we have applied Lodestar to implement two real-time torque-based controller for a robotic arm. In addition, we have developed a novel autofocus algorithm for use in thermography-based localization and parameter estimation in electrosurgery and other areas of robot-assisted surgery. We compare our algorithm design approach in Lodestar to a classical ground-up approach, showing that Lodestar considerably eases the design process. We also show how Lodestar can seamlessly interface with existing simulation and networking framework in a number of simulation examples. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 283,077 |
1304.7948 | Convolutional Neural Networks learn compact local image descriptors | A standard deep convolutional neural network paired with a suitable loss function learns compact local image descriptors that perform comparably to state-of-the art approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 24,298 |
2403.15412 | Towards Measuring and Modeling "Culture" in LLMs: A Survey | We present a survey of more than 90 recent papers that aim to study cultural representation and inclusion in large language models (LLMs). We observe that none of the studies explicitly define "culture", which is a complex, multifaceted concept; instead, they probe the models on some specially designed datasets which represent certain aspects of "culture". We call these aspects the proxies of culture, and organize them across two dimensions of demographic and semantic proxies. We also categorize the probing methods employed. Our analysis indicates that only certain aspects of "culture," such as values and objectives, have been studied, leaving several other interesting and important facets, especially the multitude of semantic domains (Thompson et al., 2020) and aboutness (Hershcovich et al., 2022), unexplored. Two other crucial gaps are the lack of robustness of probing techniques and situated studies on the impact of cultural mis- and under-representation in LLM-based applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | 440,542
1506.05608 | Cooperative Control in Production and Logistics | Classical applications of control engineering and information and communication technology (ICT) in production and logistics are often done in a rigid, centralized and hierarchical way. These inflexible approaches are typically not able to cope with the complexities of the manufacturing environment, such as the instabilities, uncertainties and abrupt changes caused by internal and external disturbances, or a large number and variety of interacting, interdependent elements. A paradigm shift, e.g., novel organizing principles and methods, is needed for supporting the interoperability of dynamic alliances of agile and networked systems. Several solution proposals argue that the future of manufacturing and logistics lies in network-like, dynamic, open and reconfigurable systems of cooperative autonomous entities. The paper overviews various distributed approaches and technologies of control engineering and ICT that can support the realization of cooperative structures from the resource level to the level of networked enterprises. Standard results as well as recent advances from control theory, through cooperative game theory, distributed machine learning to holonic systems, cooperative enterprise modelling, system integration, and autonomous logistics processes are surveyed. A special emphasis is put on the theoretical developments and industrial applications of Robustly Feasible Model Predictive Control (RFMPC). Two case studies are also discussed: i) a holonic, PROSA-based approach to generate short-term forecasts for an additive manufacturing system by means of a delegate multi-agent system (D-MAS); and ii) an application of distributed RFMPC to a drinking water distribution system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 44,320 |
1809.01808 | Security Metrics of Networked Control Systems under Sensor Attacks (extended preprint) | As more attention is paid to security in the context of control systems and as attacks occur to real control systems throughout the world, it has become clear that some of the most nefarious attacks are those that evade detection. The term stealthy has come to encompass a variety of techniques that attackers can employ to avoid being detected. In this manuscript, for a class of perturbed linear time-invariant systems, we propose two security metrics to quantify the potential impact that stealthy attacks could have on the system dynamics by tampering with sensor measurements. We provide analysis mathematical tools (in terms of linear matrix inequalities) to quantify these metrics for given system dynamics, control structure, system monitor, and set of sensors being attacked. Then, we provide synthesis tools (in terms of semidefinite programs) to redesign controllers and monitors such that the impact of stealthy attacks is minimized and the required attack-free system performance is guaranteed. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 106,895
2111.03945 | Towards Building ASR Systems for the Next Billion Users | Recent methods in speech and language technology pretrain very LARGE models which are fine-tuned for specific tasks. However, the benefits of such LARGE models are often limited to a few resource rich languages of the world. In this work, we make multiple contributions towards building ASR systems for low resource languages from the Indian subcontinent. First, we curate 17,000 hours of raw speech data for 40 Indian languages from a wide variety of domains including education, news, technology, and finance. Second, using this raw speech data we pretrain several variants of wav2vec style models for 40 Indian languages. Third, we analyze the pretrained models to find key features: codebook vectors of similar sounding phonemes are shared across languages, representations across layers are discriminative of the language family, and attention heads often pay attention within small local windows. Fourth, we fine-tune this model for downstream ASR for 9 languages and obtain state-of-the-art results on 3 public datasets, including on very low-resource languages such as Sinhala and Nepali. Our work establishes that multilingual pretraining is an effective strategy for building ASR systems for the linguistically diverse speakers of the Indian subcontinent. Our code, data and models are available publicly at https://indicnlp.ai4bharat.org/indicwav2vec/ and we hope they will help advance research in ASR for Indic languages. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 265,326 |
2205.03401 | The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning | Does prompting a large language model (LLM) like GPT-3 with explanations improve in-context learning? We study this question on two NLP tasks that involve reasoning over text, namely question answering and natural language inference. We test the performance of four LLMs on three textual reasoning datasets using prompts that include explanations in multiple different styles. For these tasks, we find that including explanations in the prompts for OPT, GPT-3 (davinci), and InstructGPT (text-davinci-001) only yields small to moderate accuracy improvements over standard few-shot learning. However, text-davinci-002 is able to benefit more substantially. We further show that explanations generated by the LLMs may not entail the models' predictions nor be factually grounded in the input, even on simple tasks with extractive explanations. However, these flawed explanations can still be useful as a way to verify LLMs' predictions post-hoc. Through analysis in our three settings, we show that explanations judged by humans to be good--logically consistent with the input and the prediction--more likely cooccur with accurate predictions. Following these observations, we train calibrators using automatically extracted scores that assess the reliability of explanations, allowing us to improve performance post-hoc across all of our datasets. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 295,273
2103.04423 | When Being Soft Makes You Tough: A Collision-Resilient Quadcopter Inspired by Arthropods' Exoskeletons | Flying robots are usually rather delicate and require protective enclosures when facing the risk of collision, while high complexity and reduced payload are recurrent problems with collision-resilient flying robots. Inspired by arthropods' exoskeletons, we design a simple, open source, easily manufactured, semi-rigid structure with soft joints that can withstand high-velocity impacts. With an exoskeleton, the protective shell becomes part of the main robot structure, thereby minimizing its loss in payload capacity. Our design is simple to build and customize using cheap components (e.g. bamboo skewers) and consumer-grade 3D printers. The result is CogniFly, a sub-250g autonomous quadcopter that survives multiple collisions at speeds up to 7m/s. In addition to its collision-resiliency, CogniFly is easy to program using Python or Buzz, carries sensors that allow it to fly for approx. 17min without the need of GPS or an external motion capture system, has enough computing power to run deep neural network models on-board and was designed to facilitate integration with an automated battery swapping system. This structure becomes an ideal platform for high-risk activities (such as flying in a cluttered environment or reinforcement learning training) by dramatically reducing the risks of damaging its own hardware or the environment. Source code, 3D files, instructions and videos are available through the project's website (https://thecognifly.github.io). | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 223,632 |
1802.03098 | Tracking Noisy Targets: A Review of Recent Object Tracking Approaches | Visual object tracking is an important computer vision problem with numerous real-world applications including human-computer interaction, autonomous vehicles, robotics, motion-based recognition, video indexing, surveillance and security. In this paper, we aim to extensively review the latest trends and advances in the tracking algorithms and evaluate the robustness of trackers in the presence of noise. The first part of this work comprises a comprehensive survey of recently proposed tracking algorithms. We broadly categorize trackers into correlation filter based trackers and the others as non-correlation filter trackers. Each category is further classified into various types of trackers based on the architecture of the tracking mechanism. In the second part of this work, we experimentally evaluate tracking algorithms for robustness in the presence of additive white Gaussian noise. Multiple levels of additive noise are added to the Object Tracking Benchmark (OTB) 2015, and the precision and success rates of the tracking algorithms are evaluated. Some algorithms suffered more performance degradation than others, which brings to light a previously unexplored aspect of the tracking algorithms. The relative rank of the algorithms based on their performance on benchmark datasets may change in the presence of noise. Our study concludes that no single tracker is able to achieve the same efficiency in the presence of noise as under noise-free conditions; thus, there is a need to include a parameter for robustness to noise when evaluating newly proposed tracking algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,895 |
1703.10138 | How does propaganda influence the opinion dynamics of a population? | The evolution of opinions in a population of individuals who constantly interact with a common source of user-generated content (i.e. the internet) and are also subject to propaganda is analyzed using computer simulations. The model is based on the bounded confidence approach. In the absence of propaganda, computer simulations show that the online population as a whole is either fragmented, polarized or in perfect harmony on a certain issue or ideology depending on the uncertainty of individuals in accepting opinions not closer to theirs. On applying the model to simulate radicalization, a proportion of the online population, subject to extremist propaganda radicalize depending on their pre-conceived opinions and opinion uncertainty. It is found that an optimal counter propaganda that prevents radicalization is not necessarily centrist. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 70,868 |
2311.06551 | FDNet: Feature Decoupled Segmentation Network for Tooth CBCT Image | Precise Tooth Cone Beam Computed Tomography (CBCT) image segmentation is crucial for orthodontic treatment planning. In this paper, we propose FDNet, a Feature Decoupled Segmentation Network, to excel in the face of the variable dental conditions encountered in CBCT scans, such as complex artifacts and indistinct tooth boundaries. The Low-Frequency Wavelet Transform (LF-Wavelet) is employed to enrich the semantic content by emphasizing the global structural integrity of the teeth, while the SAM encoder is leveraged to refine the boundary delineation, thus improving the contrast between adjacent dental structures. By integrating these dual aspects, FDNet adeptly addresses the semantic gap, providing a detailed and accurate segmentation. The framework's effectiveness is validated through rigorous benchmarks, achieving the top Dice and IoU scores of 85.28% and 75.23%, respectively. This innovative decoupling of semantic and boundary features capitalizes on the unique strengths of each element to elevate the quality of segmentation performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 406,984 |
cs/0311026 | Great Expectations. Part I: On the Customizability of Generalized Expected Utility | We propose a generalization of expected utility that we call generalized EU (GEU), where a decision maker's beliefs are represented by plausibility measures, and the decision maker's tastes are represented by general (i.e., not necessarily real-valued) utility functions. We show that every agent, "rational" or not, can be modeled as a GEU maximizer. We then show that we can customize GEU by selectively imposing just the constraints we want. In particular, we show how each of Savage's postulates corresponds to constraints on GEU. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 538,031 |
2302.10840 | Valid Inference for Machine Learning Model Parameters | The parameters of a machine learning model are typically learned by minimizing a loss function on a set of training data. However, this can come with the risk of overtraining; in order for the model to generalize well, it is of great importance that we are able to find the optimal parameter for the model on the entire population -- not only on the given training sample. In this paper, we construct valid confidence sets for this optimal parameter of a machine learning model, which can be generated using only the training data without any knowledge of the population. We then show that studying the distribution of this confidence set allows us to assign a notion of confidence to arbitrary regions of the parameter space, and we demonstrate that this distribution can be well-approximated using bootstrapping techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 346,970 |
2206.10919 | Comparing Formulaic Language in Human and Machine Translation: Insight from a Parliamentary Corpus | A recent study has shown that, compared to human translations, neural machine translations contain more strongly-associated formulaic sequences made of relatively high-frequency words, but far less strongly-associated formulaic sequences made of relatively rare words. These results were obtained on the basis of translations of quality newspaper articles in which human translations can be thought to be not very literal. The present study attempts to replicate this research using a parliamentary corpus. The texts were translated from French to English by three well-known neural machine translation systems: DeepL, Google Translate and Microsoft Translator. The results confirm the observations on the news corpus, but the differences are less strong. They suggest that the use of text genres that usually result in more literal translations, such as parliamentary corpora, might be preferable when comparing human and machine translations. Regarding the differences between the three neural machine systems, it appears that Google translations contain fewer highly collocational bigrams, identified by the CollGram technique, than DeepL and Microsoft translations. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 304,086 |
2212.10380 | What Are You Token About? Dense Retrieval as Distributions Over the Vocabulary | Dual encoders are now the dominant architecture for dense retrieval. Yet, we have little understanding of how they represent text, and why this leads to good performance. In this work, we shed light on this question via distributions over the vocabulary. We propose to interpret the vector representations produced by dual encoders by projecting them into the model's vocabulary space. We show that the resulting projections contain rich semantic information, and draw connection between them and sparse retrieval. We find that this view can offer an explanation for some of the failure cases of dense retrievers. For example, we observe that the inability of models to handle tail entities is correlated with a tendency of the token distributions to forget some of the tokens of those entities. We leverage this insight and propose a simple way to enrich query and passage representations with lexical information at inference time, and show that this significantly improves performance compared to the original model in zero-shot settings, and specifically on the BEIR benchmark. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 337,432 |
1309.4429 | Comsol Simulations of Cracking in Point Loaded Masonry with Randomly Distributed Material Properties | This paper describes COMSOL simulations of the stress and crack development in the area where a masonry wall supports a floor. In these simulations one of the main material properties of calcium silicate, its E-value, was assigned randomly to the finite elements of the modeled specimen. Calcium silicate is a frequently used building material with a relatively brittle fracture characteristic. Its initial E-value varies, as well as tensile strength and post peak behavior. Therefore, in the simulation, initial E-values were randomly assigned to the elements of the model and a step function used for describing the descending branch. The method also allows for variation in strength to be taken into account in future research. The performed non-linear simulation results are compared with experimental findings. They show the stress distribution and cracking behavior in point loaded masonry when varying material properties are used. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 27,104 |
2210.15759 | Self-supervised language learning from raw audio: Lessons from the Zero Resource Speech Challenge | Recent progress in self-supervised or unsupervised machine learning has opened the possibility of building a full speech processing system from raw audio without using any textual representations or expert labels such as phonemes, dictionaries or parse trees. The contribution of the Zero Resource Speech Challenge series since 2015 has been to break down this long-term objective into four well-defined tasks -- Acoustic Unit Discovery, Spoken Term Discovery, Discrete Resynthesis, and Spoken Language Modeling -- and introduce associated metrics and benchmarks enabling model comparison and cumulative progress. We present an overview of the six editions of this challenge series since 2015, discuss the lessons learned, and outline the areas which need more work or give puzzling results. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 327,068 |
2003.09592 | Privacy-Preserving News Recommendation Model Learning | News recommendation aims to display news articles to users based on their personal interest. Existing news recommendation methods rely on centralized storage of user behavior data for model training, which may lead to privacy concerns and risks due to the privacy-sensitive nature of user behaviors. In this paper, we propose a privacy-preserving method for news recommendation model training based on federated learning, where the user behavior data is locally stored on user devices. Our method can leverage the useful information in the behaviors of a massive number of users to train accurate news recommendation models and meanwhile remove the need of centralized storage of them. More specifically, on each user device we keep a local copy of the news recommendation model, and compute gradients of the local model based on the user behaviors in this device. The local gradients from a group of randomly selected users are uploaded to server, which are further aggregated to update the global model in the server. Since the model gradients may contain some implicit private information, we apply local differential privacy (LDP) to them before uploading for better privacy protection. The updated global model is then distributed to each user device for local model update. We repeat this process for multiple rounds. Extensive experiments on a real-world dataset show the effectiveness of our method in news recommendation model training with privacy protection. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 169,092 |
1807.04551 | A Constrained Randomized Shortest-Paths Framework for Optimal Exploration | The present work extends the randomized shortest-paths framework (RSP), interpolating between shortest-path and random-walk routing in a network, in three directions. First, it shows how to deal with equality constraints on a subset of transition probabilities and develops a generic algorithm for solving this constrained RSP problem using Lagrangian duality. Second, it derives a surprisingly simple iterative procedure to compute the optimal, randomized, routing policy generalizing the previously developed "soft" Bellman-Ford algorithm. The resulting algorithm allows balancing exploitation and exploration in an optimal way by interpolating between a pure random behavior and the deterministic, optimal, policy (least-cost paths) while satisfying the constraints. Finally, the two algorithms are applied to Markov decision problems by considering the process as a constrained RSP on a bipartite state-action graph. In this context, the derived "soft" value iteration algorithm appears to be closely related to dynamic policy programming as well as Kullback-Leibler and path integral control, and similar to a recently introduced reinforcement learning exploration strategy. This shows that this strategy is optimal in the RSP sense - it minimizes expected path cost subject to relative entropy constraint. Simulation results on illustrative examples show that the model behaves as expected. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 102,744 |
1309.1952 | A Clustering Approach to Learn Sparsely-Used Overcomplete Dictionaries | We consider the problem of learning overcomplete dictionaries in the context of sparse coding, where each sample selects a sparse subset of dictionary elements. Our main result is a strategy to approximately recover the unknown dictionary using an efficient algorithm. Our algorithm is a clustering-style procedure, where each cluster is used to estimate a dictionary element. The resulting solution can often be further cleaned up to obtain a high accuracy estimate, and we provide one simple scenario where $\ell_1$-regularized regression can be used for such a second stage. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 26,910 |
2206.12931 | Annotated Speech Corpus for Low Resource Indian Languages: Awadhi, Bhojpuri, Braj and Magahi | In this paper we discuss an in-progress work on the development of a speech corpus for four low-resource Indo-Aryan languages -- Awadhi, Bhojpuri, Braj and Magahi using the field methods of linguistic data collection. The total size of the corpus currently stands at approximately 18 hours (approx. 4-5 hours each language) and it is transcribed and annotated with grammatical information such as part-of-speech tags, morphological features and Universal dependency relationships. We discuss our methodology for data collection in these languages, most of which was done in the middle of the COVID-19 pandemic, with one of the aims being to generate some additional income for low-income groups speaking these languages. In the paper, we also discuss the results of the baseline experiments for automatic speech recognition system in these languages. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 304,784 |
2311.05698 | Mirasol3B: A Multimodal Autoregressive model for time-aligned and contextual modalities | One of the main challenges of multimodal learning is the need to combine heterogeneous modalities (e.g., video, audio, text). For example, video and audio are obtained at much higher rates than text and are roughly aligned in time. They are often not synchronized with text, which comes as a global context, e.g., a title, or a description. Furthermore, video and audio inputs are of much larger volumes, and grow as the video length increases, which naturally requires more compute dedicated to these modalities and makes modeling of long-range dependencies harder. We here decouple the multimodal modeling, dividing it into separate, focused autoregressive models, processing the inputs according to the characteristics of the modalities. We propose a multimodal model, called Mirasol3B, consisting of an autoregressive component for the time-synchronized modalities (audio and video), and an autoregressive component for the context modalities which are not necessarily aligned in time but are still sequential. To address the long sequences of the video-audio inputs, we propose to further partition the video and audio sequences in consecutive snippets and autoregressively process their representations. To that end, we propose a Combiner mechanism, which models the audio-video information jointly within a timeframe. The Combiner learns to extract audio and video features from raw spatio-temporal signals, and then learns to fuse these features producing compact but expressive representations per snippet. Our approach achieves the state-of-the-art on well established multimodal benchmarks, outperforming much larger models. It effectively addresses the high computational demand of media inputs by both learning compact representations, controlling the sequence length of the audio-video feature representations, and modeling their dependencies in time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 406,668 |
2403.04236 | Regularized DeepIV with Model Selection | In this paper, we study nonparametric estimation of instrumental variable (IV) regressions. While recent advancements in machine learning have introduced flexible methods for IV estimation, they often encounter one or more of the following limitations: (1) restricting the IV regression to be uniquely identified; (2) requiring minimax computation oracle, which is highly unstable in practice; (3) absence of model selection procedure. In this paper, we present the first method and analysis that can avoid all three limitations, while still enabling general function approximation. Specifically, we propose a minimax-oracle-free method called Regularized DeepIV (RDIV) regression that can converge to the least-norm IV solution. Our method consists of two stages: first, we learn the conditional distribution of covariates, and by utilizing the learned distribution, we learn the estimator by minimizing a Tikhonov-regularized loss function. We further show that our method allows model selection procedures that can achieve the oracle rates in the misspecified regime. When extended to an iterative estimator, our method matches the current state-of-the-art convergence rate. Our method is a Tikhonov regularized variant of the popular DeepIV method with a non-parametric MLE first-stage estimator, and our results provide the first rigorous guarantees for this empirically used method, showcasing the importance of regularization which was absent from the original work. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 435,520 |
2006.06581 | Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula) | There has been a recent surge of interest in the study of asymptotic reconstruction performance in various cases of generalized linear estimation problems in the teacher-student setting, especially for the case of i.i.d standard normal matrices. Here, we go beyond these matrices, and prove an analytical formula for the reconstruction performance of convex generalized linear models with rotationally-invariant data matrices with arbitrary bounded spectrum, rigorously confirming, under suitable assumptions, a conjecture originally derived using the replica method from statistical physics. The proof is achieved by leveraging on message passing algorithms and the statistical properties of their iterates, allowing to characterize the asymptotic empirical distribution of the estimator. For sufficiently strongly convex problems, we show that the two-layer vector approximate message passing algorithm (2-MLVAMP) converges, where the convergence analysis is done by checking the stability of an equivalent dynamical system, which gives the result for such problems. We then show that, under a concentration assumption, an analytical continuation may be carried out to extend the result to convex (non-strongly) problems. We illustrate our claim with numerical examples on mainstream learning methods such as sparse logistic regression and linear support vector classifiers, showing excellent agreement between moderate size simulation and the asymptotic prediction. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 181,481 |
2005.02162 | Global Wheat Head Detection (GWHD) dataset: a large and diverse dataset of high resolution RGB labelled images to develop and benchmark wheat head detection methods | Detection of wheat heads is an important task allowing to estimate pertinent traits including head population density and head characteristics such as sanitary state, size, maturity stage and the presence of awns. Several studies developed methods for wheat head detection from high-resolution RGB imagery. They are based on computer vision and machine learning and are generally calibrated and validated on limited datasets. However, variability in observational conditions, genotypic differences, development stages, head orientation represents a challenge in computer vision. Further, possible blurring due to motion or wind and overlap between heads for dense populations make this task even more complex. Through a joint international collaborative effort, we have built a large, diverse and well-labelled dataset, the Global Wheat Head detection (GWHD) dataset. It contains 4,700 high-resolution RGB images and 190,000 labelled wheat heads collected from several countries around the world at different growth stages with a wide range of genotypes. Guidelines for image acquisition, associating minimum metadata to respect FAIR principles and consistent head labelling methods are proposed when developing new head detection datasets. The GWHD is publicly available at http://www.global-wheat.com/ and aimed at developing and benchmarking methods for wheat head detection. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 175,794 |
1908.04036 | Fundamental Limits of Wireless Caching under Uneven-Capacity Channels | This work identifies the fundamental limits of cache-aided coded multicasting in the presence of the well-known `worst-user' bottleneck. This stems from the presence of receiving users with uneven channel capacities, which often forces the rate of transmission of each multicasting message to be reduced to that of the slowest user. This bottleneck, which can be detrimental in general wireless broadcast settings, motivates the analysis of coded caching over a standard Single-Input-Single-Output (SISO) Broadcast Channel (BC) with K cache-aided receivers, each with a generally different channel capacity. For this setting, we design a communication algorithm that is based on superposition coding that capitalizes on the realization that the user with the worst channel may not be the real bottleneck of communication. We then proceed to provide a converse that shows the algorithm to be near optimal, identifying the fundamental limits of this setting within a multiplicative factor of 4. Interestingly, the result reveals that, even if several users are experiencing channels with reduced capacity, the system can achieve the same optimal delivery time that would be achievable if all users enjoyed maximal capacity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 141,394 |
1904.05353 | Big Data Quality: A systematic literature review and future research directions | One of the most significant problems of Big Data is to extract knowledge through the huge amount of data. The usefulness of the extracted information depends strongly on data quality. In addition to the importance, data quality has recently been taken into consideration by the big data community and there is not any comprehensive review conducted in this area. Therefore, the purpose of this study is to review and present the state of the art on the quality of big data research through a hierarchical framework. The dimensions of the proposed framework cover various aspects in the quality assessment of Big Data including 1) the processing types of big data, i.e. stream, batch, and hybrid, 2) the main task, and 3) the method used to conduct the task. We compare and critically review all of the studies reported during the last ten years through our proposed framework to identify which of the available data quality assessment methods have been successfully adopted by the big data community. Finally, we provide a critical discussion on the limitations of existing methods and offer suggestions on potential valuable research directions that can be taken in future research in this domain. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 127,292 |
2405.11912 | ARAIDA: Analogical Reasoning-Augmented Interactive Data Annotation | Human annotation is a time-consuming task that requires a significant amount of effort. To address this issue, interactive data annotation utilizes an annotation model to provide suggestions for humans to approve or correct. However, annotation models trained with limited labeled data are prone to generating incorrect suggestions, leading to extra human correction effort. To tackle this challenge, we propose Araida, an analogical reasoning-based approach that enhances automatic annotation accuracy in the interactive data annotation setting and reduces the need for human corrections. Araida involves an error-aware integration strategy that dynamically coordinates an annotation model and a k-nearest neighbors (KNN) model, giving more importance to KNN's predictions when predictions from the annotation model are deemed inaccurate. Empirical studies demonstrate that Araida is adaptable to different annotation tasks and models. On average, it reduces human correction labor by 11.02% compared to vanilla interactive data annotation methods. | true | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 455,340 |
2107.13191 | Neural Network Approximation of Refinable Functions | In the desire to quantify the success of neural networks in deep learning and other applications, there is a great interest in understanding which functions are efficiently approximated by the outputs of neural networks. By now, there exists a variety of results which show that a wide range of functions can be approximated with sometimes surprising accuracy by these outputs. For example, it is known that the set of functions that can be approximated with exponential accuracy (in terms of the number of parameters used) includes, on one hand, very smooth functions such as polynomials and analytic functions (see e.g. [E, S, Y]) and, on the other hand, very rough functions such as the Weierstrass function (see e.g. [EPGB, DDFHP]), which is nowhere differentiable. In this paper, we add to the latter class of rough functions by showing that it also includes refinable functions. Namely, we show that refinable functions are approximated by the outputs of deep ReLU networks with a fixed width and increasing depth with accuracy exponential in terms of their number of parameters. Our results apply to functions used in the standard construction of wavelets as well as to functions constructed via subdivision algorithms in Computer Aided Geometric Design. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,131 |
1304.6470 | Low-Complexity Lattice Reduction-Aided Channel Inversion Methods for Large-Dimensional Multi-User MIMO Systems | Low-complexity precoding algorithms are proposed in this work to reduce the computational complexity and improve the performance of regularized block diagonalization (RBD) based precoding schemes for large multi-user MIMO (MU-MIMO) systems. The proposed algorithms are based on a channel inversion technique, QR decompositions, and lattice reductions to decouple the MU-MIMO channel into equivalent SU-MIMO channels. Simulation results show that the proposed precoding algorithms can achieve almost the same sum-rate performance as RBD precoding, substantial bit error rate (BER) performance gains, and a simplified receiver structure, while requiring a lower complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 24,171 |
2110.06532 | Finding Materialized Models for Model Reuse | Materialized model query aims to find the most appropriate materialized model as the initial model for model reuse. It is the precondition of model reuse, and has recently attracted much attention. Nonetheless, the existing methods suffer from the need to provide source data, limited range of applications, and inefficiency since they do not construct a suitable metric to measure the target-related knowledge of materialized models. To address this, we present \textsf{MMQ}, a source-data free, general, efficient, and effective materialized model query framework. It uses a Gaussian mixture-based metric called separation degree to rank materialized models. For each materialized model, \textsf{MMQ} first vectorizes the samples in the target dataset into probability vectors by directly applying this model, then utilizes Gaussian distribution to fit for each class of probability vectors, and finally uses separation degree on the Gaussian distributions to measure the target-related knowledge of the materialized model. Moreover, we propose an improved \textsf{MMQ} (\textsf{I-MMQ}), which significantly reduces the query time while retaining the query performance of \textsf{MMQ}. Extensive experiments on a range of practical model reuse workloads demonstrate the effectiveness and efficiency of \textsf{MMQ}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 260,656 |
2410.08564 | Similar Phrases for Cause of Actions of Civil Cases | In the Taiwanese judicial system, Cause of Actions (COAs) are essential for identifying relevant legal judgments. However, the lack of standardized COA labeling creates challenges in filtering cases using basic methods. This research addresses this issue by leveraging embedding and clustering techniques to analyze the similarity between COAs based on cited legal articles. The study implements various similarity measures, including Dice coefficient and Pearson's correlation coefficient. An ensemble model combines rankings, and social network analysis identifies clusters of related COAs. This approach enhances legal analysis by revealing inconspicuous connections between COAs, offering potential applications in legal research beyond civil law. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 497,171 |
2302.08220 | Dialogue State Distillation Network with Inter-slot Contrastive Learning
for Dialogue State Tracking | In task-oriented dialogue systems, Dialogue State Tracking (DST) aims to extract users' intentions from the dialogue history. Currently, most existing approaches suffer from error propagation and are unable to dynamically select relevant information when utilizing previous dialogue states. Moreover, the relations between the updates of different slots provide vital clues for DST. However, the existing approaches rely only on predefined graphs to indirectly capture the relations. In this paper, we propose a Dialogue State Distillation Network (DSDN) to utilize relevant information of previous dialogue states and mitigate the gap in utilization between training and testing. Thus, it can dynamically exploit previous dialogue states and avoid introducing error propagation simultaneously. Further, we propose an inter-slot contrastive learning loss to effectively capture the slot co-update relations from dialogue context. Experiments are conducted on the widely used MultiWOZ 2.0 and MultiWOZ 2.1 datasets. The experimental results show that our proposed model achieves the state-of-the-art performance for DST. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 345,989 |
1502.02476 | An Infinite Restricted Boltzmann Machine | We present a mathematical construction for the restricted Boltzmann machine (RBM) that doesn't require specifying the number of hidden units. In fact, the hidden layer size is adaptive and can grow during training. This is obtained by first extending the RBM to be sensitive to the ordering of its hidden units. Then, thanks to a carefully chosen definition of the energy function, we show that the limit of infinitely many hidden units is well defined. As with RBM, approximate maximum likelihood training can be performed, resulting in an algorithm that naturally and adaptively adds trained hidden units during learning. We empirically study the behaviour of this infinite RBM, showing that its performance is competitive to that of the RBM, while not requiring the tuning of a hidden layer size. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 40,047 |
2310.05590 | Perceptual Artifacts Localization for Image Synthesis Tasks | Recent advancements in deep generative models have facilitated the creation of photo-realistic images across various tasks. However, these generated images often exhibit perceptual artifacts in specific regions, necessitating manual correction. In this study, we present a comprehensive empirical examination of Perceptual Artifacts Localization (PAL) spanning diverse image synthesis endeavors. We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels across ten synthesis tasks. A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks. Additionally, we illustrate its proficiency in adapting to previously unseen models using minimal training samples. We further propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images. Through our experimental analyses, we elucidate several practical downstream applications, such as automated artifact rectification, non-referential image quality evaluation, and abnormal region detection in images. The dataset and code are released. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 398,214 |
1407.1443 | Analysis of Yelp Reviews | In the era of Big Data and Social Computing, the role of customer reviews and ratings can be instrumental in predicting the success and sustainability of businesses. In this paper, we show that, despite the apparent subjectivity of user ratings, there are also external, or objective factors which help to determine the outcome of a business's reviews. The current model for social business review sites, such as Yelp, allows data (reviews, ratings) to be compiled concurrently, which introduces a bias to participants (Yelp Users). Our work examines Yelp Reviews for businesses in and around college towns. We demonstrate that an Observer Effect causes data to behave cyclically: rising and falling as momentum (quantified in user ratings) shifts for businesses. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 34,430 |
2104.01083 | What Taggers Fail to Learn, Parsers Need the Most | We present an error analysis of neural UPOS taggers to evaluate why using gold standard tags has such a large positive contribution to parsing performance while using predicted UPOS tags either harms performance or offers a negligible improvement. We evaluate what neural dependency parsers implicitly learn about word types and how this relates to the errors taggers make to explain the minimal impact using predicted tags has on parsers. We also present a short analysis on what contexts result in reductions in tagging performance. We then mask UPOS tags based on errors made by taggers to tease away the contribution of UPOS tags which taggers succeed and fail to classify correctly and the impact of tagging errors. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 228,225 |
2502.13146 | Re-Align: Aligning Vision Language Models via Retrieval-Augmented Direct
Preference Optimization | The emergence of large Vision Language Models (VLMs) has broadened the scope and capabilities of single-modal Large Language Models (LLMs) by integrating visual modalities, thereby unlocking transformative cross-modal applications in a variety of real-world scenarios. Despite their impressive performance, VLMs are prone to significant hallucinations, particularly in the form of cross-modal inconsistencies. Building on the success of Reinforcement Learning from Human Feedback (RLHF) in aligning LLMs, recent advancements have focused on applying direct preference optimization (DPO) on carefully curated datasets to mitigate these issues. Yet, such approaches typically introduce preference signals in a brute-force manner, neglecting the crucial role of visual information in the alignment process. In this paper, we introduce Re-Align, a novel alignment framework that leverages image retrieval to construct a dual-preference dataset, effectively incorporating both textual and visual preference signals. We further introduce rDPO, an extension of the standard direct preference optimization that incorporates an additional visual preference objective during fine-tuning. Our experimental results demonstrate that Re-Align not only mitigates hallucinations more effectively than previous methods but also yields significant performance gains in general visual question-answering (VQA) tasks. Moreover, we show that Re-Align maintains robustness and scalability across a wide range of VLM sizes and architectures. This work represents a significant step forward in aligning multimodal LLMs, paving the way for more reliable and effective cross-modal applications. We release all the code in https://github.com/taco-group/Re-Align. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 535,231 |
2408.00940 | A dual-task mutual learning framework for predicting post-thrombectomy
cerebral hemorrhage | Ischemic stroke is a severe condition caused by the blockage of brain blood vessels, and can lead to the death of brain tissue due to oxygen deprivation. Thrombectomy has become a common treatment choice for ischemic stroke due to its immediate effectiveness. However, it carries the risk of postoperative cerebral hemorrhage. Clinically, multiple CT scans within 0-72 hours post-surgery are used to monitor for hemorrhage. However, this approach exposes patients to radiation, and may delay the detection of cerebral hemorrhage. To address this dilemma, we propose a novel prediction framework for measuring postoperative cerebral hemorrhage using only the patient's initial CT scan. Specifically, we introduce a dual-task mutual learning framework that takes the initial CT scan as input and simultaneously estimates both the follow-up CT scan and prognostic label to predict the occurrence of postoperative cerebral hemorrhage. Our proposed framework incorporates two attention mechanisms, i.e., self-attention and interactive attention. Specifically, the self-attention mechanism allows the model to focus more on high-density areas in the image, which are critical for diagnosis (i.e., potential hemorrhage areas). The interactive attention mechanism further models the dependencies between the interrelated generation and classification tasks, enabling both tasks to perform better than the case when conducted individually. Validated on clinical data, our method can generate follow-up CT scans better than state-of-the-art methods, and achieves an accuracy of 86.37% in predicting follow-up prognostic labels. Our work thus contributes to the timely screening of post-thrombectomy cerebral hemorrhage, and could significantly reform the clinical process of thrombectomy and other similar operations related to stroke.
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 478,036 |
1812.02937 | Optimizing speed/accuracy trade-off for person re-identification via
knowledge distillation | Finding a person across a camera network plays an important role in video surveillance. For a real-world person re-identification application, in order to guarantee an optimal time response, it is crucial to find the balance between accuracy and speed. We analyse this trade-off, comparing a classical method, that comprises hand-crafted feature description and metric learning, in particular, LOMO and XQDA, to deep learning based techniques, using image classification networks, ResNet and MobileNets. Additionally, we propose and analyse network distillation as a learning strategy to reduce the computational cost of the deep learning approach at test time. We evaluate both methods on the Market-1501 and DukeMTMC-reID large-scale datasets, showing that distillation helps reduce the computational cost at inference time while even increasing the accuracy performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 115,896 |
1606.07463 | PPM: A Privacy Prediction Model for Online Social Networks | Online Social Networks (OSNs) have come to play an increasingly important role in our social lives, and their inherent privacy problems have become a major concern for users. Can we assist consumers in their privacy decision-making practices, for example by predicting their preferences and giving them personalized advice? To this end, we introduce PPM: a Privacy Prediction Model, rooted in psychological principles, which can be used to give users personalized advice regarding their privacy decision-making practices. Using this model, we study psychological variables that are known to affect users' disclosure behavior: the trustworthiness of the requester/information audience, the sharing tendency of the receiver/information holder, the sensitivity of the requested/shared information, the appropriateness of the request/sharing activities, as well as several more traditional contextual factors. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 57,719 |
2112.07435 | Multi-Leader Congestion Games with an Adversary | We study a multi-leader single-follower congestion game where multiple users (leaders) choose one resource out of a set of resources and, after observing the realized loads, an adversary (single-follower) attacks the resources with maximum loads, causing additional costs for the leaders. For the resulting strategic game among the leaders, we show that pure Nash equilibria may fail to exist and therefore, we consider approximate equilibria instead. As our first main result, we show that the existence of a $K$-approximate equilibrium can always be guaranteed, where $K \approx 1.1974$ is the unique solution of a cubic polynomial equation. To this end, we give a polynomial time combinatorial algorithm which computes a $K$-approximate equilibrium. The factor $K$ is tight, meaning that there is an instance that does not admit an $\alpha$-approximate equilibrium for any $\alpha<K$. Thus $\alpha=K$ is the smallest possible value of $\alpha$ such that the existence of an $\alpha$-approximate equilibrium can be guaranteed for any instance of the considered game. Secondly, we focus on approximate equilibria of a given fixed instance. We show how to compute efficiently a best approximate equilibrium, that is, with smallest possible $\alpha$ among all $\alpha$-approximate equilibria of the given instance. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 271,483 |
2411.07845 | Ethical Concern Identification in NLP: A Corpus of ACL Anthology Ethics
Statements | What ethical concerns, if any, do LLM researchers have? We introduce EthiCon, a corpus of 1,580 ethical concern statements extracted from scientific papers published in the ACL Anthology. We extract ethical concern keywords from the statements and show promising results in automating the concern identification process. Through a survey, we compare the ethical concerns of the corpus to the concerns listed by the general public and professionals in the field. Finally, we compare our retrieved ethical concerns with existing taxonomies pointing to gaps and future research directions. | false | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | 507,698 |
2310.08836 | A Framework for Few-Shot Policy Transfer through Observation Mapping and
Behavior Cloning | Despite recent progress in Reinforcement Learning for robotics applications, many tasks remain prohibitively difficult to solve because of the expensive interaction cost. Transfer learning helps reduce the training time in the target domain by transferring knowledge learned in a source domain. Sim2Real transfer helps transfer knowledge from a simulated robotic domain to a physical target domain. Knowledge transfer reduces the time required to train a task in the physical world, where the cost of interactions is high. However, most existing approaches assume exact correspondence in the task structure and the physical properties of the two domains. This work proposes a framework for Few-Shot Policy Transfer between two domains through Observation Mapping and Behavior Cloning. We use Generative Adversarial Networks (GANs) along with a cycle-consistency loss to map the observations between the source and target domains and later use this learned mapping to clone the successful source task behavior policy to the target domain. We observe successful behavior policy transfer with limited target task interactions and in cases where the source and target task are semantically dissimilar. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 399,556 |
2108.03477 | A Tool for Organizing Key Characteristics of Virtual, Augmented, and
Mixed Reality for Human-Robot Interaction Systems: Synthesizing VAM-HRI
Trends and Takeaways | Frameworks have begun to emerge to categorize Virtual, Augmented, and Mixed Reality (VAM) technologies that provide immersive, intuitive interfaces to facilitate Human-Robot Interaction. These frameworks, however, fail to capture key characteristics of the growing subfield of VAM-HRI and can be difficult to consistently apply due to continuous scales. This work builds upon these prior frameworks through the creation of a Tool for Organizing Key Characteristics of VAM-HRI Systems (TOKCS). TOKCS discretizes the continuous scales used within prior works for more consistent classification and adds additional characteristics related to a robot's internal model, anchor locations, manipulability, and the system's software and hardware. To showcase the tool's capability, TOKCS is applied to the ten papers from the fourth VAM-HRI workshop and examined for key trends and takeaways. These trends highlight the expressive capability of TOKCS while also helping frame newer trends and future work recommendations for VAM-HRI research. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 249,672 |
2302.02801 | LaMPP: Language Models as Probabilistic Priors for Perception and Action | Language models trained on large text corpora encode rich distributional information about real-world environments and action sequences. This information plays a crucial role in current approaches to language processing tasks like question answering and instruction generation. We describe how to leverage language models for *non-linguistic* perception and control tasks. Our approach casts labeling and decision-making as inference in probabilistic graphical models in which language models parameterize prior distributions over labels, decisions and parameters, making it possible to integrate uncertain observations and incomplete background knowledge in a principled way. Applied to semantic segmentation, household navigation, and activity recognition tasks, this approach improves predictions on rare, out-of-distribution, and structurally novel inputs. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 344,116 |
2102.03322 | CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks | Given the increasing promise of graph neural networks (GNNs) in real-world applications, several methods have been developed for explaining their predictions. Existing methods for interpreting predictions from GNNs have primarily focused on generating subgraphs that are especially relevant for a particular prediction. However, such methods are not counterfactual (CF) in nature: given a prediction, we want to understand how the prediction can be changed in order to achieve an alternative outcome. In this work, we propose a method for generating CF explanations for GNNs: the minimal perturbation to the input (graph) data such that the prediction changes. Using only edge deletions, we find that our method, CF-GNNExplainer, can generate CF explanations for the majority of instances across three widely used datasets for GNN explanations, while removing less than 3 edges on average, with at least 94\% accuracy. This indicates that CF-GNNExplainer primarily removes edges that are crucial for the original predictions, resulting in minimal CF explanations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 218,706 |
2404.02263 | OFMPNet: Deep End-to-End Model for Occupancy and Flow Prediction in
Urban Environment | The task of motion prediction is pivotal for autonomous driving systems, providing crucial data to choose a vehicle behavior strategy within its surroundings. Existing motion prediction techniques primarily focus on predicting the future trajectory of each agent in the scene individually, utilizing its past trajectory data. In this paper, we introduce an end-to-end neural network methodology designed to predict the future behaviors of all dynamic objects in the environment. This approach leverages the occupancy map and the scene's motion flow. We investigate various alternatives for constructing a deep encoder-decoder model called OFMPNet. This model uses a sequence of bird's-eye-view road images, occupancy grid, and prior motion flow as input data. The encoder of the model can incorporate transformer, attention-based, or convolutional units. The decoder considers the use of both convolutional modules and recurrent blocks. Additionally, we propose a novel time-weighted motion flow loss, whose application has shown a substantial decrease in end-point error. Our approach has achieved state-of-the-art results on the Waymo Occupancy and Flow Prediction benchmark, with a Soft IoU of 52.1% and an AUC of 76.75% on Flow-Grounded Occupancy. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 443,785 |
2410.15674 | TALoS: Enhancing Semantic Scene Completion via Test-time Adaptation on
the Line of Sight | Semantic Scene Completion (SSC) aims to perform geometric completion and semantic segmentation simultaneously. Despite the promising results achieved by existing studies, the inherently ill-posed nature of the task presents significant challenges in diverse driving scenarios. This paper introduces TALoS, a novel test-time adaptation approach for SSC that excavates the information available in driving environments. Specifically, we focus on that observations made at a certain moment can serve as Ground Truth (GT) for scene completion at another moment. Given the characteristics of the LiDAR sensor, an observation of an object at a certain location confirms both 1) the occupation of that location and 2) the absence of obstacles along the line of sight from the LiDAR to that point. TALoS utilizes these observations to obtain self-supervision about occupancy and emptiness, guiding the model to adapt to the scene in test time. In a similar manner, we aggregate reliable SSC predictions among multiple moments and leverage them as semantic pseudo-GT for adaptation. Further, to leverage future observations that are not accessible at the current time, we present a dual optimization scheme using the model in which the update is delayed until the future observation is available. Evaluations on the SemanticKITTI validation and test sets demonstrate that TALoS significantly improves the performance of the pre-trained SSC model. Our code is available at https://github.com/blue-531/TALoS. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 500,672 |
2401.16024 | Probabilistic Abduction for Visual Abstract Reasoning via Learning Rules
in Vector-symbolic Architectures | Abstract reasoning is a cornerstone of human intelligence, and replicating it with artificial intelligence (AI) presents an ongoing challenge. This study focuses on efficiently solving Raven's progressive matrices (RPM), a visual test for assessing abstract reasoning abilities, by using distributed computation and operators provided by vector-symbolic architectures (VSA). Instead of hard-coding the rule formulations associated with RPMs, our approach can learn the VSA rule formulations (hence the name Learn-VRF) with just one pass through the training data. Yet, our approach, with compact parameters, remains transparent and interpretable. Learn-VRF yields accurate predictions on I-RAVEN's in-distribution data, and exhibits strong out-of-distribution capabilities concerning unseen attribute-rule pairs, significantly outperforming pure connectionist baselines including large language models. Our code is available at https://github.com/IBM/learn-vector-symbolic-architectures-rule-formulations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,684 |
1504.06654 | Efficient Non-parametric Estimation of Multiple Embeddings per Word in
Vector Space | There is rising interest in vector-space word embeddings and their use in NLP, especially given recent methods for their fast estimation at very large scale. Nearly all this work, however, assumes a single vector per word type ignoring polysemy and thus jeopardizing their usefulness for downstream tasks. We present an extension to the Skip-gram model that efficiently learns multiple embeddings per word type. It differs from recent related work by jointly performing word sense discrimination and embedding learning, by non-parametrically estimating the number of senses per word type, and by its efficiency and scalability. We present new state-of-the-art results in the word similarity in context task and demonstrate its scalability by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 42,426 |
1404.5927 | Secure MIMO Communications under Quantized Channel Feedback in the
presence of Jamming | We consider the problem of secure communications in a MIMO setting in the presence of an adversarial jammer equipped with $n_j$ transmit antennas and an eavesdropper equipped with $n_e$ receive antennas. A multiantenna transmitter, equipped with $n_t$ antennas, desires to secretly communicate a message to a multiantenna receiver equipped with $n_r$ antennas. We propose a transmission method based on artificial noise and linear precoding and a two-stage receiver method employing beamforming. Under this strategy, we first characterize the achievable secrecy rates of communication and prove that the achievable secure degrees-of-freedom (SDoF) is given by $d_s = n_r - n_j$ in the perfect channel state information (CSI) case. Second, we consider quantized CSI feedback using Grassmannian quantization of a function of the direct channel matrix and derive sufficient conditions for the quantization bit rate scaling as a function of transmit power for maintaining the achievable SDoF $d_s$ with perfect CSI and for having asymptotically zero secrecy rate loss due to quantization. Numerical simulations are also provided to support the theory. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 32,544 |
2101.00699 | The structure of conservative gradient fields | The classical Clarke subdifferential alone is inadequate for understanding automatic differentiation in nonsmooth contexts. Instead, we can sometimes rely on enlarged generalized gradients called "conservative fields", defined through the natural path-wise chain rule: one application is the convergence analysis of gradient-based deep learning algorithms. In the semi-algebraic case, we show that all conservative fields are in fact just Clarke subdifferentials plus normals of manifolds in underlying Whitney stratifications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 214,171 |
1911.11063 | Voice Search and Typed Search Performance Comparison on Baidu Search
System | Although the voice search system is getting more and more developed, some people still have difficulties when searching for information with the voice search system. This paper is a pilot study to compare the search performance of people using voice search and typed search using Baidu search system. We surveyed and interviewed 40 Chinese students who have been using the Baidu search system. Afterward, we analyzed 8 people who had a middle to advanced searching ability by their behaviors, search results, and average query length. We found that there are a lot of variations among the participants' time when searching for different queries, and there were some interesting behaviors that were displayed by a number of participants. We conclude that more participants are needed to make a firm conclusion on the performance comparison between the voice search and typed search. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 155,009 |
cs/0603090 | Topological Grammars for Data Approximation | A method of {\it topological grammars} is proposed for multidimensional data approximation. For data with complex topology we define a {\it principal cubic complex} of low dimension and given complexity that gives the best approximation for the dataset. This complex is a generalization of linear and non-linear principal manifolds and includes them as particular cases. The problem of optimal principal complex construction is transformed into a series of minimization problems for quadratic functionals. These quadratic functionals have a physically transparent interpretation in terms of elastic energy. For the energy computation, the whole complex is represented as a system of nodes and springs. Topologically, the principal complex is a product of one-dimensional continuums (represented by graphs), and the grammars describe how these continuums transform during the process of optimal complex construction. This factorization of the whole process onto one-dimensional transformations using minimization of quadratic energy functionals allow us to construct efficient algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 539,345 |
1903.07799 | Random Pairwise Shapelets Forest | Shapelet is a discriminative subsequence of time series. An advanced shapelet-based method is to embed shapelets into an accurate and fast random forest. However, it shows several limitations. First, random shapelet forest requires a large training cost for split threshold searching. Second, a single shapelet provides limited information for only one branch of the decision tree, resulting in insufficient accuracy and interpretability. Third, randomized ensembling causes interpretability to decline. For that, this paper presents Random Pairwise Shapelets Forest (RPSF). RPSF combines a pair of shapelets from different classes to construct a random forest. It omits threshold searching to be more efficient, and includes more information at each node of the forest to be more effective. Moreover, a discriminability metric, Decomposed Mean Decrease Impurity (DMDI), is proposed to identify the influential region for every class. Extensive experiments show RPSF improves the accuracy and training speed of shapelet-based forests. Case studies demonstrate the interpretability of our method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,700 |
2306.11104 | Markovian Embeddings for Coalitional Bargaining Games | We examine the Markovian properties of coalition bargaining games, in particular, the case where past rejected proposals cannot be repeated. We propose a Markovian embedding with filtrations to render the states Markovian and thus fit into the framework of stochastic games. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | true | 374,473 |
2010.14391 | Succinct and Robust Multi-Agent Communication With Temporal Message Control | Recent studies have shown that introducing communication between agents can significantly improve overall performance in cooperative Multi-agent reinforcement learning (MARL). However, existing communication schemes often require agents to exchange an excessive number of messages at run-time under a reliable communication channel, which hinders its practicality in many real-world situations. In this paper, we present \textit{Temporal Message Control} (TMC), a simple yet effective approach for achieving succinct and robust communication in MARL. TMC applies a temporal smoothing technique to drastically reduce the amount of information exchanged between agents. Experiments show that TMC can significantly reduce inter-agent communication overhead without impacting accuracy. Furthermore, TMC demonstrates much better robustness against transmission loss than existing approaches in lossy networking environments. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 203,429 |
2410.09399 | Text Classification using Graph Convolutional Networks: A Comprehensive Survey | Text classification is a quintessential and practical problem in natural language processing with applications in diverse domains such as sentiment analysis, fake news detection, medical diagnosis, and document classification. A sizable body of recent works exists where researchers have studied and tackled text classification from different angles with varying degrees of success. Graph convolution network (GCN)-based approaches have gained a lot of traction in this domain over the last decade with many implementations achieving state-of-the-art performance in more recent literature and thus, warranting the need for an updated survey. This work aims to summarize and categorize various GCN-based Text Classification approaches with regard to the architecture and mode of supervision. It identifies their strengths and limitations and compares their performance on various benchmark datasets. We also discuss future research directions and the challenges that exist in this domain. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 497,575 |
2305.12359 | An Optimal Two-Step Decoding at Receivers with Side Information in PSK-Modulated Index Coding | This paper studies noisy index coding problems over single-input single-output broadcast channels. The codewords from a chosen index code of length $N$ are transmitted after $2^N$-PSK modulation over an AWGN channel. In "Index Coded PSK Modulation for prioritized Receivers," the authors showed that when a length-$N$ index code is transmitted as a $2^N$-PSK symbol, the ML decoder at a receiver decodes directly to the message bit rather than following the two-step decoding process of first demodulating the PSK symbol and equivalently the index-coded bits and then doing index-decoding. In this paper, we consider unprioritized receivers and follow the two-step decoding process at the receivers. After estimating the PSK symbol using an ML decoder, at a receiver, there might be more than one decoding strategy, i.e., a linear combination of index-coded bits and different subsets of side information bits, that can be used to estimate the requested message. Thomas et al. in ["Single Uniprior Index Coding With Min Max Probability of Error Over Fading Channels,"] showed that for binary-modulated index code transmissions, minimizing the number of transmissions used to decode a requested message is equivalent to minimizing the probability of error. This paper shows that this is no longer the case while employing multi-level modulations. Further, we consider that the side information available to each receiver is also noisy and derive an expression for the probability that a requested message bit is estimated erroneously at a receiver. We also show that the criterion for choosing a decoding strategy that gives the best probability of error performance at a receiver changes with the signal-to-noise ratio at which the side information is broadcast. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 365,962 |
2306.08056 | Distributed Trust Through the Lens of Software Architecture | Distributed trust is a nebulous concept that has evolved from different perspectives in recent years. While one can attribute its current prominence to blockchain and cryptocurrency, the distributed trust concept has been cultivating progress in federated learning, trustworthy and responsible AI in an ecosystem setting, data sharing, privacy issues across organizational boundaries, and zero trust cybersecurity. This paper will survey the concept of distributed trust in multiple disciplines. It will take a system/software architecture point of view to look at trust redistribution/shift and the associated tradeoffs in systems and applications enabled by distributed trust technologies. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 373,267 |
2108.04822 | Self-supervised Consensus Representation Learning for Attributed Graph | Attempting to fully exploit the rich information of topological structure and node features for attributed graph, we introduce self-supervised learning mechanism to graph representation learning and propose a novel Self-supervised Consensus Representation Learning (SCRL) framework. In contrast to most existing works that only explore one graph, our proposed SCRL method treats graph from two perspectives: topology graph and feature graph. We argue that their embeddings should share some common information, which could serve as a supervisory signal. Specifically, we construct the feature graph of node features via k-nearest neighbor algorithm. Then graph convolutional network (GCN) encoders extract features from two graphs respectively. Self-supervised loss is designed to maximize the agreement of the embeddings of the same node in the topology graph and the feature graph. Extensive experiments on real citation networks and social networks demonstrate the superiority of our proposed SCRL over the state-of-the-art methods on semi-supervised node classification task. Meanwhile, compared with its main competitors, SCRL is rather efficient. | false | false | false | true | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 250,125 |