id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1912.11371 | Comparison of the P300 detection accuracy related to the BCI speller and image recognition scenarios | There are several protocols in Electroencephalography (EEG) recording scenarios which produce various types of event-related potentials (ERP). The P300 pattern is a well-known ERP produced by the auditory and visual oddball paradigms and the BCI speller system. In this study, P300 and non-P300 separability is investigated in two scenarios: the image recognition paradigm and the BCI speller. The image recognition scenario is an experiment that assesses a participant's knowledge of an image shown to them beforehand, by analyzing the EEG signal recorded while that image is observed as a visual stimulus. To do this, three well-known classifiers (SVM, Bayes LDA, and sparse logistic regression) were used to classify EEG recordings in a six-class problem. Filtered and down-sampled EEG recordings (temporal samples) were used as features for classifying the P300 pattern. Different sets of EEG channels (4, 8, and 16) and different numbers of trials were also used to cover various situations in the comparison. Accuracy increased with the number of trials and channels. The results show that better accuracy is obtained in the image recognition scenario across the different channel sets and trial counts. It can therefore be concluded that the P300 pattern produced in the image recognition paradigm is more separable than that of the BCI matrix speller. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 158,554 |
0909.2234 | Universal and Composite Hypothesis Testing via Mismatched Divergence | For the universal hypothesis testing problem, where the goal is to decide between the known null hypothesis distribution and some other unknown distribution, Hoeffding proposed a universal test in the nineteen sixties. Hoeffding's universal test statistic can be written in terms of Kullback-Leibler (K-L) divergence between the empirical distribution of the observations and the null hypothesis distribution. In this paper a modification of Hoeffding's test is considered based on a relaxation of the K-L divergence test statistic, referred to as the mismatched divergence. The resulting mismatched test is shown to be a generalized likelihood-ratio test (GLRT) for the case where the alternate distribution lies in a parametric family of the distributions characterized by a finite dimensional parameter, i.e., it is a solution to the corresponding composite hypothesis testing problem. For certain choices of the alternate distribution, it is shown that both the Hoeffding test and the mismatched test have the same asymptotic performance in terms of error exponents. A consequence of this result is that the GLRT is optimal in differentiating a particular distribution from others in an exponential family. It is also shown that the mismatched test has a significant advantage over the Hoeffding test in terms of finite sample size performance. This advantage is due to the difference in the asymptotic variances of the two test statistics under the null hypothesis. In particular, the variance of the K-L divergence grows linearly with the alphabet size, making the test impractical for applications involving large alphabet distributions. The variance of the mismatched divergence on the other hand grows linearly with the dimension of the parameter space, and can hence be controlled through a prudent choice of the function class defining the mismatched divergence. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 4,476 |
2311.01933 | ForecastPFN: Synthetically-Trained Zero-Shot Forecasting | The vast majority of time-series forecasting approaches require a substantial training dataset. However, many real-life forecasting applications have very little initial observations, sometimes just 40 or fewer. Thus, the applicability of most forecasting methods is restricted in data-sparse commercial applications. While there is recent work in the setting of very limited initial data (so-called `zero-shot' forecasting), its performance is inconsistent depending on the data used for pretraining. In this work, we take a different approach and devise ForecastPFN, the first zero-shot forecasting model trained purely on a novel synthetic data distribution. ForecastPFN is a prior-data fitted network, trained to approximate Bayesian inference, which can make predictions on a new time series dataset in a single forward pass. Through extensive experiments, we show that zero-shot predictions made by ForecastPFN are more accurate and faster compared to state-of-the-art forecasting methods, even when the other methods are allowed to train on hundreds of additional in-distribution data points. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 405,234 |
2002.11477 | Learning a Directional Soft Lane Affordance Model for Road Scenes Using Self-Supervision | Humans navigate complex environments in an organized yet flexible manner, adapting to the context and implicit social rules. Understanding these naturally learned patterns of behavior is essential for applications such as autonomous vehicles. However, algorithmically defining these implicit rules of human behavior remains difficult. This work proposes a novel self-supervised method for training a probabilistic network model to estimate the regions humans are most likely to drive in as well as a multimodal representation of the inferred direction of travel at each point. The model is trained on individual human trajectories conditioned on a representation of the driving environment. The model is shown to successfully generalize to new road scenes, demonstrating potential for real-world application as a prior for socially acceptable driving behavior in challenging or ambiguous scenarios which are poorly handled by explicit traffic rules. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 165,720 |
1701.01742 | Automated Design of CubeSats and Small Spacecrafts | The miniaturization of electronics, sensors and actuators has enabled the growing use of CubeSats and sub-20 kg spacecraft. Their reduced mass and volume have the potential to translate into significant reductions in required propellant and launch mass for interplanetary missions, earth observation and astrophysics applications. There is an important need to optimize the design of these spacecraft to better ascertain their maximal capabilities by finding optimized solutions where mass, volume and power are at a premium. Current spacecraft design methods require a team of experts, who use their engineering experience and judgement to develop a spacecraft design. Such an approach can miss innovative designs not thought of by a human design team. In this work we present a compelling alternative approach that extends the capabilities of a spacecraft engineering design team to search for and identify near-optimal solutions using machine learning. The approach enables automated design of a spacecraft by specifying quantitative goals, such as reaching a target location or operating in a predetermined orbit for a required time. Next, a virtual warehouse of components is specified from which parts can be selected to produce a candidate design. Candidate designs are produced using an artificial Darwinian approach, where the fittest designs survive and reproduce, while unfit individuals are culled. Our past work in space robotics has produced system designs and controllers that are human-competitive. Finding a near-optimal solution presents vast improvements over a solution obtained through engineering judgment and point design alone. The approach shows a credible pathway to identify and evaluate many more candidate designs than would otherwise be possible with a human design team alone. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 66,445 |
1609.06283 | Robotic Swarm Control from Spatio-Temporal Specifications | In this paper, we study the problem of controlling a two-dimensional robotic swarm with the purpose of achieving high level and complex spatio-temporal patterns. We use a rich spatio-temporal logic that is capable of describing a wide range of time varying and complex spatial configurations, and develop a method to encode such formal specifications as a set of mixed integer linear constraints, which are incorporated into a mixed integer linear programming problem. We plan trajectories for each individual robot such that the whole swarm satisfies the spatio-temporal requirements, while optimizing total robot movement and/or a metric that shows how strongly the swarm trajectory resembles given spatio-temporal behaviors. An illustrative case study is included. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 61,265 |
1902.10365 | A Distributionally Robust Optimization Method for Adversarial Multiple Kernel Learning | We propose a novel data-driven method to learn a mixture of multiple kernels with random features that is certifiably robust against adversarial inputs. Specifically, we consider a distributionally robust optimization of the kernel-target alignment with respect to the distribution of training samples over a distributional ball defined by the Kullback-Leibler (KL) divergence. The distributionally robust optimization problem can be recast as a min-max optimization whose objective function includes a log-sum term. We develop a mini-batch biased stochastic primal-dual proximal method to solve the min-max optimization. To debias the minibatch algorithm, we use the Gumbel perturbation technique to estimate the log-sum term. We establish theoretical guarantees for the performance of the proposed multiple kernel learning method. In particular, we prove the consistency, asymptotic normality, stochastic equicontinuity, and the minimax rate of the empirical estimators. In addition, based on the notion of Rademacher and Gaussian complexities, we establish distributionally robust generalization bounds that are tighter than previously known bounds. More specifically, we leverage matrix concentration inequalities to establish distributionally robust generalization bounds. We validate our kernel learning approach for classification with kernel SVMs on a synthetic dataset generated by sampling multivariate Gaussian distributions with different variance structures. We also apply our kernel learning approach to the MNIST dataset and evaluate its robustness to perturbation of input images under different adversarial models. More specifically, we examine the robustness of the proposed kernel model selection technique against FGSM, PGM, C\&W, and DDN adversarial perturbations, and compare its performance with alternative state-of-the-art multiple kernel learning paradigms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,659 |
0901.2665 | A Density Matrix-based Algorithm for Solving Eigenvalue Problems | A new numerical algorithm for solving the symmetric eigenvalue problem is presented. The technique deviates fundamentally from the traditional Krylov subspace iteration based techniques (Arnoldi and Lanczos algorithms) or other Davidson-Jacobi techniques, and takes its inspiration from the contour integration and density matrix representation in quantum mechanics. It will be shown that this new algorithm - named FEAST - exhibits high efficiency, robustness, accuracy and scalability on parallel architectures. Examples from electronic structure calculations of Carbon nanotubes (CNT) are presented, and numerical performances and capabilities are discussed. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 2,998 |
2106.05907 | DAIR: Disentangled Attention Intrinsic Regularization for Safe and Efficient Bimanual Manipulation | We address the problem of safely solving complex bimanual robot manipulation tasks with sparse rewards. Such challenging tasks can be decomposed into sub-tasks that are accomplishable by different robots concurrently or sequentially for better efficiency. While previous reinforcement learning approaches primarily focus on modeling the compositionality of sub-tasks, two fundamental issues are largely ignored particularly when learning cooperative strategies for two robots: (i) domination, i.e., one robot may try to solve a task by itself and leaves the other idle; (ii) conflict, i.e., one robot can interrupt another's workspace when executing different sub-tasks simultaneously, which leads to unsafe collisions. To tackle these two issues, we propose a novel technique called disentangled attention, which provides an intrinsic regularization for two robots to focus on separate sub-tasks and objects. We evaluate our method on five bimanual manipulation tasks. Experimental results show that our proposed intrinsic regularization successfully avoids domination and reduces conflicts for the policies, which leads to significantly more efficient and safer cooperative strategies than all the baselines. Our project page with videos is at https://mehooz.github.io/bimanual-attention. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 240,275 |
2311.12289 | ATLANTIC: Structure-Aware Retrieval-Augmented Language Model for Interdisciplinary Science | Large language models record impressive performance on many natural language processing tasks. However, their knowledge capacity is limited to the pretraining corpus. Retrieval augmentation offers an effective solution by retrieving context from external knowledge sources to complement the language model. However, existing retrieval augmentation techniques ignore the structural relationships between these documents. Furthermore, retrieval models are not explored much in scientific tasks, especially in regard to the faithfulness of retrieved documents. In this paper, we propose a novel structure-aware retrieval augmented language model that accommodates document structure during retrieval augmentation. We create a heterogeneous document graph capturing multiple types of relationships (e.g., citation, co-authorship, etc.) that connect documents from more than 15 scientific disciplines (e.g., Physics, Medicine, Chemistry, etc.). We train a graph neural network on the curated document graph to act as a structural encoder for the corresponding passages retrieved during the model pretraining. Particularly, along with text embeddings of the retrieved passages, we obtain structural embeddings of the documents (passages) and fuse them together before feeding them to the language model. We evaluate our model extensively on various scientific benchmarks that include science question-answering and scientific document classification tasks. Experimental results demonstrate that structure-aware retrieval helps retrieve more coherent, faithful and contextually relevant passages, while showing comparable performance in overall accuracy. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 409,274 |
2302.08617 | Quantum Computing Provides Exponential Regret Improvement in Episodic Reinforcement Learning | In this paper, we investigate the problem of \textit{episodic reinforcement learning} with quantum oracles for state evolution. To this end, we propose an \textit{Upper Confidence Bound} (UCB) based quantum algorithmic framework to facilitate learning of a finite-horizon MDP. Our quantum algorithm achieves an exponential improvement in regret as compared to the classical counterparts, achieving a regret of $\Tilde{\mathcal{O}}(1)$ as compared to $\Tilde{\mathcal{O}}(\sqrt{K})$ \footnote{$\Tilde{\mathcal{O}}(\cdot)$ hides logarithmic terms.}, $K$ being the number of training episodes. In order to achieve this advantage, we exploit an efficient quantum mean estimation technique that provides a quadratic improvement in the number of i.i.d. samples needed to estimate the mean of sub-Gaussian random variables as compared to classical mean estimation. This improvement is key to the significant regret improvement in quantum reinforcement learning. We provide proof-of-concept experiments on various RL environments that in turn demonstrate performance gains of the proposed algorithmic framework. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 346,106 |
2401.08658 | End-To-End Planning of Autonomous Driving in Industry and Academia: 2022-2023 | This paper aims to provide a quick review of the end-to-end planning methods and underlying technologies currently reported in industry and academia. Specifically, it reviews end-to-end planning efforts including Tesla FSD V12, Momenta 2023, Horizon Robotics 2023, Motional RoboTaxi 2022, Woven Planet (Toyota): Urban Driver, and Nvidia. In addition, we review state-of-the-art academic studies that investigate end-to-end planning for autonomous driving. This paper provides readers with a concise structure for quickly learning the state of the art in end-to-end planning for 2022-2023. The article serves as introductory material for beginners following state-of-the-art end-to-end planning of autonomous driving in industry and academia, as well as supplementary material for advanced researchers. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 421,976 |
2205.09296 | Opinion Manipulation on Farsi Twitter | For Iranians and the Iranian diaspora, the Farsi Twittersphere provides an important alternative to state media and an outlet for political discourse. But this understudied online space has become an opinion manipulation battleground, with diverse actors using inauthentic accounts to advance their goals and shape online narratives. Examining trending discussions crossing social cleavages in Iran, we explore how the dynamics of opinion manipulation differ across diverse issue areas. Our analysis suggests that opinion manipulation by inauthentic accounts is more prevalent in divisive political discussions than non-divisive or apolitical discussions. We show how Twitter's network structures help to reinforce the content propagated by clusters of inauthentic accounts in divisive political discussions. Analyzing both the content and structure of online discussions in the Iranian Twittersphere, this work contributes to a growing body of literature exploring the dynamics of online opinion manipulation, while improving our understanding of how information is controlled in the digital age. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 297,214 |
1110.2906 | Multiple dynamical time-scales in networks with hierarchically nested modular organization | Many natural and engineered complex networks have intricate mesoscopic organization, e.g., the clustering of the constituent nodes into several communities or modules. Often, such modularity is manifested at several different hierarchical levels, where the clusters defined at one level appear as elementary entities at the next higher level. Using a simple model of a hierarchical modular network, we show that such a topological structure gives rise to characteristic time-scale separation between dynamics occurring at different levels of the hierarchy. This generalizes our earlier result for simple modular networks, where fast intra-modular and slow inter-modular processes were clearly distinguished. Investigating the process of synchronization of oscillators in a hierarchical modular network, we show the existence of as many distinct time-scales as there are hierarchical levels in the system. This suggests a possible functional role of such mesoscopic organization principle in natural systems, viz., in the dynamical separation of events occurring at different spatial scales. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 12,636 |
2410.19940 | Cobblestone: Iterative Automation for Formal Verification | Formal verification using proof assistants, such as Coq, is an effective way of improving software quality, but it is expensive. Writing proofs manually requires both significant effort and expertise. Recent research has used machine learning to automatically synthesize proofs, reducing verification effort, but these tools are able to prove only a fraction of the desired software properties. We introduce Cobblestone, a new proof-synthesis approach that improves on the state of the art by taking advantage of partial progress in proof synthesis attempts. Unlike prior tools, Cobblestone can produce multiple unsuccessful proofs using a large language model (LLM), identify the working portions of those proofs, and combine them into a single, successful proof, taking advantage of internal partial progress. We evaluate Cobblestone on two benchmarks of open-source Coq projects, controlling for training data leakage in LLM datasets. Fully automatically, Cobblestone can prove 48% of the theorems, while Proverbot9001, the previous state-of-the-art, learning-based, proof-synthesis tool, can prove 17%. Cobblestone establishes a new state of the art for fully automated proof synthesis tools for Coq. We also evaluate Cobblestone in a setting where it is given external partial proof progress from oracles, serving as proxies for a human proof engineer or another tool. When the theorem is broken down into a set of subgoals and Cobblestone is given a set of relevant lemmas already proven in the project, it can prove up to 58% of the theorems. We qualitatively study the theorems Cobblestone is and is not able to prove to outline potential future research directions to further improve proof synthesis, including developing interactive, semi-automated tools. Our research shows that tools can make better use of partial progress made during proof synthesis to more effectively automate formal verification. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 502,578 |
2305.10114 | Automatic Hyperparameter Tuning in Sparse Matrix Factorization | We study the problem of hyperparameter tuning in sparse matrix factorization under a Bayesian framework. In prior work, an analytical solution of sparse matrix factorization with a Laplace prior was obtained by the variational Bayes method under several approximations. Based on this solution, we propose a novel numerical method of hyperparameter tuning by evaluating the zero point of the normalization factor in the sparse matrix prior. We also verify that our method shows excellent performance for ground-truth sparse matrix reconstruction by comparing it with the widely-used algorithm of sparse principal component analysis. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 364,913 |
2105.06977 | Do Context-Aware Translation Models Pay the Right Attention? | Context-aware machine translation models are designed to leverage contextual information, but often fail to do so. As a result, they inaccurately disambiguate pronouns and polysemous words that require context for resolution. In this paper, we ask several questions: What contexts do human translators use to resolve ambiguous words? Are models paying large amounts of attention to the same context? What if we explicitly train them to do so? To answer these questions, we introduce SCAT (Supporting Context for Ambiguous Translations), a new English-French dataset comprising supporting context words for 14K translations that professional translators found useful for pronoun disambiguation. Using SCAT, we perform an in-depth analysis of the context used to disambiguate, examining positional and lexical characteristics of the supporting words. Furthermore, we measure the degree of alignment between the model's attention scores and the supporting context from SCAT, and apply a guided attention strategy to encourage agreement between the two. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 235,281 |
2111.06593 | Using Deep Learning Sequence Models to Identify SARS-CoV-2 Divergence | SARS-CoV-2 is an upper respiratory system RNA virus that, as of May 2021, has caused over 3 million deaths and infected over 150 million people worldwide. With thousands of strains sequenced to date, SARS-CoV-2 mutations pose significant challenges for scientists keeping pace with vaccine development and public health measures. Therefore, an efficient method of identifying the divergence of lab samples from patients would greatly aid the documentation of SARS-CoV-2 genomics. In this study, we propose a neural network model that leverages recurrent and convolutional units to directly take in amino acid sequences of spike proteins and classify corresponding clades. We also compared our model's performance with Bidirectional Encoder Representations from Transformers (BERT) pre-trained on a protein database. Our approach has the potential to provide a more computationally efficient alternative to current homology-based intra-species differentiation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 266,118 |
1911.07323 | Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks | Graph convolutional networks (GCNs) have recently received wide attention, due to their successful applications in different graph tasks and different domains. Training GCNs for a large graph, however, is still a challenge. Original full-batch GCN training requires calculating the representation of all the nodes in the graph per GCN layer, which brings in high computation and memory costs. To alleviate this issue, several sampling-based methods have been proposed to train GCNs on a subset of nodes. Among them, the node-wise neighbor-sampling method recursively samples a fixed number of neighbor nodes, and thus its computation cost suffers from an exponentially growing neighbor size; while the layer-wise importance-sampling method discards the neighbor-dependent constraints, and thus the nodes sampled across layers suffer from a sparse connection problem. To deal with the above two problems, we propose a new effective sampling algorithm called LAyer-Dependent ImportancE Sampling (LADIES). Based on the sampled nodes in the upper layer, LADIES selects their neighborhood nodes, constructs a bipartite subgraph and computes the importance probability accordingly. Then, it samples a fixed number of nodes by the calculated probability, and recursively conducts this procedure per layer to construct the whole computation graph. We show theoretically and experimentally that our proposed sampling algorithm outperforms the previous sampling methods in terms of both time and memory costs. Furthermore, LADIES is shown to have better generalization accuracy than original full-batch GCN, due to its stochastic nature. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 153,807 |
2204.12022 | Estimating the Resize Parameter in End-to-end Learned Image Compression | We describe a search-free resizing framework that can further improve the rate-distortion tradeoff of recent learned image compression models. Our approach is simple: compose a pair of differentiable downsampling/upsampling layers that sandwich a neural compression model. To determine resize factors for different inputs, we utilize another neural network jointly trained with the compression model, with the end goal of minimizing the rate-distortion objective. Our results suggest that "compression friendly" downsampled representations can be quickly determined during encoding by using an auxiliary network and differentiable image warping. By conducting extensive experimental tests on existing deep image compression models, we show that our new resizing parameter estimation framework can provide a Bj{\o}ntegaard-Delta rate (BD-rate) improvement of about 10% against leading perceptual quality engines. We also carried out a subjective quality study, the results of which show that our new approach yields favorable compressed images. To facilitate reproducible research in this direction, the implementation used in this paper is being made freely available online at: https://github.com/treammm/ResizeCompression. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 293,339 |
2311.00260 | Incentivized Collaboration in Active Learning | In collaborative active learning, where multiple agents try to learn labels from a common hypothesis, we introduce an innovative framework for incentivized collaboration. Here, rational agents aim to obtain labels for their data sets while keeping label complexity at a minimum. We focus on designing (strict) individually rational (IR) collaboration protocols, ensuring that agents cannot reduce their expected label complexity by acting individually. We first show that given any optimal active learning algorithm, the collaboration protocol that runs the algorithm as is over the entire data is already IR. However, computing the optimal algorithm is NP-hard. We therefore provide collaboration protocols that achieve (strict) IR and are comparable with the best known tractable approximation algorithm in terms of label complexity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 404,567 |
2101.11217 | Automated Crop Field Surveillance using Computer Vision | Artificial Intelligence is everywhere today. Unfortunately, agriculture has not received that much attention from Artificial Intelligence (AI), and a lack of automation persists in the agriculture industry. For many years, farmers and crop field owners have faced the problem of trespassing wild animals, for which no feasible solution has been provided. Installing a fence or barrier-like structure is neither feasible nor efficient due to the large areas covered by the fields. Even if the landowner can afford to build a wall or barrier, government policies for building walls are often very burdensome. This paper presents a simple, intelligible solution to the problem: automated crop field surveillance using computer vision. The solution will significantly reduce the cost of crops destroyed annually and completely automate the security of the field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 217,199 |
2405.12247 | Focus on Low-Resolution Information: Multi-Granular Information-Lossless Model for Low-Resolution Human Pose Estimation | In real-world applications of human pose estimation, low-resolution input images are frequently encountered when the performance of the image acquisition equipment is limited or the shooting distance is too far. However, existing state-of-the-art models for human pose estimation perform poorly on low-resolution images. One key reason is the presence of downsampling layers in these models, e.g., strided convolutions and pooling layers. It further reduces the already insufficient image information. Another key reason is that the body skeleton and human kinematic information are not fully utilized. In this work, we propose a Multi-Granular Information-Lossless (MGIL) model to replace the downsampling layers to address the above issues. Specifically, MGIL employs a Fine-grained Lossless Information Extraction (FLIE) module, which can prevent the loss of local information. Furthermore, we design a Coarse-grained Information Interaction (CII) module to adequately leverage human body structural information. To efficiently fuse cross-granular information and thoroughly exploit the relationships among keypoints, we further introduce a Multi-Granular Adaptive Fusion (MGAF) mechanism. The mechanism assigns weights to features of different granularities based on the content of the image. The model is effective, flexible, and universal. We show its potential in various vision tasks with comprehensive experiments. It outperforms the SOTA methods by 7.7 mAP on COCO and performs well with different input resolutions, different backbones, and different vision tasks. The code is provided in supplementary material. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 455,457 |
2405.07369 | Incorporating Anatomical Awareness for Enhanced Generalizability and Progression Prediction in Deep Learning-Based Radiographic Sacroiliitis Detection | Purpose: To examine whether incorporating anatomical awareness into a deep learning model can improve generalizability and enable prediction of disease progression. Methods: This retrospective multicenter study included conventional pelvic radiographs of 4 different patient cohorts focusing on axial spondyloarthritis (axSpA) collected at university and community hospitals. The first cohort, which consisted of 1483 radiographs, was split into training (n=1261) and validation (n=222) sets. The other cohorts comprising 436, 340, and 163 patients, respectively, were used as independent test datasets. For the second cohort, follow-up data of 311 patients was used to examine progression prediction capabilities. Two neural networks were trained, one on images cropped to the bounding box of the sacroiliac joints (anatomy-aware) and the other one on full radiographs. The performance of the models was compared using the area under the receiver operating characteristic curve (AUC), accuracy, sensitivity, and specificity. Results: On the three test datasets, the standard model achieved AUC scores of 0.853, 0.817, 0.947, with an accuracy of 0.770, 0.724, 0.850, whereas the anatomy-aware model achieved AUC scores of 0.899, 0.846, 0.957, with an accuracy of 0.821, 0.744, 0.906, respectively. The patients who were identified as high risk by the anatomy aware model had an odds ratio of 2.16 (95% CI: 1.19, 3.86) for having progression of radiographic sacroiliitis within 2 years. Conclusion: Anatomical awareness can improve the generalizability of a deep learning model in detecting radiographic sacroiliitis. The model is published as fully open source alongside this study. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 453,685 |
2110.14578 | Spatio-Temporal Federated Learning for Massive Wireless Edge Networks | This paper presents a novel approach to conduct highly efficient federated learning (FL) over a massive wireless edge network, where an edge server and numerous mobile devices (clients) jointly learn a global model without transporting the huge amount of data collected by the mobile devices to the edge server. The proposed FL approach is referred to as spatio-temporal FL (STFL), which jointly exploits the spatial and temporal correlations between the learning updates from different mobile devices scheduled to join STFL in various training epochs. The STFL model not only represents the realistic intermittent learning behavior from the edge server to the mobile devices due to data delivery outage, but also features a mechanism of compensating loss learning updates in order to mitigate the impacts of intermittent learning. An analytical framework of STFL is proposed and employed to study the learning capability of STFL via its convergence performance. In particular, we have assessed the impact of data delivery outage, intermittent learning mitigation, and statistical heterogeneity of datasets on the convergence performance of STFL. The results provide crucial insights into the design and analysis of STFL-based wireless networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 263,575 |
2410.20639 | A Comparative Study of Multiple Deep Learning Algorithms for Efficient Localization of Bone Joints in the Upper Limbs of Human Body | This paper addresses the medical imaging problem of joint detection in the upper limbs, viz. elbow, shoulder, wrist and finger joints. Localization of joints from X-Ray and Computerized Tomography (CT) scans is an essential step for the assessment of various bone-related medical conditions like Osteoarthritis and Rheumatoid Arthritis, and can even be used for automated bone fracture detection. Automated joint localization also detects the corresponding bones and can serve as input to deep learning-based models used for the computerized diagnosis of the aforementioned medical disorders. This increases the accuracy of prediction and aids the radiologists with analyzing the scans, which is quite a complex and exhausting task. This paper provides a detailed comparative study between diverse Deep Learning (DL) models - YOLOv3, YOLOv7, EfficientDet and CenterNet - in detecting multiple bone joints in the upper limbs of the human body. The research analyses the performance of different DL models mathematically, graphically and visually. These models are trained and tested on a portion of the openly available MURA (musculoskeletal radiographs) dataset. The study found that the best Mean Average Precision (mAP at 0.5:0.95) values of YOLOv3, YOLOv7, EfficientDet and CenterNet are 35.3, 48.3, 46.5 and 45.9 respectively. It was also found that YOLOv7 performed best at accurately predicting the bounding boxes while YOLOv3 performed worst in the Visual Analysis test. Code is available at https://github.com/Sohambasu07/BoneJointsLocalization | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 502,895 |
2102.06525 | Leveraging Reinforcement Learning for evaluating Robustness of KNN Search Algorithms | The problem of finding the K nearest neighbors of a given query point in a dataset has been studied for several years. In very high-dimensional spaces, K-nearest neighbor search (KNNS) suffers from the complexity of computing high-dimensional distances. Due to the curse of dimensionality, it becomes difficult to reliably depend on the results of the various approximate nearest neighbor search approaches. In this paper, we survey some novel K-Nearest Neighbor Search approaches that tackle the problem from the perspectives of computation, the accuracy of approximated results, and leveraging parallelism to speed up computation. We attempt to derive a relationship between the true positive and false points for a given KNNS approach. Finally, in order to evaluate the robustness of a KNNS approach against adversarial points, we propose a generic Reinforcement Learning based framework for this purpose. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,778 |
2111.11348 | Paris-CARLA-3D: A Real and Synthetic Outdoor Point Cloud Dataset for Challenging Tasks in 3D Mapping | Paris-CARLA-3D is a dataset of several dense colored point clouds of outdoor environments built by a mobile LiDAR and camera system. The data are composed of two sets with synthetic data from the open source CARLA simulator (700 million points) and real data acquired in the city of Paris (60 million points), hence the name Paris-CARLA-3D. One of the advantages of this dataset is to have simulated the same LiDAR and camera platform in the open source CARLA simulator as the one used to produce the real data. In addition, manual annotation of the classes using the semantic tags of CARLA was performed on the real data, allowing the testing of transfer methods from the synthetic to the real data. The objective of this dataset is to provide a challenging dataset to evaluate and improve methods on difficult vision tasks for the 3D mapping of outdoor environments: semantic segmentation, instance segmentation, and scene completion. For each task, we describe the evaluation protocol as well as the experiments carried out to establish a baseline. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 267,637 |
1910.09441 | DeepMNavigate: Deep Reinforced Multi-Robot Navigation Unifying Local & Global Collision Avoidance | We present a novel algorithm (DeepMNavigate) for global multi-agent navigation in dense scenarios using deep reinforcement learning (DRL). Our approach uses local and global information for each robot from motion information maps. We use a three-layer CNN that takes these maps as input to generate a suitable action to drive each robot to its goal position. Our approach is general, learns an optimal policy using a multi-scenario, multi-state training algorithm, and can directly handle raw sensor measurements for local observations. We demonstrate the performance on dense, complex benchmarks with narrow passages and environments with tens of agents. We highlight the algorithm's benefits over prior learning methods and geometric decentralized algorithms in complex scenarios. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | 150,193 |
2210.14893 | Superstabilizing Control of Discrete-Time ARX Models under Error in Variables | This paper applies a polynomial optimization based framework towards the superstabilizing control of an Autoregressive with Exogenous Input (ARX) model given noisy data observations. The recorded input and output values are corrupted with L-infinity bounded noise where the bounds are known. This is an instance of Error in Variables (EIV) in which the true internal state of the ARX system remains unknown. The consistency set of ARX models compatible with noisy data has a bilinearity between unknown plant parameters and unknown noise terms. The requirement for a dynamic compensator to superstabilize all consistent plants is expressed using polynomial nonnegativity constraints, and solved using sum-of-squares (SOS) methods in a converging hierarchy of semidefinite programs of increasing size. The computational complexity of this method may be reduced by applying a Theorem of Alternatives to eliminate the noise terms. Effectiveness of this method is demonstrated on control of example ARX models. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 326,722 |
2304.01205 | When Evolutionary Computation Meets Privacy | Recently, evolutionary computation (EC) has been promoted by machine learning, distributed computing, and big data technologies, resulting in new research directions of EC like distributed EC and surrogate-assisted EC. These advances have significantly improved the performance and the application scope of EC, but also trigger privacy leakages, such as the leakage of optimal results and surrogate model. Accordingly, evolutionary computation combined with privacy protection is becoming an emerging topic. However, privacy concerns in evolutionary computation lack a systematic exploration, especially for the object, motivation, position, and method of privacy protection. To this end, in this paper, we discuss three typical optimization paradigms (i.e., \textit{centralized optimization, distributed optimization, and data-driven optimization}) to characterize optimization modes of evolutionary computation and propose BOOM to sort out privacy concerns in evolutionary computation. Specifically, the centralized optimization paradigm allows clients to outsource optimization problems to the centralized server and obtain optimization solutions from the server. While the distributed optimization paradigm exploits the storage and computational power of distributed devices to solve optimization problems. Also, the data-driven optimization paradigm utilizes data collected in history to tackle optimization problems lacking explicit objective functions. Particularly, this paper adopts BOOM to characterize the object and motivation of privacy protection in three typical optimization paradigms and discusses potential privacy-preserving technologies balancing optimization performance and privacy guarantees in three typical optimization paradigms. Furthermore, this paper attempts to foresee some new research directions of privacy-preserving evolutionary computation. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | true | false | false | 355,973 |
2309.11086 | From Unstable Contacts to Stable Control: A Deep Learning Paradigm for HD-sEMG in Neurorobotics | In the past decade, there has been significant advancement in designing wearable neural interfaces for controlling neurorobotic systems, particularly bionic limbs. These interfaces function by decoding signals captured non-invasively from the skin's surface. Portable high-density surface electromyography (HD-sEMG) modules combined with deep learning decoding have attracted interest by achieving excellent gesture prediction and myoelectric control of prosthetic systems and neurorobots. However, factors like pixel-shape electrode size and unstable skin contact make HD-sEMG susceptible to pixel electrode drops. The sparse electrode-skin disconnections rooted in issues such as low adhesion, sweating, hair blockage, and skin stretch challenge the reliability and scalability of these modules as the perception unit for neurorobotic systems. This paper proposes a novel deep-learning model providing resiliency for HD-sEMG modules, which can be used in the wearable interfaces of neurorobots. The proposed 3D Dilated Efficient CapsNet model trains on an augmented input space to computationally `force' the network to learn channel dropout variations and thus learn robustness to channel dropout. The proposed framework maintained high performance in the sensor dropout reliability study conducted here. Results show that conventional models' performance significantly degrades with dropout and is recovered using the proposed architecture and training paradigm. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 393,273 |
2306.06823 | Weakly supervised information extraction from inscrutable handwritten document images | State-of-the-art information extraction methods are limited by OCR errors. They work well for printed text in form-like documents, but unstructured, handwritten documents still remain a challenge. Adapting existing models to domain-specific training data is quite expensive, because of two factors, 1) limited availability of the domain-specific documents (such as handwritten prescriptions, lab notes, etc.), and 2) annotations become even more challenging as one needs domain-specific knowledge to decode inscrutable handwritten document images. In this work, we focus on the complex problem of extracting medicine names from handwritten prescriptions using only weakly labeled data. The data consists of images along with the list of medicine names in it, but not their location in the image. We solve the problem by first identifying the regions of interest, i.e., medicine lines from just weak labels and then injecting a domain-specific medicine language model learned using only synthetically generated data. Compared to off-the-shelf state-of-the-art methods, our approach performs >2.5x better in medicine names extraction from prescriptions. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 372,767 |
2302.10150 | Information Retrieval in long documents: Word clustering approach for improving Semantics | In this paper, we propose an alternative to deep neural networks for semantic information retrieval in the case of long documents. This new approach exploits clustering techniques to take into account the meaning of words in Information Retrieval systems targeting long as well as short documents. It uses a specially designed clustering algorithm to group words with similar meanings into clusters. The dual representation (lexical and semantic) of documents and queries is based on the vector space model proposed by Gerard Salton, applied in the vector space constituted by the formed clusters. Our proposal is original at several levels: first, we propose an efficient algorithm for the construction of clusters of semantically close words using word embeddings as input; then, we define a formula for weighting these clusters; finally, we propose a function that efficiently combines the meanings of words with a lexical model widely used in Information Retrieval. The evaluation of our proposal in three contexts with two different datasets, SQuAD and TREC-CAR, has shown that it significantly improves on classical approaches based only on keywords, without degrading the lexical aspect. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 346,689 |
2001.08981 | Variants of Partial Update Augmented CLMS Algorithm and Their Performance Analysis | Naturally complex-valued information, or information presented in the complex domain, is effectively processed by the augmented complex least-mean-square (ACLMS) algorithm. In some applications, the ACLMS algorithm may be too computationally- and memory-intensive to implement. In this paper, a new algorithm, termed the partial-update ACLMS (PU-ACLMS) algorithm, is proposed, where only a fraction of the coefficient set is selected to update at each iteration. Doing so, two types of partial-update schemes are presented, referred to as the sequential and stochastic partial-updates, to reduce computational load and power consumption in the corresponding adaptive filter. The computational costs of full-update ACLMS and its partial-update implementations are discussed. Next, the steady-state mean and mean-square performance of PU-ACLMS for non-circular complex signals are analyzed and closed-form expressions of the steady-state excess mean-square error (EMSE) and mean-square deviation (MSD) are given. Then, employing the weighted energy-conservation relation, the EMSE and MSD learning curves are derived. The simulation results are verified against theoretical predictions through numerical examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 161,446 |
2310.06404 | Hexa: Self-Improving for Knowledge-Grounded Dialogue System | A common practice in knowledge-grounded dialogue generation is to explicitly utilize intermediate steps (e.g., web-search, memory retrieval) with modular approaches. However, data for such steps are often inaccessible compared to those of dialogue responses as they are unobservable in an ordinary dialogue. To fill in the absence of these data, we develop a self-improving method to improve the generative performances of intermediate steps without the ground truth data. In particular, we propose a novel bootstrapping scheme with a guided prompt and a modified loss function to enhance the diversity of appropriate self-generated responses. Through experiments on various benchmark datasets, we empirically demonstrate that our method successfully leverages a self-improving mechanism in generating intermediate and final responses and improves the performances on the task of knowledge-grounded dialogue generation. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 398,565 |
1912.05743 | Exploratory Not Explanatory: Counterfactual Analysis of Saliency Maps for Deep Reinforcement Learning | Saliency maps are frequently used to support explanations of the behavior of deep reinforcement learning (RL) agents. However, a review of how saliency maps are used in practice indicates that the derived explanations are often unfalsifiable and can be highly subjective. We introduce an empirical approach grounded in counterfactual reasoning to test the hypotheses generated from saliency maps and assess the degree to which they correspond to the semantics of RL environments. We use Atari games, a common benchmark for deep RL, to evaluate three types of saliency maps. Our results show the extent to which existing claims about Atari games can be evaluated and suggest that saliency maps are best viewed as an exploratory tool rather than an explanatory tool. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,182 |
2212.09034 | Graph Neural Networks are Inherently Good Generalizers: Insights by Bridging GNNs and MLPs | Graph neural networks (GNNs), as the de-facto model class for representation learning on graphs, are built upon the multi-layer perceptrons (MLP) architecture with additional message passing layers to allow features to flow across nodes. While conventional wisdom commonly attributes the success of GNNs to their advanced expressivity, we conjecture that this is not the main cause of GNNs' superiority in node-level prediction tasks. This paper pinpoints the major source of GNNs' performance gain to their intrinsic generalization capability, by introducing an intermediate model class dubbed as P(ropagational)MLP, which is identical to standard MLP in training, but then adopts GNN's architecture in testing. Intriguingly, we observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient in training. This finding provides new insights into the learning behavior of GNNs, and can be used as an analytic tool for dissecting various GNN-related research problems. As an initial step to analyze the inherent generalizability of GNNs, we show the essential difference between MLP and PMLP in the infinite-width limit lies in the NTK feature map in the post-training stage. Moreover, by examining their extrapolation behavior, we find that though many GNNs and their PMLP counterparts cannot extrapolate non-linear functions for extremely out-of-distribution samples, they have greater potential to generalize to testing samples near the training data range as natural advantages of GNN architectures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 336,972 |
1609.03912 | Information Theoretic Structure Learning with Confidence | Information theoretic measures (e.g. the Kullback-Leibler divergence and Shannon mutual information) have been used for exploring possibly nonlinear multivariate dependencies in high dimension. If these dependencies are assumed to follow a Markov factor graph model, this exploration process is called structure discovery. For discrete-valued samples, estimates of the information divergence over the parametric class of multinomial models lead to structure discovery methods whose mean squared error achieves parametric convergence rates as the sample size grows. However, a naive application of this method to continuous nonparametric multivariate models converges much more slowly. In this paper we introduce a new method for nonparametric structure discovery that uses weighted ensemble divergence estimators that achieve parametric convergence rates and obey an asymptotic central limit theorem that facilitates hypothesis testing and other types of statistical validation. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 60,941 |
1812.03565 | The Gap Between Model-Based and Model-Free Methods on the Linear
Quadratic Regulator: An Asymptotic Viewpoint | The effectiveness of model-based versus model-free methods is a long-standing question in reinforcement learning (RL). Motivated by recent empirical success of RL on continuous control tasks, we study the sample complexity of popular model-based and model-free algorithms on the Linear Quadratic Regulator (LQR). We show that for policy evaluation, a simple model-based plugin method requires asymptotically fewer samples than the classical least-squares temporal difference (LSTD) estimator to reach the same quality of solution; the sample complexity gap between the two methods can be at least a factor of state dimension. For policy optimization, we study a simple family of problem instances and show that nominal (certainty equivalence principle) control also requires several factors of state and input dimension fewer samples than the policy gradient method to reach the same level of control performance on these instances. Furthermore, the gap persists even when employing commonly used baselines. To the best of our knowledge, this is the first theoretical result which demonstrates a separation in the sample complexity between model-based and model-free methods on a continuous control task. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,048
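As an illustration of the model-based plugin idea for LQR policy evaluation described above, the following hedged sketch fits the dynamics (A, B) by least squares and then solves a discrete Lyapunov equation for the value matrix of a fixed linear policy. The helper names and the use of SciPy's solver are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def fit_dynamics(states, inputs, next_states):
    # Least-squares ("plugin") estimate of (A, B) from transitions x' ~ A x + B u.
    Z = np.hstack([states, inputs])                      # shape (T, n + m)
    Theta, *_ = np.linalg.lstsq(Z, next_states, rcond=None)
    n = states.shape[1]
    return Theta[:n].T, Theta[n:].T                      # A_hat, B_hat

def evaluate_policy(A, B, K, Q, R):
    # The value matrix P of the policy u = K x solves the discrete Lyapunov equation
    #   P = (A + B K)^T P (A + B K) + Q + K^T R K.
    A_cl = A + B @ K
    return solve_discrete_lyapunov(A_cl.T, Q + K.T @ R @ K)
```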
1509.01770 | Theoretical and Experimental Analyses of Tensor-Based Regression and
Classification | We theoretically and experimentally investigate tensor-based regression and classification. Our focus is regularization with various tensor norms, including the overlapped trace norm, the latent trace norm, and the scaled latent trace norm. We first give dual optimization methods using the alternating direction method of multipliers, which is computationally efficient when the number of training samples is moderate. We then theoretically derive an excess risk bound for each tensor norm and clarify their behavior. Finally, we perform extensive experiments using simulated and real data and demonstrate the superiority of tensor-based learning methods over vector- and matrix-based learning methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 46,652 |
2310.08085 | Low-Resource Clickbait Spoiling for Indonesian via Question Answering | Clickbait spoiling aims to generate a short text to satisfy the curiosity induced by a clickbait post. As it is a newly introduced task, the dataset is only available in English so far. Our contributions include the construction of a manually labeled clickbait spoiling corpus in Indonesian and an evaluation of cross-lingual zero-shot question answering-based models for tackling clickbait spoiling in a low-resource language like Indonesian. We utilize a selection of multilingual language models. The experimental results suggest that the XLM-RoBERTa (large) model outperforms other models for phrase and passage spoilers, while the mDeBERTa (base) model outperforms other models for multipart spoilers. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 399,256
2403.09039 | Detecting Anomalies in Dynamic Graphs via Memory enhanced Normality | Anomaly detection in dynamic graphs presents a significant challenge due to the temporal evolution of graph structures and attributes. The conventional approaches that tackle this problem typically employ an unsupervised learning framework, capturing normality patterns with exclusive normal data during training and identifying deviations as anomalies during testing. However, these methods face critical drawbacks: they either only depend on proxy tasks for representation without directly pinpointing normal patterns, or they neglect to differentiate between spatial and temporal normality patterns. More recent methods that use contrastive learning with negative sampling also face high computational costs, limiting their scalability to large graphs. To address these challenges, we introduce a novel Spatial-Temporal memories-enhanced graph autoencoder (STRIPE). Initially, STRIPE employs Graph Neural Networks (GNNs) and gated temporal convolution layers to extract spatial and temporal features. Then STRIPE incorporates separate spatial and temporal memory networks to capture and store prototypes of normal patterns, respectively. These stored patterns are retrieved and integrated with encoded graph embeddings through a mutual attention mechanism. Finally, the integrated features are fed into the decoder to reconstruct the graph streams which serve as the proxy task for anomaly detection. This comprehensive approach not only minimizes reconstruction errors but also emphasizes the compactness and distinctiveness of the embeddings w.r.t. the nearest memory prototypes. Extensive experiments on six benchmark datasets demonstrate the effectiveness and efficiency of STRIPE, where STRIPE significantly outperforms existing methods with 5.8% improvement in AUC scores and 4.62X faster in training time. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 437,599 |
1709.06653 | Unique Information via Dependency Constraints | The partial information decomposition (PID) is perhaps the leading proposal for resolving information shared between a set of sources and a target into redundant, synergistic, and unique constituents. Unfortunately, the PID framework has been hindered by a lack of a generally agreed-upon, multivariate method of quantifying the constituents. Here, we take a step toward rectifying this by developing a decomposition based on a new method that quantifies unique information. We first develop a broadly applicable method---the dependency decomposition---that delineates how statistical dependencies influence the structure of a joint distribution. The dependency decomposition then allows us to define a measure of the information about a target that can be uniquely attributed to a particular source as the least amount which the source-target statistical dependency can influence the information shared between the sources and the target. The result is the first measure that satisfies the core axioms of the PID framework while not satisfying the Blackwell relation, which depends on a particular interpretation of how the variables are related. This marks a key step toward a practical PID. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 81,141
2212.10762 | AgAsk: An Agent to Help Answer Farmer's Questions From Scientific
Documents | Decisions in agriculture are increasingly data-driven; however, valuable agricultural knowledge is often locked away in free-text reports, manuals and journal articles. Specialised search systems are needed that can mine agricultural information to provide relevant answers to users' questions. This paper presents AgAsk -- an agent able to answer natural language agriculture questions by mining scientific documents. We carefully survey and analyse farmers' information needs. On the basis of these needs we release an information retrieval test collection comprising real questions, a large collection of scientific documents split into passages, and ground truth relevance assessments indicating which passages are relevant to each question. We implement and evaluate a number of information retrieval models to answer farmers' questions, including two state-of-the-art neural ranking models. We show that neural rankers are highly effective at matching passages to questions in this context. Finally, we propose a deployment architecture for AgAsk that includes a client based on the Telegram messaging platform and a retrieval model deployed on commodity hardware. The test collection we provide is intended to stimulate more research in methods to match natural language to answers in scientific documents. While the retrieval models were evaluated in the agriculture domain, they are generalisable and of interest to others working on similar problems. The test collection is available at: \url{https://github.com/ielab/agvaluate}. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 337,593
2012.09390 | Classifying Sequences of Extreme Length with Constant Memory Applied to
Malware Detection | Recent works within machine learning have been tackling inputs of ever-increasing size, with cybersecurity presenting sequence classification problems of particularly extreme lengths. In the case of Windows executable malware detection, inputs may exceed $100$ MB, which corresponds to a time series with $T=100,000,000$ steps. To date, the closest approach to handling such a task is MalConv, a convolutional neural network capable of processing up to $T=2,000,000$ steps. The $\mathcal{O}(T)$ memory of CNNs has prevented further application of CNNs to malware. In this work, we develop a new approach to temporal max pooling that makes the required memory invariant to the sequence length $T$. This makes MalConv $116\times$ more memory efficient, and up to $25.8\times$ faster to train on its original dataset, while removing the input length restrictions to MalConv. We re-invest these gains into improving the MalConv architecture by developing a new Global Channel Gating design, giving us an attention mechanism capable of learning feature interactions across 100 million time steps in an efficient manner, a capability lacked by the original MalConv CNN. Our implementation can be found at https://github.com/NeuromorphicComputationResearchProgram/MalConv2 | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 212,056 |
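The key point above is that a global temporal max pooling can be computed in a streaming fashion, so peak memory does not grow with the sequence length T. The sketch below is a simplified, hypothetical illustration of that idea only: it ignores convolution effects across chunk boundaries and is not MalConv2's actual pooling algorithm.

```python
import torch
import torch.nn as nn

def streaming_global_max(byte_seq, embed, conv, chunk=2**20):
    # Global per-channel max over time, computed chunk by chunk so peak memory
    # stays O(chunk) rather than O(T). Chunks shorter than the kernel and
    # features straddling chunk boundaries are ignored in this simplification.
    running_max = None
    for start in range(0, byte_seq.shape[1] - conv.kernel_size[0] + 1, chunk):
        x = embed(byte_seq[:, start:start + chunk])      # (B, L, E)
        h = conv(x.transpose(1, 2))                      # (B, C, L')
        m = h.max(dim=-1).values                         # (B, C)
        running_max = m if running_max is None else torch.maximum(running_max, m)
    return running_max

embed = nn.Embedding(257, 8)
conv = nn.Conv1d(8, 128, kernel_size=512, stride=512)
raw_bytes = torch.randint(0, 257, (1, 4_000_000))
print(streaming_global_max(raw_bytes, embed, conv).shape)  # torch.Size([1, 128])
```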
1910.13427 | Distribution Density, Tails, and Outliers in Machine Learning: Metrics
and Applications | We develop techniques to quantify the degree to which a given (training or testing) example is an outlier in the underlying distribution. We evaluate five methods to score examples in a dataset by how well-represented the examples are, for different plausible definitions of "well-represented", and apply these to four common datasets: MNIST, Fashion-MNIST, CIFAR-10, and ImageNet. Despite being independent approaches, we find all five are highly correlated, suggesting that the notion of being well-represented can be quantified. Among other uses, we find these methods can be combined to identify (a) prototypical examples (that match human expectations); (b) memorized training examples; and, (c) uncommon submodes of the dataset. Further, we show how we can utilize our metrics to determine an improved ordering for curriculum learning, and impact adversarial robustness. We release all metric values on training and test sets we studied. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,389 |
2208.12834 | Improving the Efficiency of Gradient Descent Algorithms Applied to
Optimization Problems with Dynamical Constraints | We introduce two block coordinate descent algorithms for solving optimization problems with ordinary differential equations (ODEs) as dynamical constraints. The algorithms do not need to implement direct or adjoint sensitivity analysis methods to evaluate loss function gradients. They result from a reformulation of the original problem as an equivalent optimization problem with equality constraints. The algorithms naturally follow from steps aimed at recovering the gradient-descent algorithm based on ODE solvers that explicitly account for sensitivity of the ODE solution. In our first proposed algorithm we avoid explicitly solving the ODE by integrating the ODE solver as a sequence of implicit constraints. In our second algorithm, we use an ODE solver to reset the ODE solution, but no direct or adjoint sensitivity analysis methods are used. Both algorithms accept mini-batch implementations and show significant efficiency benefits from GPU-based parallelization. We demonstrate the performance of the algorithms when applied to learning the parameters of the Cucker-Smale model. The algorithms are compared with gradient descent algorithms based on ODE solvers endowed with sensitivity analysis capabilities, for various state sizes, using Pytorch and Jax implementations. The experimental results demonstrate that the proposed algorithms are at least 4x faster than the Pytorch implementations, and at least 16x faster than Jax implementations. For large versions of the Cucker-Smale model, the Jax implementation is thousands of times faster than the sensitivity analysis-based implementation. In addition, our algorithms generate more accurate results both on training and test data. Such gains in computational efficiency are paramount for algorithms that implement real-time parameter estimation, such as diagnosis algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 314,867
2006.01966 | The Typology of Polysemy: A Multilingual Distributional Framework | Lexical semantic typology has identified important cross-linguistic generalizations about the variation and commonalities in polysemy patterns---how languages package up meanings into words. Recent computational research has enabled investigation of lexical semantics at a much larger scale, but little work has explored lexical typology across semantic domains, nor the factors that influence cross-linguistic similarities. We present a novel computational framework that quantifies semantic affinity, the cross-linguistic similarity of lexical semantics for a concept. Our approach defines a common multilingual semantic space that enables a direct comparison of the lexical expression of concepts across languages. We validate our framework against empirical findings on lexical semantic typology at both the concept and domain levels. Our results reveal an intricate interaction between semantic domains and extra-linguistic factors, beyond language phylogeny, that co-shape the typology of polysemy across languages. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 179,913 |
2210.11660 | Learning Action Duration and Synergy in Task Planning for Human-Robot
Collaboration | A good estimation of the actions' cost is key in task planning for human-robot collaboration. The duration of an action depends on agents' capabilities and the correlation between actions performed simultaneously by the human and the robot. This paper proposes an approach to learning actions' costs and coupling between actions executed concurrently by humans and robots. We leverage the information from past executions to learn the average duration of each action and a synergy coefficient representing the effect of an action performed by the human on the duration of the action performed by the robot (and vice versa). We implement the proposed method in a simulated scenario where both agents can access the same area simultaneously. Safety measures require the robot to slow down when the human is close, denoting a bad synergy of tasks operating in the same area. We show that our approach can learn such bad couplings so that a task planner can leverage this information to find better plans. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 325,397 |
2403.14510 | Universal Differential Equations as a Common Modeling Language for
Neuroscience | The unprecedented availability of large-scale datasets in neuroscience has spurred the exploration of artificial deep neural networks (DNNs) both as empirical tools and as models of natural neural systems. Their appeal lies in their ability to approximate arbitrary functions directly from observations, circumventing the need for cumbersome mechanistic modeling. However, without appropriate constraints, DNNs risk producing implausible models, diminishing their scientific value. Moreover, the interpretability of DNNs poses a significant challenge, particularly with the adoption of more complex expressive architectures. In this perspective, we argue for universal differential equations (UDEs) as a unifying approach for model development and validation in neuroscience. UDEs view differential equations as parameterizable, differentiable mathematical objects that can be augmented and trained with scalable deep learning techniques. This synergy facilitates the integration of decades of extensive literature in calculus, numerical analysis, and neural modeling with emerging advancements in AI into a potent framework. We provide a primer on this burgeoning topic in scientific machine learning and demonstrate how UDEs fill in a critical gap between mechanistic, phenomenological, and data-driven models in neuroscience. We outline a flexible recipe for modeling neural systems with UDEs and discuss how they can offer principled solutions to inherent challenges across diverse neuroscience applications such as understanding neural computation, controlling neural systems, neural decoding, and normative modeling. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 440,110 |
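A minimal, hypothetical sketch of a universal differential equation in the sense described above: a known mechanistic term plus a small neural-network correction, integrated with an explicit Euler loop so that gradients flow to both the mechanistic parameter and the network via autograd. All names and the toy loss are illustrative assumptions.

```python
import torch
import torch.nn as nn

class UDE(nn.Module):
    """dx/dt = known mechanistic term + neural-network correction."""
    def __init__(self, dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 32), nn.Tanh(), nn.Linear(32, dim))
        self.decay = nn.Parameter(torch.tensor(0.5))   # interpretable mechanistic parameter

    def rhs(self, x):
        return -self.decay * x + self.net(x)           # universal differential equation RHS

    def forward(self, x0, dt=0.01, steps=100):
        # Explicit Euler integration; every step is differentiable.
        xs, x = [x0], x0
        for _ in range(steps):
            x = x + dt * self.rhs(x)
            xs.append(x)
        return torch.stack(xs)                          # (steps + 1, batch, dim)

model = UDE()
x0 = torch.randn(8, 2)
observed = torch.zeros(101, 8, 2)                       # placeholder trajectory data
loss = ((model(x0) - observed) ** 2).mean()
loss.backward()                                         # trains both the net and `decay`
```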
1904.05724 | Improving SIEM for Critical SCADA Water Infrastructures Using Machine
Learning | Networked Control Systems (NCS) have been used in many industrial processes. They aim to reduce the human factor burden and efficiently handle the complex processes and communication of those systems. Supervisory control and data acquisition (SCADA) systems are used in industrial, infrastructure and facility processes (e.g. manufacturing, fabrication, oil and water pipelines, building ventilation, etc.). Like other Internet of Things (IoT) implementations, SCADA systems are vulnerable to cyber-attacks; therefore, robust anomaly detection is a major requirement. However, building an accurate anomaly detection system is not an easy task, due to the difficulty of differentiating between cyber-attacks and system internal failures (e.g. hardware failures). In this paper, we present a model that detects anomaly events in a water system controlled by SCADA. Six Machine Learning techniques have been used in building and evaluating the model. The model classifies different anomaly events including hardware failures (e.g. sensor failures), sabotage and cyber-attacks (e.g. DoS and Spoofing). Unlike other detection systems, our proposed work focuses on notifying the operator when an anomaly occurs, along with a probability of the event occurring. This additional information helps in accelerating the mitigation process. The model is trained and tested using a real-world dataset. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 127,390
1409.8518 | ProbFuse: A Probabilistic Approach to Data Fusion | Data fusion is the combination of the results of independent searches on a document collection into one single output result set. It has been shown in the past that this can greatly improve retrieval effectiveness over that of the individual results. This paper presents probFuse, a probabilistic approach to data fusion. ProbFuse assumes that the performance of the individual input systems on a number of training queries is indicative of their future performance. The fused result set is based on probabilities of relevance calculated during this training process. Retrieval experiments using data from the TREC ad hoc collection demonstrate that probFuse achieves results superior to that of the popular CombMNZ fusion algorithm. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 36,417 |
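A hedged sketch of the probFuse idea described above: each input ranking is divided into segments, the probability of relevance per (system, segment) is estimated from training queries, and test-time documents are scored by summing those probabilities over the systems that returned them. Variable names are illustrative, and the published probFuse variants differ in details (e.g., additionally dividing by the segment number).

```python
from collections import defaultdict

def train_segment_probs(training_runs, qrels, num_segments=25):
    # training_runs[system][query] -> ranked list of doc ids
    # qrels[query] -> set of relevant doc ids for that training query
    probs = {}
    for system, runs in training_runs.items():
        rel, tot = [0] * num_segments, [0] * num_segments
        for query, ranking in runs.items():
            seg_len = max(1, len(ranking) // num_segments)
            for rank, doc in enumerate(ranking):
                k = min(rank // seg_len, num_segments - 1)
                tot[k] += 1
                rel[k] += doc in qrels[query]
        probs[system] = [r / t if t else 0.0 for r, t in zip(rel, tot)]
    return probs

def fuse(test_runs, probs, num_segments=25):
    # test_runs[system] -> ranked list for one unseen query
    scores = defaultdict(float)
    for system, ranking in test_runs.items():
        seg_len = max(1, len(ranking) // num_segments)
        for rank, doc in enumerate(ranking):
            k = min(rank // seg_len, num_segments - 1)
            scores[doc] += probs[system][k]   # published variants also divide by (k + 1)
    return sorted(scores, key=scores.get, reverse=True)
```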
2008.09394 | A Variational Approach to Unsupervised Sentiment Analysis | In this paper, we propose a variational approach to unsupervised sentiment analysis. Instead of using ground truth provided by domain experts, we use target-opinion word pairs as a supervision signal. For example, in a document snippet "the room is big," (room, big) is a target-opinion word pair. These word pairs can be extracted by using dependency parsers and simple rules. Our objective function is to predict an opinion word given a target word while our ultimate goal is to learn a sentiment classifier. By introducing a latent variable, i.e., the sentiment polarity, to the objective function, we can inject the sentiment classifier into the objective function via the evidence lower bound. We can learn a sentiment classifier by optimizing the lower bound. We also impose sophisticated constraints on opinion words as regularization, which encourages that if two documents have similar (dissimilar) opinion words, the sentiment classifiers should produce similar (different) probability distributions. We apply our method to sentiment analysis on customer reviews and clinical narratives. The experimental results show our method can outperform unsupervised baselines on the sentiment analysis task in both domains; our method obtains comparable results to the supervised method with hundreds of labels per aspect in the customer reviews domain, and comparable results to supervised methods in the clinical narratives domain. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 192,702
1105.1247 | Machine-Part cell formation through visual decipherable clustering of
Self Organizing Map | Machine-part cell formation is used in cellular manufacturing to process a large variety of parts with high quality, lower work-in-process levels, and reduced manufacturing lead-time and customer response time, while retaining flexibility for new products. This paper presents a novel approach for obtaining machine cells and part families. In cellular manufacturing, the fundamental problem is the formation of part families and machine cells. The present paper uses the Self-Organising Map (SOM) method, an unsupervised learning algorithm in Artificial Intelligence, as a visually decipherable clustering tool for machine-part cell formation. The objective of the paper is to cluster the binary machine-part matrix through visually decipherable clusters of SOM colour-coding and labelling via the SOM map nodes, in such a way that the part families are processed in those machine cells. The U-matrix, component plane, principal component projection, scatter plot and histogram of SOM have been reported in the present work for the successful visualization of the machine-part cell formation. Computational results with the proposed algorithm on a set of group technology problems available in the literature are also presented. The proposed SOM approach produced solutions with a grouping efficacy that is at least as good as any results earlier reported in the literature, improved the grouping efficacy for 70% of the problems, and is found to be immensely useful to both industry practitioners and researchers. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,270
1304.2344 | Induction and Uncertainty Management Techniques Applied to Veterinary
Medical Diagnosis | This paper discusses a project undertaken between the Departments of Computing Science, Statistics, and the College of Veterinary Medicine to design a medical diagnostic system. On-line medical data has been collected in the hospital database system for several years. A number of induction methods are being used to extract knowledge from the data in an attempt to improve upon simple diagnostic charts used by the clinicians. They also enhance the results of classical statistical methods - finding many more significant variables. The second part of the paper describes an essentially Bayesian method of evidence combination using fuzzy events at an initial step. Results are presented and comparisons are made with other methods. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,652 |
1501.00311 | QANUS: An Open-source Question-Answering Platform | In this paper, we motivate the need for a publicly available, generic software framework for question-answering (QA) systems. We present an open-source QA framework QANUS which researchers can leverage on to build new QA systems easily and rapidly. The framework implements much of the code that will otherwise have been repeated across different QA systems. To demonstrate the utility and practicality of the framework, we further present a fully functioning factoid QA system QA-SYS built on top of QANUS. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 38,977 |
2311.14533 | Introducing 3DCNN ResNets for ASD full-body kinematic assessment: a
comparison with hand-crafted features | Autism Spectrum Disorder (ASD) is characterized by challenges in social communication and restricted patterns, with motor abnormalities gaining traction for early detection. However, kinematic analysis in ASD is limited, often lacking robust validation and relying on hand-crafted features for single tasks, leading to inconsistencies across studies. End-to-end models have emerged as promising methods to overcome the need for feature engineering. Our aim is to propose a newly adapted 3DCNN ResNet and compare it to widely used hand-crafted features for motor ASD assessment. Specifically, we developed a virtual reality environment with multiple motor tasks and trained models using both approaches. We prioritized a reliable validation framework with repeated cross-validation. Results show the proposed model achieves a maximum accuracy of 85$\pm$3%, outperforming state-of-the-art end-to-end models with short 1-to-3 minute samples. Our comparative analysis with hand-crafted features shows feature-engineered models outperformed our end-to-end model in certain tasks. However, our end-to-end model achieved a higher mean AUC of 0.80$\pm$0.03. Additionally, statistical differences were found in model variance, with our end-to-end model providing more consistent results with less variability across all VR tasks, demonstrating domain generalization and reliability. These findings show that end-to-end models enable less variable and context-independent ASD classification without requiring domain knowledge or task specificity. However, they also confirm the effectiveness of hand-crafted features in specific task scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 410,142
2402.01104 | Simulation Framework for Vehicle and Electric Scooter Interaction | The number of shared micro-mobility services such as electric scooters (e-scooters) has an increasing trend due to the advantages of high efficiency and low cost in short-range travel in urban areas. However, due to the unique characteristics of moving behavior, it is commonly seen that e-scooters may share the road with other motor vehicles. The lack of protection may lead to severe injury for e-scooter riders. The scenario where an e-scooter crosses an intersection or makes a lane change while interacting with an approaching vehicle was commonly seen in real-life traffic data. Such scenarios are hazardous because the intention and behavior of the e-scooter may vary significantly based on the traffic environment conditions. Furthermore, some other vehicles may occlude the presence of the moving e-scooter, which can result in an unexpected collision. In this paper, we propose a simulation platform to mimic the interactions between vehicles and e-scooters. Several traffic scenarios are studied via qualitative and quantitative analysis. The proposed framework is shown to be valuable and efficient for the general risk analysis for vehicle and e-scooter interactions (VEI). | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 425,859 |
2305.19292 | Revisiting Random Forests in a Comparative Evaluation of Graph
Convolutional Neural Network Variants for Traffic Prediction | Traffic prediction is a spatiotemporal predictive task that plays an essential role in intelligent transportation systems. Today, graph convolutional neural networks (GCNNs) have become the prevailing models in the traffic prediction literature since they excel at extracting spatial correlations. In this work, we classify the components of successful GCNN prediction models and analyze the effects of matrix factorization, attention mechanism, and weight sharing on their performance. Furthermore, we compare these variations against random forests, a traditional regression method that predates GCNNs by over 15 years. We evaluated these methods using simulated data of two regions in Toronto as well as real-world sensor data from selected California highways. We found that incorporating matrix factorization, attention, and location-specific model weights either individually or collectively into GCNNs can result in a better overall performance. Moreover, although random forest regression is a less compact model, it matches or exceeds the performance of all variations of GCNNs in our experiments. This suggests that the current graph convolutional methods may not be the best approach to traffic prediction and there is still room for improvement. Finally, our findings also suggest that for future research on GCNN for traffic prediction to be credible, researchers must include performance comparison to random forests. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 369,462 |
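A minimal sketch of the random-forest baseline the authors argue should accompany GCNN comparisons, assuming a simple lagged-feature construction (the feature design and toy data are assumptions, not the paper's setup).

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def make_lagged_features(series, n_lags=12):
    # series: (T, n_sensors) speed readings; predict the next step for sensor 0.
    X, y = [], []
    for t in range(n_lags, len(series) - 1):
        X.append(series[t - n_lags:t].ravel())   # past n_lags steps, all sensors
        y.append(series[t + 1, 0])
    return np.asarray(X), np.asarray(y)

speeds = np.random.rand(500, 20) * 60 + 20       # toy data in km/h
X, y = make_lagged_features(speeds)
split = int(0.8 * len(X))
rf = RandomForestRegressor(n_estimators=200, random_state=0)
rf.fit(X[:split], y[:split])
print("test MAE:", np.abs(rf.predict(X[split:]) - y[split:]).mean())
```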
2207.05377 | On the Generalization for Transfer Learning: An Information-Theoretic
Analysis | Transfer learning, or domain adaptation, is concerned with machine learning problems in which training and testing data come from possibly different probability distributions. In this work, we give an information-theoretic analysis of the generalization error and excess risk of transfer learning algorithms. Our results suggest, perhaps as expected, that the Kullback-Leibler (KL) divergence $D(\mu\|\mu')$ plays an important role in the characterizations where $\mu$ and $\mu'$ denote the distribution of the training data and the testing data, respectively. Specifically, we provide generalization error and excess risk upper bounds for learning algorithms where data from both distributions are available in the training phase. Recognizing that the bounds could be sub-optimal in general, we provide improved excess risk upper bounds for a certain class of algorithms, including the empirical risk minimization (ERM) algorithm, by making stronger assumptions through the \textit{central condition}. To demonstrate the usefulness of the bounds, we further extend the analysis to the Gibbs algorithm and the noisy stochastic gradient descent method. We then generalize the mutual information bound with other divergences such as $\phi$-divergence and Wasserstein distance, which may lead to tighter bounds and can handle the case when $\mu$ is not absolutely continuous with respect to $\mu'$. Several numerical results are provided to demonstrate our theoretical findings. Lastly, to address the problem that the bounds are often not directly applicable in practice due to the absence of the distributional knowledge of the data, we develop an algorithm (called InfoBoost) that dynamically adjusts the importance weights for both source and target data based on certain information measures. The empirical results show the effectiveness of the proposed algorithm. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 307,522 |
1401.6634 | Hermitian Self-Dual Cyclic Codes of Length $p^a$ over $GR(p^2,s)$ | In this paper, we study cyclic codes over the Galois ring ${\rm GR}({p^2},s)$. The main result is the characterization and enumeration of Hermitian self-dual cyclic codes of length $p^a$ over ${\rm GR}({p^2},s)$. Combining with some known results and the standard Discrete Fourier Transform decomposition, we arrive at the characterization and enumeration of Euclidean self-dual cyclic codes of any length over ${\rm GR}({p^2},s)$. Some corrections to results on Euclidean self-dual cyclic codes of even length over $\mathbb{Z}_4$ in Discrete Appl. Math. 128, (2003), 27 and Des. Codes Cryptogr. 39, (2006), 127 are provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,384 |
1806.04185 | A Corpus with Multi-Level Annotations of Patients, Interventions and
Outcomes to Support Language Processing for Medical Literature | We present a corpus of 5,000 richly annotated abstracts of medical articles describing clinical randomized controlled trials. Annotations include demarcations of text spans that describe the Patient population enrolled, the Interventions studied and to what they were Compared, and the Outcomes measured (the `PICO' elements). These spans are further annotated at a more granular level, e.g., individual interventions within them are marked and mapped onto a structured medical vocabulary. We acquired annotations from a diverse set of workers with varying levels of expertise and cost. We describe our data collection process and the corpus itself in detail. We then outline a set of challenging NLP tasks that would aid searching of the medical literature and the practice of evidence-based medicine. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 100,166 |
2111.02824 | A unified concurrent-composition method to state/event inference and
concealment in discrete-event systems | Discrete-event systems usually consist of discrete states and transitions between them caused by spontaneous occurrences of labelled (aka partially-observed) events. Due to the partially-observed feature, fundamental properties therein could be classified into two categories: state/event-inference-based properties (e.g., strong detectability, diagnosability, and predictability) and state-concealment-based properties (e.g., opacity). Intuitively, the former category describes whether one can use observed output sequences to infer the current and subsequent states, past occurrences of faulty events, or future certain occurrences of faulty events; while the latter describes whether one cannot use observed output sequences to infer whether some secret states have been visited (that is, whether the DES can conceal the status that its secret states have been visited). Over the past two decades these properties were studied separately using different methods. In this review article, for labeled finite-state automata, a unified concurrent-composition method is shown to verify all above inference-based properties and concealment-based properties, resulting in a unified mathematical framework for the two categories of properties. In addition, compared with the previous methods in the literature, the concurrent-composition method does not depend on assumptions and is more efficient. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | 264,982 |
1805.11875 | Energy-Efficient Caching for Scalable Videos in Heterogeneous Networks | By suppressing repeated content deliveries, wireless caching has the potential to substantially improve the energy efficiency (EE) of the fifth generation (5G) communication networks. In this paper, we propose two novel energy-efficient caching schemes in heterogeneous networks, namely, scalable video coding (SVC)-based fractional caching and SVC-based random caching, which can provide on-demand video services with different perceptual qualities. We derive the expressions for successful transmission probabilities and ergodic service rates. Based on the derivations and the established power consumption models, the EE maximization problems are formulated for the two proposed caching schemes. By taking logarithmic approximations of the l0-norm, the problems are efficiently solved by the standard gradient projection method. Numerical results validate the theoretical analysis and demonstrate the superiority of our proposed caching schemes, compared to three benchmark strategies. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 99,039 |
2007.05811 | Efficient List Decoding of Convolutional Polar Codes | An efficient implementation of min-sum SC/list decoding of convolutional polar codes is proposed. The complexity of the proposed implementation of SC decoding is more than two times smaller than the straightforward implementation. Moreover, the proposed list decoding algorithm does not require to copy any LLRs during decoding. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 186,793 |
1804.01296 | Gaussian Process Uncertainty in Age Estimation as a Measure of Brain
Abnormality | Multivariate regression models for age estimation are a powerful tool for assessing abnormal brain morphology associated with neuropathology. Age prediction models are built on cohorts of healthy subjects and are designed to reflect normal aging patterns. The application of these multivariate models to diseased subjects usually results in high prediction errors, under the hypothesis that neuropathology presents a similar degenerative pattern as that of accelerated aging. In this work, we propose an alternative to the idea that pathology follows a similar trajectory to normal aging. Instead, we propose the use of metrics which measure deviations from the mean aging trajectory. We propose to measure these deviations using two different metrics: uncertainty in a Gaussian process regression model and a newly proposed age-weighted uncertainty measure. Consequently, our approach assumes that pathologic brain patterns are different from those of normal aging. We present results for subjects with autism, mild cognitive impairment and Alzheimer's disease to highlight the versatility of the approach to different diseases and age ranges. We evaluate volume, thickness, and VBM features for quantifying brain morphology. Our evaluations are performed on a large number of images obtained from a variety of publicly available neuroimaging databases. Across all features, our uncertainty-based measurements yield a better separation between diseased subjects and healthy individuals than the prediction error. Finally, we illustrate differences between the disease pattern and normal aging, supporting the application of uncertainty as a measure of neuropathology. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,203
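A minimal sketch of the uncertainty-as-abnormality idea described above, assuming scikit-learn's Gaussian process regressor: the model is fit on healthy subjects' brain features against age, and new subjects are scored by the predictive standard deviation. The toy data and names are assumptions; the paper's age-weighted variant is not reproduced here.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Toy "healthy cohort": brain morphology features (rows) and chronological ages.
rng = np.random.default_rng(0)
features_healthy = rng.normal(size=(200, 10))
ages_healthy = 20 + 60 * rng.random(200)

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(features_healthy, ages_healthy)

# For new subjects, the predictive standard deviation serves as the abnormality score.
features_new = rng.normal(size=(5, 10))
pred_age, pred_std = gp.predict(features_new, return_std=True)
print(np.c_[pred_age, pred_std])   # columns: predicted age, uncertainty score
```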
2409.11579 | HEARTS: A Holistic Framework for Explainable, Sustainable and Robust
Text Stereotype Detection | Stereotypes are generalised assumptions about societal groups, and even state-of-the-art LLMs using in-context learning struggle to identify them accurately. Due to the subjective nature of stereotypes, where what constitutes a stereotype can vary widely depending on cultural, social, and individual perspectives, robust explainability is crucial. Explainable models ensure that these nuanced judgments can be understood and validated by human users, promoting trust and accountability. We address these challenges by introducing HEARTS (Holistic Framework for Explainable, Sustainable, and Robust Text Stereotype Detection), a framework that enhances model performance, minimises carbon footprint, and provides transparent, interpretable explanations. We establish the Expanded Multi-Grain Stereotype Dataset (EMGSD), comprising 57,201 labelled texts across six groups, including under-represented demographics like LGBTQ+ and regional stereotypes. Ablation studies confirm that BERT models fine-tuned on EMGSD outperform those trained on individual components. We then analyse a fine-tuned, carbon-efficient ALBERT-V2 model using SHAP to generate token-level importance values, ensuring alignment with human understanding, and calculate explainability confidence scores by comparing SHAP and LIME outputs... | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 489,218 |
2309.15881 | Enhancing Cross-Category Learning in Recommendation Systems with
Multi-Layer Embedding Training | Modern DNN-based recommendation systems rely on training-derived embeddings of sparse features. Input sparsity makes obtaining high-quality embeddings for rarely-occurring categories harder as their representations are updated infrequently. We demonstrate a training-time technique to produce superior embeddings via effective cross-category learning and theoretically explain its surprising effectiveness. The scheme, termed the multi-layer embeddings training (MLET), trains embeddings using factorization of the embedding layer, with an inner dimension higher than the target embedding dimension. For inference efficiency, MLET converts the trained two-layer embedding into a single-layer one thus keeping inference-time model size unchanged. Empirical superiority of MLET is puzzling as its search space is not larger than that of the single-layer embedding. The strong dependence of MLET on the inner dimension is even more surprising. We develop a theory that explains both of these behaviors by showing that MLET creates an adaptive update mechanism modulated by the singular vectors of embeddings. When tested on multiple state-of-the-art recommendation models for click-through rate (CTR) prediction tasks, MLET consistently produces better models, especially for rare items. At constant model quality, MLET allows embedding dimension, and model size, reduction by up to 16x, and 5.8x on average, across the models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 395,151 |
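A hypothetical PyTorch sketch of the MLET idea described above: the embedding is trained as a factorization with inner dimension k larger than the target dimension d, then collapsed into a single table so inference-time size is unchanged. Class and variable names are illustrative.

```python
import torch
import torch.nn as nn

class FactorizedEmbedding(nn.Module):
    """Trains the embedding as a two-layer factorization with inner dim k > d."""
    def __init__(self, num_categories, d, k):
        super().__init__()
        self.U = nn.Embedding(num_categories, k)   # (N, k) lookup table
        self.V = nn.Linear(k, d, bias=False)       # k -> d projection

    def forward(self, ids):
        return self.V(self.U(ids))                 # two-layer path used in training

    def collapse(self):
        # Fold both layers into a single (N, d) table so inference-time model
        # size and lookup cost match an ordinary single-layer embedding.
        with torch.no_grad():
            table = self.U.weight @ self.V.weight.T
        return nn.Embedding.from_pretrained(table, freeze=False)

emb = FactorizedEmbedding(num_categories=1000, d=16, k=64)
ids = torch.randint(0, 1000, (8,))
assert torch.allclose(emb(ids), emb.collapse()(ids), atol=1e-6)
```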
2110.02095 | Exploring the Limits of Large Scale Pre-training | Recent developments in large-scale machine learning suggest that by scaling up data, model size and training time properly, one might observe that improvements in pre-training would transfer favorably to most downstream tasks. In this work, we systematically study this phenomenon and establish that, as we increase the upstream accuracy, the performance of downstream tasks saturates. In particular, we investigate more than 4800 experiments on Vision Transformers, MLP-Mixers and ResNets with numbers of parameters ranging from ten million to ten billion, trained on the largest scale of available image data (JFT, ImageNet21K) and evaluated on more than 20 downstream image recognition tasks. We propose a model for downstream performance that reflects the saturation phenomenon and captures the nonlinear relationship between upstream and downstream performance. Delving deeper to understand the reasons that give rise to these phenomena, we show that the saturation behavior we observe is closely related to the way that representations evolve through the layers of the models. We showcase an even more extreme scenario where upstream and downstream performance are at odds with each other. That is, to have a better downstream performance, we need to hurt upstream accuracy. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 259,004
2009.05478 | Projected Robust PCA with Application to Smooth Image Recovery | Most high-dimensional matrix recovery problems are studied under the assumption that the target matrix has certain intrinsic structures. For image-data-related matrix recovery problems, approximate low-rankness and smoothness are the two most commonly imposed structures. For approximately low-rank matrix recovery, the robust principal component analysis (PCA) is well studied and proven to be effective. For the smooth matrix problem, the 2D fused Lasso and other total-variation-based approaches have played a fundamental role. Although both low-rankness and smoothness are key assumptions for image data analysis, the two lines of research have had very limited interaction. Motivated by taking advantage of both features, we in this paper develop a framework named projected robust PCA (PRPCA), under which the low-rank matrices are projected onto a space of smooth matrices. Consequently, a large class of image matrices can be decomposed as a low-rank and smooth component plus a sparse component. A key advantage of this decomposition is that the dimension of the core low-rank component can be significantly reduced. Consequently, our framework is able to address a problematic bottleneck of many low-rank matrix problems: singular value decomposition (SVD) on large matrices. Theoretically, we provide explicit statistical recovery guarantees of PRPCA and include classical robust PCA as a special case. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 195,333
2410.07196 | EEGUnity: Open-Source Tool in Facilitating Unified EEG Datasets Towards
Large-Scale EEG Model | The increasing number of dispersed EEG dataset publications and the advancement of large-scale Electroencephalogram (EEG) models have increased the demand for practical tools to manage diverse EEG datasets. However, the inherent complexity of EEG data, characterized by variability in content data, metadata, and data formats, poses challenges for integrating multiple datasets and conducting large-scale EEG model research. To tackle the challenges, this paper introduces EEGUnity, an open-source tool that incorporates modules of 'EEG Parser', 'Correction', 'Batch Processing', and 'Large Language Model Boost'. Leveraging the functionality of such modules, EEGUnity facilitates the efficient management of multiple EEG datasets, such as intelligent data structure inference, data cleaning, and data unification. In addition, the capabilities of EEGUnity ensure high data quality and consistency, providing a reliable foundation for large-scale EEG data research. EEGUnity is evaluated across 25 EEG datasets from different sources, offering several typical batch processing workflows. The results demonstrate the high performance and flexibility of EEGUnity in parsing and data processing. The project code is publicly available at github.com/Baizhige/EEGUnity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 496,533 |
2311.08007 | Clearer Frames, Anytime: Resolving Velocity Ambiguity in Video Frame
Interpolation | Existing video frame interpolation (VFI) methods blindly predict where each object is at a specific timestep t ("time indexing"), which struggles to predict precise object movements. Given two images of a baseball, there are infinitely many possible trajectories: accelerating or decelerating, straight or curved. This often results in blurry frames as the method averages out these possibilities. Instead of forcing the network to learn this complicated time-to-location mapping implicitly together with predicting the frames, we provide the network with an explicit hint on how far the object has traveled between start and end frames, a novel approach termed "distance indexing". This method offers a clearer learning goal for models, reducing the uncertainty tied to object speeds. We further observed that, even with this extra guidance, objects can still be blurry especially when they are equally far from both input frames (i.e., halfway in-between), due to the directional ambiguity in long-range motion. To solve this, we propose an iterative reference-based estimation strategy that breaks down a long-range prediction into several short-range steps. When integrating our plug-and-play strategies into state-of-the-art learning-based models, they exhibit markedly sharper outputs and superior perceptual quality in arbitrary time interpolations, using a uniform distance indexing map in the same format as time indexing. Additionally, distance indexing can be specified pixel-wise, which enables temporal manipulation of each object independently, offering a novel tool for video editing tasks like re-timing. The code is available at https://zzh-tech.github.io/InterpAny-Clearer/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 407,558 |
1711.10921 | Local Jet Pattern: A Robust Descriptor for Texture Classification | Methods based on local image features have recently shown promise for texture classification tasks, especially in the presence of large intra-class variation due to illumination, scale, and viewpoint changes. Inspired by the theories of image structure analysis, this paper presents a simple, efficient, yet robust descriptor, namely the local jet pattern (LJP), for texture classification. In this approach, a jet space representation of a texture image is derived from a set of derivatives of Gaussian (DtG) filter responses up to second order, the so-called local jet vectors (LJV), which also satisfy the scale-space properties. The LJP is obtained by utilizing the relationship of the center pixel with the local neighborhood information in jet space. Finally, the feature vector of a texture region is formed by concatenating the histogram of LJP for all elements of LJV. All DtG responses up to second order together preserve the intrinsic local image structure and achieve invariance to scale, rotation, and reflection. This allows us to develop a texture classification framework which is discriminative and robust. In extensive experiments on five standard texture image databases, employing the nearest subspace classifier (NSC), the proposed descriptor achieves 100%, 99.92%, 99.75%, 99.16%, and 99.65% accuracy on Outex_TC-00010 (Outex_TC10), Outex_TC-00012 (Outex_TC12), KTH-TIPS, Brodatz, and CUReT, respectively, outperforming state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,694
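As a hedged illustration of the jet-space representation described above, the following sketch computes the derivatives-of-Gaussian responses up to second order with SciPy; the LJP encoding and histogram steps themselves are omitted, and names are illustrative.

```python
import numpy as np
from scipy import ndimage

def local_jet_vector(image, sigma=1.0):
    # Derivatives-of-Gaussian (DtG) responses up to second order; each tuple
    # gives the derivative order along (rows, cols) of the Gaussian filter.
    orders = [(0, 0), (0, 1), (1, 0), (0, 2), (1, 1), (2, 0)]
    responses = [ndimage.gaussian_filter(image, sigma, order=o) for o in orders]
    return np.stack(responses, axis=-1)          # (H, W, 6): local jet vector per pixel

img = np.random.rand(64, 64)
print(local_jet_vector(img, sigma=1.5).shape)    # (64, 64, 6)
```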
2408.00297 | EmoTalk3D: High-Fidelity Free-View Synthesis of Emotional 3D Talking
Head | We present a novel approach for synthesizing 3D talking heads with controllable emotion, featuring enhanced lip synchronization and rendering quality. Despite significant progress in the field, prior methods still suffer from multi-view consistency and a lack of emotional expressiveness. To address these issues, we collect EmoTalk3D dataset with calibrated multi-view videos, emotional annotations, and per-frame 3D geometry. By training on the EmoTalk3D dataset, we propose a \textit{`Speech-to-Geometry-to-Appearance'} mapping framework that first predicts faithful 3D geometry sequence from the audio features, then the appearance of a 3D talking head represented by 4D Gaussians is synthesized from the predicted geometry. The appearance is further disentangled into canonical and dynamic Gaussians, learned from multi-view videos, and fused to render free-view talking head animation. Moreover, our model enables controllable emotion in the generated talking heads and can be rendered in wide-range views. Our method exhibits improved rendering quality and stability in lip motion generation while capturing dynamic facial details such as wrinkles and subtle expressions. Experiments demonstrate the effectiveness of our approach in generating high-fidelity and emotion-controllable 3D talking heads. The code and EmoTalk3D dataset are released at https://nju-3dv.github.io/projects/EmoTalk3D. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 477,781 |
2311.00993 | Scalable Probabilistic Forecasting in Retail with Gradient Boosted
Trees: A Practitioner's Approach | The recent M5 competition has advanced the state-of-the-art in retail forecasting. However, we notice important differences between the competition challenge and the challenges we face in a large e-commerce company. The datasets in our scenario are larger (hundreds of thousands of time series), and e-commerce can afford to have a larger assortment than brick-and-mortar retailers, leading to more intermittent data. To scale to larger dataset sizes with feasible computational effort, firstly, we investigate a two-layer hierarchy and propose a top-down approach: forecasting at an aggregated level, which has fewer series and less intermittency, and then disaggregating to obtain the decision-level forecasts. Probabilistic forecasts are generated under distributional assumptions. Secondly, direct training at the lower level with subsamples can also be an alternative way of scaling. Performance of modelling with subsets is evaluated with the main dataset. Apart from a proprietary dataset, the proposed scalable methods are evaluated using the Favorita dataset and the M5 dataset. We are able to show the differences in characteristics of the e-commerce and brick-and-mortar retail datasets. Notably, our top-down forecasting framework enters the top 50 of the original M5 competition, even with models trained at a higher level under a much simpler setting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 404,862
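A minimal sketch of the top-down idea described above: forecast an aggregated series and disaggregate with historical proportions. The naive seasonal-mean forecaster stands in for the gradient-boosted-tree model purely for illustration; names and the proportion scheme are assumptions.

```python
import numpy as np

def top_down_forecast(item_sales, horizon=7, season=7):
    # item_sales: (n_items, T) daily unit sales, typically intermittent per item.
    total = item_sales.sum(axis=0)                        # aggregated series
    # Naive seasonal-mean forecast of the aggregate (stand-in for a GBT model).
    season_means = np.array([total[i::season].mean() for i in range(season)])
    agg_forecast = np.resize(season_means, horizon)
    # Disaggregate with each item's historical share of total sales.
    shares = item_sales.sum(axis=1) / max(item_sales.sum(), 1e-9)
    return np.outer(shares, agg_forecast)                 # (n_items, horizon)

sales = np.random.poisson(0.3, size=(1000, 84))           # 1000 intermittent items
print(top_down_forecast(sales).shape)                      # (1000, 7)
```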
1809.01859 | Deep Learning-Based Decoding for Constrained Sequence Codes | Constrained sequence codes have been widely used in modern communication and data storage systems. Sequences encoded with constrained sequence codes satisfy constraints imposed by the physical channel, hence enabling efficient and reliable transmission of coded symbols. Traditional encoding and decoding of constrained sequence codes rely on table look-up, which is prone to errors that occur during transmission. In this paper, we introduce constrained sequence decoding based on deep learning. With multi-layer perceptron (MLP) networks and convolutional neural networks (CNNs), we are able to achieve low bit error rates that are close to maximum a posteriori probability (MAP) decoding as well as improve the system throughput. Moreover, implementation of capacity-achieving fixed-length codes, where the complexity is prohibitively high with table look-up decoding, becomes practical with deep learning-based decoding. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 106,911
2401.12800 | Deep Learning in Physical Layer: Review on Data Driven End-to-End
Communication Systems and their Enabling Semantic Applications | Deep learning (DL) has revolutionized wireless communication systems by introducing data-driven end-to-end (E2E) learning, where the physical layer (PHY) is transformed into DL architectures to achieve peak optimization. Leveraging DL for E2E optimization in PHY significantly enhances its adaptability and performance in complex wireless environments, meeting the demands of advanced network systems such as 5G and beyond. Furthermore, this evolution of data-driven PHY optimization has also enabled advanced semantic applications across various modalities, including text, image, audio, video, and multimodal transmissions. These applications elevate communication from bit-level to semantic-level intelligence, making it capable of discerning context and intent. Although the PHY, as a DL architecture, plays a crucial role in enabling semantic communication (SemCom) systems, comprehensive studies that integrate both E2E communication and SemCom systems remain significantly underexplored. This highlights the novelty and potential of these integrative fields, marking them as a promising research domain. Therefore, this article provides a comprehensive review of the emerging field of data-driven PHY for E2E communication systems, emphasizing their role in enabling semantic applications across various modalities. It also identifies key challenges and potential research directions, serving as a crucial guide for future advancements in DL for E2E communication and SemCom systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 423,500
2205.05202 | Deep Learning-based Channel Estimation for Wideband Hybrid MmWave
Massive MIMO | Hybrid analog-digital (HAD) architecture is widely adopted in practical millimeter wave (mmWave) massive multiple-input multiple-output (MIMO) systems to reduce hardware cost and energy consumption. However, channel estimation in the context of HAD is challenging due to the limited number of radio frequency (RF) chains at transceivers. Although various compressive sensing (CS) algorithms have been developed to solve this problem by exploiting inherent channel sparsity and sparsity structures, practical effects, such as power leakage and beam squint, can still make the real channel features deviate from the assumed models and result in performance degradation. Also, the high complexity of CS algorithms caused by a large number of iterations hinders their applications in practice. To tackle these issues, we develop a deep learning (DL)-based channel estimation approach where the sparse Bayesian learning (SBL) algorithm is unfolded into a deep neural network (DNN). In each SBL layer, Gaussian variance parameters of the sparse angular domain channel are updated by a tailored DNN, which is able to effectively capture complicated channel sparsity structures in various domains. Besides, the measurement matrix is jointly optimized for performance improvement. Then, the proposed approach is extended to the multi-block case where channel correlation in time is further exploited to adaptively predict the measurement matrix and facilitate the update of Gaussian variance parameters. Based on simulation results, the proposed approaches significantly outperform existing approaches while requiring lower complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 295,868
2310.00372 | Deep Active Learning with Noisy Oracle in Object Detection | Obtaining annotations for complex computer vision tasks such as object detection is an expensive and time-intense endeavor involving a large number of human workers or expert opinions. Reducing the amount of annotations required while maintaining algorithm performance is, therefore, desirable for machine learning practitioners and has been successfully achieved by active learning algorithms. However, it is not merely the amount of annotations which influences model performance but also the annotation quality. In practice, the oracles that are queried for new annotations frequently contain significant amounts of noise. Therefore, cleansing procedures are oftentimes necessary to review and correct given labels. This process is subject to the same budget as the initial annotation itself since it requires human workers or even domain experts. Here, we propose a composite active learning framework including a label review module for deep object detection. We show that utilizing part of the annotation budget to correct the noisy annotations partially in the active dataset leads to early improvements in model performance, especially when coupled with uncertainty-based query strategies. The precision of the label error proposals has a significant influence on the measured effect of the label review. In our experiments we achieve improvements of up to 4.5 mAP points of object detection performance by incorporating label reviews at equal annotation budget. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 395,952 |
1510.05436 | Color graph based wavelet transform with perceptual information | In this paper, we propose a numerical strategy to define a multiscale analysis for color and multicomponent images based on the representation of data on a graph. Our approach consists in computing the graph of an image using the psychovisual information and analysing it by using the spectral graph wavelet transform. We suggest introducing the color dimension into the computation of the weights of the graph and using the geodesic distance as a means of distance measurement. We thus have defined a wavelet transform based on a graph with perceptual information by using the CIELab color distance. This new representation is illustrated with denoising and inpainting applications. Overall, by introducing psychovisual information in the graph computation for the graph wavelet transform we obtain very promising results. The results in image restoration therefore highlight the value of appropriately using color information. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,022
1810.08691 | Audio-Based Activities of Daily Living (ADL) Recognition with
Large-Scale Acoustic Embeddings from Online Videos | Over the years, activity sensing and recognition has been shown to play a key enabling role in a wide range of applications, from sustainability and human-computer interaction to health care. While many recognition tasks have traditionally employed inertial sensors, acoustic-based methods offer the benefit of capturing rich contextual information, which can be useful when discriminating complex activities. Given the emergence of deep learning techniques and leveraging new, large-scale multimedia datasets, this paper revisits the opportunity of training audio-based classifiers without the onerous and time-consuming task of annotating audio data. We propose a framework for audio-based activity recognition that makes use of millions of embedding features from public online video sound clips. Based on the combination of oversampling and deep learning approaches, our framework does not require further feature processing or outlier filtering as in prior work. We evaluated our approach in the context of Activities of Daily Living (ADL) by recognizing 15 everyday activities with 14 participants in their own homes, achieving 64.2% and 83.6% averaged within-subject accuracy in terms of top-1 and top-3 classification respectively. Individual class performance was also examined in the paper to further study the co-occurrence characteristics of the activities and the robustness of the framework. | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 110,884
0812.4642 | Error-Trellis State Complexity of LDPC Convolutional Codes Based on
Circulant Matrices | Let H(D) be the parity-check matrix of an LDPC convolutional code corresponding to the parity-check matrix H of a QC code obtained using the method of Tanner et al. We see that the entries in H(D) are all monomials and several rows (columns) have monomial factors. Let us cyclically shift the rows of H. Then the parity-check matrix H'(D) corresponding to the modified matrix H' defines another convolutional code. However, its free distance is lower-bounded by the minimum distance of the original QC code. Also, each row (column) of H'(D) has a factor different from the one in H(D). We show that the state-space complexity of the error-trellis associated with H'(D) can be significantly reduced by controlling the row shifts applied to H with the error-correction capability being preserved. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,853 |
2412.19455 | NijiGAN: Transform What You See into Anime with Contrastive
Semi-Supervised Learning and Neural Ordinary Differential Equations | Generative AI has transformed the animation industry. Several models have been developed for image-to-image translation, particularly focusing on converting real-world images into anime through unpaired translation. Scenimefy, a notable approach utilizing contrastive learning, achieves high-fidelity anime scene translation by addressing limited paired data through semi-supervised training. However, it faces limitations due to its reliance on paired data from a fine-tuned StyleGAN in the anime domain, often producing low-quality datasets. Additionally, Scenimefy's high-parameter architecture presents opportunities for computational optimization. This research introduces NijiGAN, a novel model incorporating Neural Ordinary Differential Equations (NeuralODEs), which offer unique advantages in continuous transformation modeling compared to traditional residual networks. NijiGAN successfully transforms real-world scenes into high-fidelity anime visuals using half of Scenimefy's parameters. It employs pseudo-paired data generated through Scenimefy for supervised training, eliminating dependence on low-quality paired data and improving the training process. Our comprehensive evaluation includes ablation studies, qualitative, and quantitative analysis comparing NijiGAN to similar models. The testing results demonstrate that NijiGAN produces higher-quality images compared to AnimeGAN, as evidenced by a Mean Opinion Score (MOS) of 2.192, surpassing AnimeGAN's MOS of 2.160. Furthermore, our model achieved a Frechet Inception Distance (FID) score of 58.71, outperforming Scenimefy's FID score of 60.32. These results demonstrate that NijiGAN achieves competitive performance against existing state-of-the-art models, especially the baseline Scenimefy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 520,848
2107.05344 | Post Triangular Rewiring Method for Shorter RRT Robot Path Planning | This paper proposes the 'Post Triangular Rewiring' method, which minimizes the sacrifice in planning time and overcomes the optimality limit of sampling-based algorithms such as the Rapidly-exploring Random Tree (RRT) algorithm. Through the triangular inequality principle, the proposed 'Post Triangular Rewiring' method creates a path closer to the optimal one than the RRT algorithm yields before its application. Experiments were conducted to verify the performance of the proposed method. When the method proposed in this paper is applied to the RRT algorithm, the optimality efficiency increases relative to the planning time. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 245,749
2502.00563 | Complex Wavelet Mutual Information Loss: A Multi-Scale Loss Function for
Semantic Segmentation | Recent advancements in deep neural networks have significantly enhanced the performance of semantic segmentation. However, class imbalance and instance imbalance remain persistent challenges, where smaller instances and thin boundaries are often overshadowed by larger structures. To address the multiscale nature of segmented objects, various models have incorporated mechanisms such as spatial attention and feature pyramid networks. Despite these advancements, most loss functions are still primarily pixel-wise, while regional and boundary-focused loss functions often incur high computational costs or are restricted to small-scale regions. To address this limitation, we propose complex wavelet mutual information (CWMI) loss, a novel loss function that leverages mutual information from subband images decomposed by a complex steerable pyramid. The complex steerable pyramid captures features across multiple orientations and preserves structural similarity across scales. Meanwhile, mutual information is well-suited for capturing high-dimensional directional features and exhibits greater noise robustness. Extensive experiments on diverse segmentation datasets demonstrate that CWMI loss achieves significant improvements in both pixel-wise accuracy and topological metrics compared to state-of-the-art methods, while introducing minimal computational overhead. The code is available at https://anonymous.4open.science/r/CWMI-83B7/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 529,438 |
1605.08415 | Structure-based control of complex networks with nonlinear dynamics | What can we learn about controlling a system solely from its underlying network structure? Here we adapt a recently developed framework for control of networks governed by a broad class of nonlinear dynamics that includes the major dynamic models of biological, technological, and social processes. This feedback-based framework provides realizable node overrides that steer a system towards any of its natural long term dynamic behaviors, regardless of the specific functional forms and system parameters. We use this framework on several real networks, identify the topological characteristics that underlie the predicted node overrides, and compare its predictions to those of structural controllability in control theory. Finally, we demonstrate this framework's applicability in dynamic models of gene regulatory networks and identify nodes whose override is necessary for control in the general case, but not in specific model instances. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 56,435 |
2212.09280 | Superimposed Channel Estimation in OTFS Modulation Using Compressive
Sensing | The orthogonal time frequency space (OTFS) technique is a two-dimensional modulation method that multiplexes information symbols in the delay-Doppler (DD) domain. OTFS combats the high Doppler shifts encountered in high-speed wireless communication. However, conventional channel estimation in OTFS suffers from high pilot overhead because guard symbols occupy a significant part of the DD domain grids. In this paper, a superimposed channel estimation scheme is proposed which can completely estimate the channel parameters without concern for pilot overhead or performance degradation. As the channel state information (CSI) in the DD domain is sparse, a sparse recovery algorithm, orthogonal matching pursuit (OMP), is used. Besides, our proposed method does not suffer from a high peak-to-average power ratio (PAPR). To detect information symbols, a message passing (MP) detector, which exploits the sparsity of the DD channel representation, is employed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 337,060
2112.00592 | BeamSync: Over-The-Air Carrier Synchronization in Distributed
RadioWeaves | In a distributed multi-antenna system, multiple geographically separated transmit nodes communicate simultaneously with a receive node. Synchronization of these nodes is essential for achieving good performance at the receiver. RadioWeaves is a new paradigm of cell-free massive MIMO array deployment using distributed multi-antenna panels in indoor environments. In this paper, we study the carrier frequency synchronization problem in distributed RadioWeave panels. We propose a novel, over-the-air synchronization protocol, which we call BeamSync, to synchronize all the different multi-antenna transmit panels. We also show that beamforming the synchronization signal in the dominant direction of the channel between the panels is optimal and that the synchronization performance is significantly better than with traditional beamforming techniques. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 269,189
2309.12351 | Establishing trust in automated reasoning | Since its beginnings in the 1940s, automated reasoning by computers has become a tool of ever growing importance in scientific research. So far, the rules underlying automated reasoning have mainly been formulated by humans, in the form of program source code. Rules derived from large amounts of data, via machine learning techniques, are a complementary approach currently under intense development. The question of why we should trust these systems, and the results obtained with their help, has been discussed by philosophers of science but has so far received little attention by practitioners. The present work focuses on independent reviewing, an important source of trust in science, and identifies the characteristics of automated reasoning systems that affect their reviewability. It also discusses possible steps towards increasing reviewability and trustworthiness via a combination of technical and social measures. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 393,769 |
2011.09980 | Geography-Aware Self-Supervised Learning | Contrastive learning methods have significantly narrowed the gap between supervised and unsupervised learning on computer vision tasks. In this paper, we explore their application to geo-located datasets, e.g. remote sensing, where unlabeled data is often abundant but labeled data is scarce. We first show that due to their different characteristics, a non-trivial gap persists between contrastive and supervised learning on standard benchmarks. To close the gap, we propose novel training methods that exploit the spatio-temporal structure of remote sensing data. We leverage spatially aligned images over time to construct temporal positive pairs in contrastive learning and geo-location to design pre-text tasks. Our experiments show that our proposed method closes the gap between contrastive and supervised learning on image classification, object detection and semantic segmentation for remote sensing. Moreover, we demonstrate that the proposed method can also be applied to geo-tagged ImageNet images, improving downstream performance on various tasks. Project Webpage can be found at this link geography-aware-ssl.github.io. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 207,382 |
2409.06163 | MCDGLN: Masked Connection-based Dynamic Graph Learning Network for
Autism Spectrum Disorder | Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder characterized by complex physiological processes. Previous research has predominantly focused on static cerebral interactions, often neglecting the brain's dynamic nature and the challenges posed by network noise. To address these gaps, we introduce the Masked Connection-based Dynamic Graph Learning Network (MCDGLN). Our approach first segments BOLD signals using sliding temporal windows to capture dynamic brain characteristics. We then employ a specialized weighted edge aggregation (WEA) module, which uses cross convolution with a channel-wise element-wise convolutional kernel, to integrate dynamic functional connectivity and isolate task-relevant connections. This is followed by topological feature extraction via a hierarchical graph convolutional network (HGCN), with key attributes highlighted by a self-attention module. Crucially, we refine static functional connections using a customized task-specific mask, reducing noise and pruning irrelevant links. The attention-based connection encoder (ACE) then enhances critical connections and compresses static features. The combined features are subsequently used for classification. Applied to the Autism Brain Imaging Data Exchange I (ABIDE I) dataset, our framework achieves a 73.3% classification accuracy between ASD and Typical Control (TC) groups among 1,035 subjects. The pivotal roles of WEA and ACE in refining connectivity and enhancing classification accuracy underscore their importance in capturing ASD-specific features, offering new insights into the disorder. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 487,019
2004.11896 | The number of almost perfect nonlinear functions grows exponentially | Almost perfect nonlinear (APN) functions play an important role in the design of block ciphers as they offer the strongest resistance against differential cryptanalysis. Despite more than 25 years of research, only a limited number of APN functions are known. In this paper, we show that a recent construction by Taniguchi provides at least $\frac{\varphi(m)}{2}\left\lceil \frac{2^m+1}{3m} \right\rceil$ inequivalent APN functions on the finite field with ${2^{2m}}$ elements, where $\varphi$ denotes Euler's totient function. This is a great improvement of previous results: for even $m$, the best known lower bound has been $\frac{\varphi(m)}{2}\left(\lfloor \frac{m}{4}\rfloor +1\right)$, for odd $m$, there has been no such lower bound at all. Moreover, we determine the automorphism group of Taniguchi's APN functions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 174,060 |
1805.07412 | Wasserstein Measure Coresets | The proliferation of large data sets and Bayesian inference techniques motivates demand for better data sparsification. Coresets provide a principled way of summarizing a large dataset via a smaller one that is guaranteed to match the performance of the full data set on specific problems. Classical coresets, however, neglect the underlying data distribution, which is often continuous. We address this oversight by introducing Wasserstein measure coresets, an extension of coresets which by definition takes into account generalization. Our formulation of the problem, which essentially consists in minimizing the Wasserstein distance, is solvable via stochastic gradient descent. This yields an algorithm which simply requires sample access to the data distribution and is able to handle large data streams in an online manner. We validate our construction for inference and clustering. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 97,807 |
2305.15215 | Shadow Cones: A Generalized Framework for Partial Order Embeddings | Hyperbolic space has proven to be well-suited for capturing hierarchical relations in data, such as trees and directed acyclic graphs. Prior work introduced the concept of entailment cones, which uses partial orders defined by nested cones in the Poincar\'e ball to model hierarchies. Here, we introduce the ``shadow cones" framework, a physics-inspired entailment cone construction. Specifically, we model partial orders as subset relations between shadows formed by a light source and opaque objects in hyperbolic space. The shadow cones framework generalizes entailment cones to a broad class of formulations and hyperbolic space models beyond the Poincar\'e ball. This results in clear advantages over existing constructions: for example, shadow cones possess better optimization properties over constructions limited to the Poincar\'e ball. Our experiments on datasets of various sizes and hierarchical structures show that shadow cones consistently and significantly outperform existing entailment cone constructions. These results indicate that shadow cones are an effective way to model partial orders in hyperbolic space, offering physically intuitive and novel insights about the nature of such structures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 367,507 |
2306.09519 | Relation-Aware Network with Attention-Based Loss for Few-Shot Knowledge
Graph Completion | Few-shot knowledge graph completion (FKGC) task aims to predict unseen facts of a relation with few-shot reference entity pairs. Current approaches randomly select one negative sample for each reference entity pair to minimize a margin-based ranking loss, which easily leads to a zero-loss problem if the negative sample is far away from the positive sample and then out of the margin. Moreover, the entity should have a different representation under a different context. To tackle these issues, we propose a novel Relation-Aware Network with Attention-Based Loss (RANA) framework. Specifically, to better utilize the plentiful negative samples and alleviate the zero-loss issue, we strategically select relevant negative samples and design an attention-based loss function to further differentiate the importance of each negative sample. The intuition is that negative samples more similar to positive samples will contribute more to the model. Further, we design a dynamic relation-aware entity encoder for learning a context-dependent entity representation. Experiments demonstrate that RANA outperforms the state-of-the-art models on two benchmark datasets. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 373,857 |
2309.01961 | NICE: CVPR 2023 Challenge on Zero-shot Image Captioning | In this report, we introduce the NICE (New frontiers for zero-shot Image Captioning Evaluation) project and share the results and outcomes of the 2023 challenge. This project is designed to challenge the computer vision community to develop robust image captioning models that advance the state-of-the-art both in terms of accuracy and fairness. Through the challenge, the image captioning models were tested using a new evaluation dataset that includes a large variety of visual concepts from many domains. There was no specific training data provided for the challenge, and therefore the challenge entries were required to adapt to new types of image descriptions that had not been seen during training. This report includes information on the newly proposed NICE dataset, evaluation methods, challenge results, and technical details of the top-ranking entries. We expect that the outcomes of the challenge will contribute to the improvement of AI models on various vision-language tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 389,871
2207.03965 | Power Response and Modelling Aspects of Power Electronic Loads in Case
of Voltage Drops | In this paper, the power responses of power electronic loads in case of voltage drops are measured and their dynamics are analysed. Based on this, dynamic simulation models are derived which can be used for voltage stability investigations. For this, four loads with different power factor techniques are considered. In addition, the impact of the grid impedance and input filter on the power response is measured in the laboratory. Based on the measurements, the simulation models are described. It is also outlined under which aspects the components of the loads are normally dimensioned if no detailed information is available. A comparison with the measurements demonstrates that the simulation models capture the main dynamics. At the end of the paper, the load models are compared to a constant power load in a short-term voltage stability use case. The results indicate that the power electronic loads have a more positive influence on short-term voltage stability in case of voltage drops. Overall, the contributions of the paper are the identification of the basic power dynamics of power electronic loads for different voltage drops and a subsequent derivation of suitable simulation load models for voltage stability investigations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 307,034
0906.1339 | Error Exponents for Broadcast Channels with Degraded Message Sets | We consider a broadcast channel with a degraded message set, in which a single transmitter sends a common message to two receivers and a private message to one of the receivers only. The main goal of this work is to find new lower bounds on the error exponents of the strong user, the one that should decode both messages, and of the weak user, that should decode only the common message. Unlike previous works, where suboptimal decoders were used, the exponents we derive in this work pertain to optimal decoding and depend on both rates. We take two different approaches. The first approach is based, in part, on variations of Gallager-type bounding techniques that were presented in a much earlier work on error exponents for erasure/list decoding. The resulting lower bounds are quite simple to understand and to compute. The second approach is based on a technique that is rooted in statistical physics, and it is exponentially tight from the initial step and onward. This technique is based on analyzing the statistics of certain enumerators. Numerical results show that the bounds obtained by this technique are tighter than those obtained by the first approach and previous results. The derivation, however, is more complex than the first approach and the retrieved exponents are harder to compute. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,841
1704.00148 | Co-location Epidemic Tracking on London Public Transports Using Low
Power Mobile Magnetometer | Public transport provides an ideal means for contagious disease transmission. This paper introduces a novel idea to detect the co-location of people in such environments using just the ubiquitous geomagnetic field sensor on a smartphone. Essentially, given that all passengers must share the same journey between at least two consecutive stations, we have a long window to match the user trajectory. Our idea was assessed over a painstaking survey of over 150 kilometres of travelling distance, covering different parts of London, using the overground trains, the underground tubes and the buses. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 71,035