Dataset schema:
  id                  string, length 9–16
  title               string, length 4–278
  abstract            string, length 3–4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool, 2 classes each (per-category label flags)
  __index_level_0__   int64, range 0–541k
1910.06899
Identifying Epigenetic Signature of Breast Cancer with Machine Learning
The research reported in this paper identifies the epigenetic biomarker (methylation beta pattern) of breast cancer. Many cancers are triggered by abnormal gene expression levels caused by aberrant methylation of CpG sites in the DNA. In order to develop early diagnostics of cancer-causing methylations and to develop a treatment, it is necessary to identify a few dozen key cancer-related CpG methylation sites out of the millions of locations in the DNA. This research used the public TCGA dataset to train a TensorFlow machine learning model to classify breast cancer versus non-breast-cancer tissue samples, based on over 300,000 methylation beta values in each sample. L1 regularization was applied to identify the CpG methylation sites most important for accurate classification. It was hypothesized that CpG sites with the highest learned model weights correspond to the DNA locations most relevant to breast cancer. A reduced model trained on methylation betas of just the 25 CpG sites having the highest weights in the full model (trained on methylation betas at over 300,000 CpG sites) achieved over 94% accuracy on evaluation data, confirming that the identified 25 CpG sites are indeed a biomarker of breast cancer.
labels: cs.LG
__index_level_0__: 149,475
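A minimal sketch of the selection recipe this abstract describes: fit an L1-regularized linear model, then keep the features with the largest absolute learned weights. The synthetic data, the ISTA solver, and all parameter values below are illustrative assumptions, not the paper's TensorFlow pipeline:

```python
import numpy as np

# Toy stand-in for the methylation matrix: n samples, p "CpG sites",
# of which only the first k actually carry signal.
rng = np.random.default_rng(0)
n, p, k = 200, 300, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:k] = 3.0
y = X @ beta + 0.1 * rng.normal(size=n)

# ISTA: proximal gradient descent on the L1-regularized least-squares loss.
lam = 0.1
step = n / np.linalg.norm(X, 2) ** 2      # 1 / Lipschitz constant of the gradient
w = np.zeros(p)
for _ in range(500):
    w -= step * X.T @ (X @ w - y) / n                         # gradient step
    w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # soft-threshold

# Keep the k features with the largest |weight|, mirroring the abstract's
# "25 CpG sites with the highest weights" step.
top = np.argsort(-np.abs(w))[:k]
print(sorted(top.tolist()))               # recovers the planted informative features
```

The soft-thresholding step is what drives most weights exactly to zero, so ranking by |weight| doubles as feature selection.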
2303.18211
A Scale-Invariant Sorting Criterion to Find a Causal Order in Additive Noise Models
Additive Noise Models (ANMs) are a common model class for causal discovery from observational data and are often used to generate synthetic data for causal discovery benchmarking. Specifying an ANM requires choosing all parameters, including those not fixed by explicit assumptions. Reisach et al. (2021) show that sorting variables by increasing variance often yields an ordering close to a causal order and introduce var-sortability to quantify this alignment. Since increasing variances may be unrealistic and are scale-dependent, ANM data are often standardized in benchmarks. We show that synthetic ANM data are characterized by another pattern that is scale-invariant: the explainable fraction of a variable's variance, as captured by the coefficient of determination $R^2$, tends to increase along the causal order. The result is high $R^2$-sortability, meaning that sorting the variables by increasing $R^2$ yields an ordering close to a causal order. We propose an efficient baseline algorithm termed $R^2$-SortnRegress that exploits high $R^2$-sortability and that can match and exceed the performance of established causal discovery algorithms. We show analytically that sufficiently high edge weights lead to a relative decrease of the noise contributions along causal chains, resulting in increasingly deterministic relationships and high $R^2$. We characterize $R^2$-sortability for different simulation parameters and find high values in common settings. Our findings reveal high $R^2$-sortability as an assumption about the data generating process relevant to causal discovery and implicit in many ANM sampling schemes. It should be made explicit, as its prevalence in real-world data is unknown. For causal discovery benchmarking, we implement $R^2$-sortability, the $R^2$-SortnRegress algorithm, and ANM simulation procedures in our library CausalDisco at https://causaldisco.github.io/CausalDisco/.
labels: cs.LG
__index_level_0__: 355,505
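The R²-sortability pattern is easy to probe on a toy linear ANM. The sketch below (assumed details: a 3-node chain with edge weights above 1, plain OLS via numpy rather than the paper's CausalDisco implementation) computes each variable's R² against all the others; the root gets the lowest score, and rescaling columns leaves the scores unchanged, unlike variance-based sorting:

```python
import numpy as np

def r2_scores(X):
    """R^2 of regressing each column on all the others (OLS); sorting by
    these scores is the R^2-sortability ordering described above."""
    scores = []
    for j in range(X.shape[1]):
        y = X[:, j]
        Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)
        scores.append(1.0 - (y - Z @ coef).var() / y.var())
    return np.array(scores)

rng = np.random.default_rng(1)
n = 5000
x0 = rng.normal(size=n)                  # root of the chain x0 -> x1 -> x2
x1 = 2.0 * x0 + rng.normal(size=n)
x2 = 2.0 * x1 + rng.normal(size=n)
X = np.column_stack([x2, x0, x1])        # columns deliberately shuffled

scores = r2_scores(X)
print(np.argsort(scores))                # the root (column 1) comes first
# Scale invariance: rescaling columns does not change R^2, hence not the order.
print(np.allclose(scores, r2_scores(X * np.array([10.0, 0.1, 3.0]))))
```

Note the ordering of non-root nodes can flip in small graphs (a middle node is predictable from both its parent and its child); the paper's claim is a statistical tendency, not a per-instance guarantee.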
2301.12326
Team Resilience under Shock: An Empirical Analysis of GitHub Repositories during Early COVID-19 Pandemic
While many organizations have shifted to working remotely during the COVID-19 pandemic, how the remote workforce and the remote teams are influenced by and would respond to this and future shocks remain largely unknown. Software developers have relied on remote collaborations long before the pandemic, working in virtual teams (GitHub repositories). The dynamics of these repositories through the pandemic provide a unique opportunity to understand how remote teams react under shock. This work presents a systematic analysis. We measure the overall effect of the early pandemic on public GitHub repositories by comparing their sizes and productivity with the counterfactual outcomes forecasted as if there were no pandemic. We find that the productivity level and the number of active members of these teams vary significantly during different periods of the pandemic. We then conduct a finer-grained investigation and study the heterogeneous effects of the shock on individual teams. We find that the resilience of a team is highly correlated to certain properties of the team before the pandemic. Through a bootstrapped regression analysis, we reveal which types of teams are robust or fragile to the shock.
labels: cs.LG, cs.CY
__index_level_0__: 342,490
2405.14458
YOLOv10: Real-Time End-to-End Object Detection
Over the past years, YOLOs have emerged as the predominant paradigm in the field of real-time object detection owing to their effective balance between computational cost and detection performance. Researchers have explored the architectural designs, optimization objectives, data augmentation strategies, and more for YOLOs, achieving notable progress. However, the reliance on non-maximum suppression (NMS) for post-processing hampers the end-to-end deployment of YOLOs and adversely impacts inference latency. In addition, the design of various components in YOLOs has not been comprehensively and thoroughly examined, resulting in noticeable computational redundancy and limiting the model's capability. This yields suboptimal efficiency, along with considerable potential for performance improvement. In this work, we aim to further advance the performance-efficiency boundary of YOLOs from both the post-processing and the model architecture perspectives. To this end, we first present consistent dual assignments for NMS-free training of YOLOs, which brings competitive performance and low inference latency simultaneously. Moreover, we introduce a holistic efficiency-accuracy driven model design strategy for YOLOs. We comprehensively optimize various components of YOLOs from both efficiency and accuracy perspectives, which greatly reduces the computational overhead and enhances capability. The outcome of our effort is a new generation of the YOLO series for real-time end-to-end object detection, dubbed YOLOv10. Extensive experiments show that YOLOv10 achieves state-of-the-art performance and efficiency across various model scales. For example, our YOLOv10-S is 1.8$\times$ faster than RT-DETR-R18 under similar AP on COCO, while using a 2.8$\times$ smaller number of parameters and FLOPs. Compared with YOLOv9-C, YOLOv10-B has 46\% less latency and 25\% fewer parameters for the same performance.
labels: cs.CV
__index_level_0__: 456,439
1911.03118
Not Enough Data? Deep Learning to the Rescue!
Based on recent advances in natural language modeling and those in text generation capabilities, we propose a novel data augmentation method for text classification tasks. We use a powerful pre-trained neural network model to artificially synthesize new labeled data for supervised learning. We mainly focus on cases with scarce labeled data. Our method, referred to as language-model-based data augmentation (LAMBADA), involves fine-tuning a state-of-the-art language generator to a specific task through an initial training phase on the existing (usually small) labeled data. Using the fine-tuned model and given a class label, new sentences for the class are generated. Our process then filters these new sentences by using a classifier trained on the original data. In a series of experiments, we show that LAMBADA improves classifiers' performance on a variety of datasets. Moreover, LAMBADA significantly improves upon the state-of-the-art techniques for data augmentation, specifically those applicable to text classification tasks with little data.
labels: cs.LG, cs.CL
__index_level_0__: 152,563
1811.00232
Textbook Question Answering with Multi-modal Context Graph Understanding and Self-supervised Open-set Comprehension
In this work, we introduce a novel algorithm for solving the textbook question answering (TQA) task, which poses more realistic QA problems than other recent tasks. We mainly focus on two related issues identified through analysis of the TQA dataset. First, solving TQA problems requires comprehending multi-modal contexts in complicated input data. To tackle the issue of extracting knowledge features from long text lessons and merging them with visual features, we establish a context graph from texts and images, and propose a new module, f-GCN, based on graph convolutional networks (GCN). Second, in the TQA dataset, scientific terms are not spread over the chapters and subjects are split. To overcome this so-called "out-of-domain" issue, before learning QA problems, we introduce a novel self-supervised open-set learning process without any annotations. The experimental results show that our model significantly outperforms prior state-of-the-art methods. Moreover, ablation studies validate that both incorporating f-GCN for extracting knowledge from multi-modal contexts and our newly proposed self-supervised learning process are effective for TQA problems.
labels: cs.CL
__index_level_0__: 112,045
2006.04655
Random Hypervolume Scalarizations for Provable Multi-Objective Black Box Optimization
Single-objective black box optimization (also known as zeroth-order optimization) is the process of minimizing a scalar objective $f(x)$, given evaluations at adaptively chosen inputs $x$. In this paper, we consider multi-objective optimization, where $f(x)$ outputs a vector of possibly competing objectives and the goal is to converge to the Pareto frontier. Quantitatively, we wish to maximize the standard hypervolume indicator metric, which measures the dominated hypervolume of the entire set of chosen inputs. In this paper, we introduce a novel scalarization function, which we term the hypervolume scalarization, and show that drawing random scalarizations from an appropriately chosen distribution can be used to efficiently approximate the hypervolume indicator metric. We utilize this connection to show that Bayesian optimization with our scalarization via common acquisition functions, such as Thompson Sampling or Upper Confidence Bound, provably converges to the whole Pareto frontier by deriving tight hypervolume regret bounds on the order of $\widetilde{O}(\sqrt{T})$. Furthermore, we highlight the general utility of our scalarization framework by showing that any provably convergent single-objective optimization process can be effortlessly converted to a multi-objective optimization process with provable convergence guarantees.
labels: cs.LG
__index_level_0__: 180,762
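The random-scalarization identity this abstract relies on can be checked numerically in two dimensions. The sketch below assumes the hypervolume scalarization $s_\lambda(y) = \min_i (y_i/\lambda_i)^k$ with $\lambda$ drawn uniformly from the positive unit sphere and normalizing constant $\pi/4$ for $k=2$ (reference point at the origin); it is an illustrative Monte Carlo check, not the paper's Bayesian-optimization code:

```python
import numpy as np

def hv_scalarization_estimate(Y, n_samples=200_000, seed=0):
    """Monte Carlo estimate of the 2-D hypervolume (reference point 0) of a
    point set Y via random hypervolume scalarizations: HV = (pi/4) *
    E_lambda[ max_{y in Y} min_i (y_i / lambda_i)^2 ]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0, np.pi / 2, n_samples)
    lam = np.stack([np.cos(theta), np.sin(theta)], axis=1)    # (n, 2) directions
    # s_lambda(y) for every (sample, point): min over objectives of (y_i/lam_i)^2
    ratios = np.clip(Y[None, :, :] / lam[:, None, :], 0.0, None)
    s = np.min(ratios, axis=2) ** 2
    return (np.pi / 4) * np.max(s, axis=1).mean()             # max over the set

Y = np.array([[1.0, 0.5], [0.5, 1.0]])
print(hv_scalarization_estimate(Y))   # true hypervolume of this set is 0.75
```

The estimator is exact in expectation: in polar coordinates the dominated area is $\int_0^{\pi/2} R(\theta)^2/2\, d\theta$, where $R(\theta)$ is exactly the max-min ratio computed above.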
2109.05140
Refocusing on Relevance: Personalization in NLG
Many NLG tasks such as summarization, dialogue response, or open domain question answering focus primarily on a source text in order to generate a target response. This standard approach falls short, however, when a user's intent or context of work is not easily recoverable based solely on that source text -- a scenario that we argue is more of the rule than the exception. In this work, we argue that NLG systems in general should place a much higher level of emphasis on making use of additional context, and suggest that relevance (as used in Information Retrieval) be thought of as a crucial tool for designing user-oriented text-generating tasks. We further discuss possible harms and hazards around such personalization, and argue that value-sensitive design represents a crucial path forward through these challenges.
labels: cs.HC, cs.CL, cs.CY
__index_level_0__: 254,673
2409.17054
Using LLM for Real-Time Transcription and Summarization of Doctor-Patient Interactions into ePuskesmas in Indonesia
One of the key issues contributing to inefficiency in Puskesmas is the time-consuming nature of doctor-patient interactions. Doctors need to conduct thorough consultations, which include diagnosing the patient's condition, providing treatment advice, and transcribing detailed notes into medical records. In regions with diverse linguistic backgrounds, doctors often have to ask clarifying questions, further prolonging the process. While diagnosing is essential, transcription and summarization can often be automated using AI to improve time efficiency and help doctors enhance care quality and enable early diagnosis and intervention. This paper proposes a solution using a localized large language model (LLM) to transcribe, translate, and summarize doctor-patient conversations. We utilize the Whisper model for transcription and GPT-3 to summarize the transcripts into the ePuskesmas medical records format. This system is implemented as an add-on to an existing web browser extension, allowing doctors to fill out patient forms while talking. By leveraging this solution for real-time transcription, translation, and summarization, doctors can improve the turnaround time for patient care while enhancing the quality of records, which become more detailed and insightful for future visits. This innovation addresses challenges like overcrowded facilities and the administrative burden on healthcare providers in Indonesia. We believe this solution will help doctors save time, provide better care, and produce more accurate medical records, representing a significant step toward modernizing healthcare and ensuring patients receive timely, high-quality care, even in resource-constrained settings.
labels: cs.SD, cs.AI, cs.CL
__index_level_0__: 491,630
1702.03275
Batch Renormalization: Towards Reducing Minibatch Dependence in Batch-Normalized Models
Batch Normalization is quite effective at accelerating and improving the training of deep models. However, its effectiveness diminishes when the training minibatches are small, or do not consist of independent samples. We hypothesize that this is due to the dependence of model layer inputs on all the examples in the minibatch, and different activations being produced between training and inference. We propose Batch Renormalization, a simple and effective extension to ensure that the training and inference models generate the same outputs that depend on individual examples rather than the entire minibatch. Models trained with Batch Renormalization perform substantially better than batchnorm when training with small or non-i.i.d. minibatches. At the same time, Batch Renormalization retains the benefits of batchnorm such as insensitivity to initialization and training efficiency.
labels: cs.LG
__index_level_0__: 68,104
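The correction Batch Renormalization applies on top of batchnorm admits a compact forward-pass sketch. This is an illustrative numpy version; the paper additionally treats r and d as constants during backpropagation, uses sqrt(var + eps) for the batch standard deviation, and schedules the clipping bounds:

```python
import numpy as np

def batch_renorm(x, mu, sigma, eps=1e-5, r_max=3.0, d_max=5.0):
    """Batch Renormalization forward pass (sketch): normalize with minibatch
    statistics (mu_b, sigma_b), then correct toward the moving averages
    (mu, sigma) through the clipped factors r and d."""
    mu_b = x.mean(axis=0)
    sigma_b = x.std(axis=0) + eps
    r = np.clip(sigma_b / sigma, 1.0 / r_max, r_max)   # scale correction
    d = np.clip((mu_b - mu) / sigma, -d_max, d_max)    # shift correction
    return (x - mu_b) / sigma_b * r + d

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=2.0, size=(32, 4))
# With unclipped r and d, the output equals normalization by the moving
# averages alone, so training and inference compute the same function
# (here mu = 0, sigma = 1, so the output is just x):
y = batch_renorm(x, np.zeros(4), np.ones(4), r_max=1e9, d_max=1e9)
print(np.allclose(y, x))
```

Setting r_max = 1 and d_max = 0 recovers plain batchnorm, which is how the paper ramps the method in during early training.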
2104.13254
Proceedings - AI/ML for Cybersecurity: Challenges, Solutions, and Novel Ideas at SIAM Data Mining 2021
Malicious cyber activity is ubiquitous and its harmful effects have dramatic and often irreversible impacts on society. Given the shortage of cybersecurity professionals, the ever-evolving adversary, the massive amounts of data which could contain evidence of an attack, and the speed at which defensive actions must be taken, innovations which enable autonomy in cybersecurity must continue to expand, in order to move away from a reactive defense posture and towards a more proactive one. The challenges in this space are quite different from those associated with applying AI in other domains such as computer vision. The environment suffers from an incredibly high degree of uncertainty, stemming from the intractability of ingesting all the available data, as well as the possibility that malicious actors are manipulating the data. Another unique challenge in this space is that the dynamism of the adversary causes the indicators of compromise to change frequently and without warning. In spite of these challenges, machine learning has been applied to this domain and has achieved some success in the realm of detection. While this aspect of the problem is far from solved, a growing part of the commercial sector is providing ML-enhanced capabilities as a service. Many of these entities also provide platforms which facilitate the deployment of these automated solutions. Academic research in this space is growing and continues to influence current solutions, as well as strengthen foundational knowledge which will make autonomous agents in this space a possibility.
labels: cs.AI, cs.CR
__index_level_0__: 232,449
2306.10044
A Practical Entity Linking System for Tables in Scientific Literature
Entity linking is an important step towards constructing knowledge graphs that facilitate advanced question answering over scientific documents, including the retrieval of relevant information included in tables within these documents. This paper introduces a general-purpose system for linking entities to items in the Wikidata knowledge base. It describes how we adapt this system for linking domain-specific entities, especially for those entities embedded within tables drawn from COVID-19-related scientific literature. We describe the setup of an efficient offline instance of the system that enables our entity-linking approach to be more feasible in practice. As part of a broader approach to infer the semantic meaning of scientific tables, we leverage the structural and semantic characteristics of the tables to improve overall entity linking performance.
labels: cs.AI, cs.IR, cs.CL
__index_level_0__: 374,069
1312.5641
Recursive Robust PCA or Recursive Sparse Recovery in Large but Structured Noise (parts 1 and 2 combined)
This work studies the recursive robust principal components analysis (PCA) problem. If the outlier is the signal-of-interest, this problem can be interpreted as one of recursively recovering a time sequence of sparse vectors, $S_t$, in the presence of large but structured noise, $L_t$. The structure that we assume on $L_t$ is that $L_t$ is dense and lies in a low dimensional subspace that is either fixed or changes "slowly enough". A key application where this problem occurs is in video surveillance where the goal is to separate a slowly changing background ($L_t$) from moving foreground objects ($S_t$) on-the-fly. To solve the above problem, in recent work, we introduced a novel solution called Recursive Projected CS (ReProCS). In this work we develop a simple modification of the original ReProCS idea and analyze it. This modification assumes knowledge of a subspace change model on the $L_t$'s. Under mild assumptions and a denseness assumption on the unestimated part of the subspace of $L_t$ at various times, we show that, with high probability (w.h.p.), the proposed approach can exactly recover the support set of $S_t$ at all times; and the reconstruction errors of both $S_t$ and $L_t$ are upper bounded by a time-invariant and small value. In simulation experiments, we observe that the last assumption holds as long as there is some support change of $S_t$ every few frames.
labels: cs.IT
__index_level_0__: 29,249
1909.07480
Z-Net: an Anisotropic 3D DCNN for Medical CT Volume Segmentation
Accurate volume segmentation from the Computed Tomography (CT) scan is a common prerequisite for pre-operative planning, intra-operative guidance and quantitative assessment of therapeutic outcomes in robot-assisted Minimally Invasive Surgery (MIS). 3D Deep Convolutional Neural Network (DCNN) is a viable solution for this task, but is memory intensive. Small isotropic patches are cropped from the original and large CT volume to mitigate this issue in practice, but it may cause discontinuities between the adjacent patches and severe class-imbalances within individual sub-volumes. This paper presents a new 3D DCNN framework, namely Z-Net, to tackle the discontinuity and class-imbalance issue by preserving a full field-of-view of the objects in the XY planes using anisotropic spatial separable convolutions. The proposed Z-Net can be seamlessly integrated into existing 3D DCNNs with isotropic convolutions such as 3D U-Net and V-Net, with improved volume segmentation Intersection over Union (IoU) - up to $12.6\%$. Detailed validation of Z-Net is provided for CT aortic, liver and lung segmentation, demonstrating the effectiveness and practical value of Z-Net for intra-operative 3D navigation in robot-assisted MIS.
labels: cs.LG, cs.CV
__index_level_0__: 145,672
1901.02073
Locally Repairable Convolutional Codes with Sliding Window Repair
Locally repairable convolutional codes (LRCCs) for distributed storage systems (DSSs) are introduced in this work. They enable local repair, for a single node erasure (or more generally, $ \partial - 1 $ erasures per local group), and sliding-window global repair, which can correct erasure patterns with up to $ {\rm d}^c_j - 1 $ erasures in every window of $ j+1 $ consecutive blocks of $ n $ nodes, where $ {\rm d}^c_j $ is the $ j $th column distance of the code. The parameter $ j $ can be adjusted, for a fixed LRCC, according to different catastrophic erasure patterns, requiring only to contact $ n(j+1) - {\rm d}^c_j + 1 $ nodes, plus less than $ \mu n $ other nodes, in the storage system, where $ \mu $ is the memory of the code. A Singleton-type bound is provided for $ {\rm d}^c_j $. If it attains such a bound, an LRCC can correct the same number of catastrophic erasures in a window of length $ n(j+1) $ as an optimal locally repairable block code of the same rate and locality, and with block length $ n(j+1) $. In addition, the LRCC is able to perform the flexible and somehow local sliding-window repair by adjusting $ j $. Furthermore, by adjusting and/or sliding the window, the LRCC can potentially correct more erasures in the original window of $ n(j+1) $ nodes than an optimal locally repairable block code of the same rate and locality, and length $ n(j+1) $. Finally, the concept of partial maximum distance profile (partial MDP) codes is introduced. Partial MDP codes can correct all information-theoretically correctable erasure patterns for a given locality, local distance and information rate. An explicit construction of partial MDP codes whose column distances attain the provided Singleton-type bound, up to certain parameter $ j=L $, is obtained based on known maximum sum-rank distance convolutional codes.
labels: cs.IT
__index_level_0__: 118,124
1711.02478
Grafting for Combinatorial Boolean Model using Frequent Itemset Mining
This paper introduces the combinatorial Boolean model (CBM), which is defined as the class of linear combinations of conjunctions of Boolean attributes. This paper addresses the issue of learning CBM from labeled data. CBM is of high knowledge interpretability but na\"{i}ve learning of it requires exponentially large computation time with respect to data dimension and sample size. To overcome this computational difficulty, we propose an algorithm GRAB (GRAfting for Boolean datasets), which efficiently learns CBM within the $L_1$-regularized loss minimization framework. The key idea of GRAB is to reduce the loss minimization problem to the weighted frequent itemset mining, in which frequent patterns are efficiently computable. We employ benchmark datasets to empirically demonstrate that GRAB is effective in terms of computational efficiency, prediction accuracy and knowledge discovery.
labels: cs.LG
__index_level_0__: 84,067
2406.13867
Error-Correcting Graph Codes
In this paper, we construct Error-Correcting Graph Codes. An error-correcting graph code of distance $\delta$ is a family $C$ of graphs on a common vertex set of size $n$, such that if we start with any graph in $C$, we would have to modify the neighborhoods of at least $\delta n$ vertices in order to obtain some other graph in $C$. This is a natural graph generalization of the standard Hamming distance error-correcting codes for binary strings. Yohananov and Yaakobi were the first to construct codes in this metric, constructing good codes for $\delta < 1/2$, and optimal codes for a large-alphabet analogue. We extend their work by showing 1. Combinatorial results determining the optimal rate vs. distance trade-off nonconstructively. 2. Graph code analogues of Reed-Solomon codes and code concatenation, leading to positive distance codes for all rates and positive rate codes for all distances. 3. Graph code analogues of dual-BCH codes, yielding large codes with distance $\delta = 1-o(1)$. This gives an explicit ''graph code of Ramsey graphs''. Several recent works, starting with the paper of Alon, Gujgiczer, K\"orner, Milojevi\'c, and Simonyi, have studied more general graph codes; where the symmetric difference between any two graphs in the code is required to have some desired property. Error-correcting graph codes are a particularly interesting instantiation of this concept.
labels: cs.IT, Other
__index_level_0__: 466,030
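The distance in this definition (the number of vertices whose neighborhoods must be modified, divided by n) is directly computable from adjacency matrices. A small illustrative check on two 3-vertex graphs:

```python
import numpy as np

def graph_distance(A, B):
    """Fraction of vertices whose neighborhoods differ between two graphs on
    the same vertex set, given as 0/1 adjacency matrices A and B. This is the
    graph analogue of normalized Hamming distance used in the abstract."""
    differs = np.any(A != B, axis=1)   # vertex v differs if row v differs
    return differs.mean()

# Triangle vs. path on 3 vertices: the edge {0, 2} is present in one graph
# but not the other, so vertices 0 and 2 have different neighborhoods.
tri  = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(graph_distance(tri, path))       # 2 of 3 vertices differ -> 2/3
```

A code of distance delta is then a family of graphs whose pairwise graph_distance is at least delta.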
2203.11372
Sensitivity of Single-Pulse Radar Detection to Radar State Uncertainty
Mission planners for aircraft operating under threat of detection from ground-based radar systems are often concerned with the probability of detection. Current approaches to path planning in such environments consider the radar state (i.e. radar position and parameters) to be deterministic and known. In practice, there is uncertainty in the radar state which induces uncertainty in the probability of detection. This paper presents a method to incorporate the uncertainty of the radar state in a single-pulse radar detection model. The method linearizes the radar detection model with respect to the radar state and uses the linearized models to estimate, to first order, the variance of the probability of detection. The results in this paper validate the linearization using Monte Carlo analysis and illustrate the sensitivity of the probability of detection to radar state uncertainty.
labels: cs.SY
__index_level_0__: 286,884
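The linearization step is an instance of the standard delta method: propagate the variance of the uncertain state through the derivative of the detection model. The detection curve, parameter values, and Monte Carlo check below are illustrative assumptions, not the paper's single-pulse radar model:

```python
import numpy as np

def pd_model(r, r0=50.0, s=5.0):
    """Toy detection-probability curve vs. range (illustrative stand-in for a
    single-pulse radar detection model)."""
    return 1.0 / (1.0 + np.exp((r - r0) / s))

def pd_variance_linearized(r, sigma_r, h=1e-4):
    """First-order (delta-method) variance of P_d induced by uncertainty in
    the range r: Var[P_d] ~= (dP_d/dr)^2 * Var[r]."""
    dpd = (pd_model(r + h) - pd_model(r - h)) / (2 * h)   # central difference
    return dpd**2 * sigma_r**2

rng = np.random.default_rng(0)
r, sigma_r = 48.0, 1.0
mc = pd_model(rng.normal(r, sigma_r, 200_000)).var()   # Monte Carlo reference
lin = pd_variance_linearized(r, sigma_r)
print(lin, mc)   # the two estimates agree closely when sigma_r is small
```

As in the paper's validation, the first-order estimate tracks the Monte Carlo variance well while sigma_r is small relative to the curvature scale of the detection model.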
2307.01954
FEMDA: A Robust and Flexible Classification Method
Linear and Quadratic Discriminant Analysis (LDA and QDA) are well-known classical methods but can suffer heavily from non-Gaussian distributions and/or contaminated datasets, mainly because the underlying Gaussian assumption is not robust. This paper studies the robustness to scale changes in the data of a new discriminant analysis technique in which each data point is drawn from its own arbitrary Elliptically Symmetrical (ES) distribution with its own arbitrary scale parameter. Such a model allows for possibly very heterogeneous, independent but non-identically distributed samples. The new decision rule derived is simple, fast, and robust to scale changes in the data compared to other state-of-the-art methods.
labels: cs.LG
__index_level_0__: 377,529
1808.10328
Asymptotically Optimal Codes Correcting Fixed-Length Duplication Errors in DNA Storage Systems
A (tandem) duplication of length $ k $ is an insertion of an exact copy of a substring of length $ k $ next to its original position. This and related types of impairments are of relevance in modeling communication in the presence of synchronization errors, as well as in several information storage applications. We demonstrate that Levenshtein's construction of binary codes correcting insertions of zeros is, with minor modifications, applicable also to channels with arbitrary alphabets and with duplication errors of arbitrary (but fixed) length $ k $. Furthermore, we derive bounds on the cardinality of optimal $ q $-ary codes correcting up to $ t $ duplications of length $ k $, and establish the following corollaries in the asymptotic regime of growing block-length: 1.) the presented family of codes is optimal for every $ q, t, k $, in the sense of the asymptotic scaling of code redundancy; 2.) the upper bound, when specialized to $ q = 2 $, $ k = 1 $, improves upon Levenshtein's bound for every $ t \geq 3 $; 3.) the bounds coincide for $ t = 1 $, thus yielding the exact asymptotic behavior of the size of optimal single-duplication-correcting codes.
labels: cs.IT, Other
__index_level_0__: 106,365
2011.12010
Uncertainty Estimation and Calibration with Finite-State Probabilistic RNNs
Uncertainty quantification is crucial for building reliable and trustable machine learning systems. We propose to estimate uncertainty in recurrent neural networks (RNNs) via stochastic discrete state transitions over recurrent timesteps. The uncertainty of the model can be quantified by running a prediction several times, each time sampling from the recurrent state transition distribution, leading to potentially different results if the model is uncertain. Alongside uncertainty quantification, our proposed method offers several advantages in different settings. The proposed method can (1) learn deterministic and probabilistic automata from data, (2) learn well-calibrated models on real-world classification tasks, (3) improve the performance of out-of-distribution detection, and (4) control the exploration-exploitation trade-off in reinforcement learning.
labels: cs.LG
__index_level_0__: 208,025
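The sampling-based uncertainty estimate described here can be illustrated with a toy probabilistic automaton (the four states, transition matrices, and emission map below are invented for this sketch, not taken from the paper): a deterministic machine produces identical predictions on every run, while stochastic transitions make runs disagree, and that disagreement is the uncertainty signal.

```python
import numpy as np

def sample_predictions(trans, emit, steps, runs, rng):
    """Run a finite-state probabilistic model several times, sampling a
    discrete state transition at every timestep, and collect the final
    emission of each run. Spread across runs signals model uncertainty."""
    outs = []
    for _ in range(runs):
        s = 0                                            # fixed start state
        for _ in range(steps):
            s = rng.choice(len(trans), p=trans[s])       # stochastic transition
        outs.append(emit[s])
    return np.array(outs)

rng = np.random.default_rng(0)
emit = np.array([0, 0, 1, 1])            # class emitted from each state

det = np.eye(4)[[1, 2, 3, 0]]            # one-hot rows: deterministic automaton
unc = np.full((4, 4), 0.25)              # near-uniform rows: uncertain automaton

certain = sample_predictions(det, emit, steps=5, runs=50, rng=rng)
uncertain = sample_predictions(unc, emit, steps=5, runs=50, rng=rng)
print(certain.std(), uncertain.std())    # zero spread vs. positive spread
```

In the paper the transition distribution is learned end-to-end inside an RNN cell; repeated stochastic forward passes then play the role of the repeated runs above.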
1412.6720
Off-grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function
Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation owing to its advantages over conventional methods. However, the performance of compressive sensing (CS)-based estimation methods decreases when the true DOAs are not exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this work, we analyze the convexity of the DML function in the vicinity of the global solution. In particular, under the condition of a large array, we search for an approximately convex range around the true DOAs within which the DML function is guaranteed to be convex. Based on the convexity of the DML function, we propose a computationally efficient algorithm framework for off-grid DOA estimation. Numerical experiments show that the rough convex range accords well with the exact convex range of the DML function for large arrays, and demonstrate the superior performance of the proposed methods in terms of accuracy, robustness and speed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
38,702
2201.02913
Intelligent Reflecting Surface-Aided LEO Satellite Communication: Cooperative Passive Beamforming and Distributed Channel Estimation
We consider in this paper a new intelligent reflecting surface (IRS)-aided LEO satellite communication system, by utilizing the controllable phase shifts of massive passive reflecting elements to achieve flexible beamforming, which copes with the time-varying channel between the high-mobility satellite (SAT) and ground node (GN) cost-effectively. In particular, we propose a new architecture for IRS-aided LEO satellite communication where IRSs are deployed at both sides of the SAT and GN, and study their cooperative passive beamforming (CPB) design over line-of-sight (LoS)-dominant single-reflection and double-reflection channels. Specifically, we jointly optimize the active transmit/receive beamforming at the SAT/GN as well as the CPB at two-sided IRSs to maximize the overall channel gain from the SAT to each GN. Interestingly, we show that under LoS channel conditions, the high-dimensional SAT-GN channel can be decomposed into the outer product of two low-dimensional vectors. By exploiting the decomposed SAT-GN channel, we decouple the original beamforming optimization problem into two simpler subproblems corresponding to the SAT and GN sides, respectively, which are both solved in closed-form. Furthermore, we propose an efficient transmission protocol to conduct channel estimation and beam tracking, which only requires independent processing of the SAT and GN in a distributed manner, thus substantially reducing the implementation complexity. Simulation results validate the performance advantages of the proposed IRS-aided LEO satellite communication system with two-sided cooperative IRSs, as compared to various baseline schemes such as the conventional reflect-array and one-sided IRS.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
274,694
2207.02818
mu-synthesis-based Generalized Robust Framework for Grid-following and Grid-forming Inverters
Grid-following and grid-forming inverters are integral components of microgrids and for integration of renewable energy sources with the grid. For grid-following inverters, which need to emulate controllable current sources, a significant challenge is to address the large uncertainty of the grid impedance. For grid-forming inverters, which need to emulate a controllable voltage source, large uncertainty due to varying loads has to be addressed. In this article, a mu-synthesis-based robust control design methodology, where performance under quantified uncertainty is guaranteed, is developed under a unified approach for both grid-following and grid-forming inverters. The control objectives, while designing the proposed optimal controllers, are: i) reference tracking, disturbance rejection, harmonic compensation capability with sufficient LCL resonance damping under large variations of grid impedance uncertainty for grid-following inverters; ii) reference tracking, disturbance rejection, harmonic compensation capability with enhanced dynamic response under large variations of equivalent loading uncertainty for grid-forming inverters. A combined system-in-the-loop (SIL), controller hardware-in-the-loop (CHIL) and power hardware-in-the-loop (PHIL) based experimental validation on a 10 kVA microgrid system with two physical inverter systems is conducted in order to evaluate the efficacy and viability of the proposed controllers.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
306,629
2401.09980
A Comparative Analysis of U-Net-based models for Segmentation of Cardiac MRI
Medical imaging refers to the technologies and methods utilized to view the human body and its interior, in order to diagnose, monitor, or even treat medical disorders. This paper explores the application of deep learning techniques in the semantic segmentation of cardiac short-axis MRI (Magnetic Resonance Imaging) images, aiming to enhance the diagnosis, monitoring, and treatment of medical disorders related to the heart. The focus centers on implementing various architectures that are derivatives of U-Net, to effectively isolate specific parts of the heart for comprehensive anatomical and functional analysis. Through a combination of images, graphs, and quantitative metrics, the efficacy of the models and their predictions are showcased. Additionally, this paper addresses encountered challenges and outlines strategies for future improvements. This abstract provides a concise overview of the efforts in utilizing deep learning for cardiac image segmentation, emphasizing both the accomplishments and areas for further refinement.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
422,447
2411.18656
The Return of Pseudosciences in Artificial Intelligence: Have Machine Learning and Deep Learning Forgotten Lessons from Statistics and History?
In today's world, AI programs powered by Machine Learning are ubiquitous, and have achieved seemingly exceptional performance across a broad range of tasks, from medical diagnosis and credit rating in banking, to theft detection via video analysis, and even predicting political or sexual orientation from facial images. These predominantly deep learning methods excel due to their extraordinary capacity to process vast amounts of complex data to extract complex correlations and relationships from different levels of features. In this paper, we contend that the designers and final users of these ML methods have forgotten a fundamental lesson from statistics: correlation does not imply causation. Not only do most state-of-the-art methods neglect this crucial principle, but by doing so they often produce nonsensical or flawed causal models, akin to social astrology or physiognomy. Consequently, we argue that current efforts to make AI models more ethical by merely reducing biases in the training data are insufficient. Through examples, we will demonstrate that the potential for harm posed by these methods can only be mitigated by a complete rethinking of their core models, improved quality assessment metrics and policies, and by maintaining human oversight throughout the process.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
511,951
2302.09991
Towards Measuring and Scoring Speaker Diarization Fairness
Speaker diarization, or the task of finding "who spoke and when", is now used in almost every speech processing application. Nevertheless, its fairness has not yet been evaluated because there was no protocol to study its biases one by one. In this paper we propose a protocol and a scoring method designed to evaluate speaker diarization fairness. We apply this protocol to a large dataset of spoken utterances and report the performance of speaker diarization depending on the gender, age, and accent of the speaker and the length of the spoken sentence. Some biases induced by the gender or the accent of the speaker were identified when we applied a state-of-the-art speaker diarization method.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
346,645
2407.05412
FM-OSD: Foundation Model-Enabled One-Shot Detection of Anatomical Landmarks
One-shot detection of anatomical landmarks is gaining significant attention for its efficiency in using minimal labeled data to produce promising results. However, the success of current methods heavily relies on the employment of extensive unlabeled data to pre-train an effective feature extractor, which limits their applicability in scenarios where a substantial amount of unlabeled data is unavailable. In this paper, we propose the first foundation model-enabled one-shot landmark detection (FM-OSD) framework for accurate landmark detection in medical images by utilizing solely a single template image without any additional unlabeled data. Specifically, we use the frozen image encoder of visual foundation models as the feature extractor, and introduce dual-branch global and local feature decoders to increase the resolution of extracted features in a coarse-to-fine manner. The introduced feature decoders are efficiently trained with a distance-aware similarity learning loss to incorporate domain knowledge from the single template image. Moreover, a novel bidirectional matching strategy is developed to improve both robustness and accuracy of landmark detection in the case of the scattered similarity maps obtained by foundation models. We validate our method on two public anatomical landmark detection datasets. By using solely a single template image, our method demonstrates significant superiority over strong state-of-the-art one-shot landmark detection methods.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
470,964
2002.10819
Variational Inference and Bayesian CNNs for Uncertainty Estimation in Multi-Factorial Bone Age Prediction
In addition to its extensive use in clinical medicine, biological age (BA) is used in legal medicine to assess unknown chronological age (CA) in applications where identification documents are not available. Automatic methods for age estimation proposed in the literature predict point estimates, which can be misleading without the quantification of predictive uncertainty. In our multi-factorial age estimation method from MRI data, we used the Variational Inference approach to estimate the uncertainty of a Bayesian CNN model. Distinguishing model uncertainty from data uncertainty, we interpreted data uncertainty as biological variation, i.e. the range of possible CA of subjects having the same BA.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
165,525
cmp-lg/9803001
Automating Coreference: The Role of Annotated Training Data
We report here on a study of interannotator agreement in the coreference task as defined by the Message Understanding Conference (MUC-6 and MUC-7). Based on feedback from annotators, we clarified and simplified the annotation specification. We then performed an analysis of disagreement among several annotators, concluding that only 16% of the disagreements represented genuine disagreement about coreference; the remainder of the cases were mostly typographical errors or omissions, easily reconciled. Initially, we measured interannotator agreement in the low 80s for precision and recall. To try to improve upon this, we ran several experiments. In our final experiment, we separated the tagging of candidate noun phrases from the linking of actual coreferring expressions. This method shows promise - interannotator agreement climbed to the low 90s - but it needs more extensive validation. These results position the research community to broaden the coreference task to multiple languages, and possibly to different kinds of coreference.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
536,855
1905.03389
Learning to Evolve
Evolution and learning are two of the fundamental mechanisms by which life adapts in order to survive and to transcend limitations. These biological phenomena inspired successful computational methods such as evolutionary algorithms and deep learning. Evolution relies on random mutations and on random genetic recombination. Here we show that learning to evolve, i.e. learning to mutate and recombine better than at random, improves the result of evolution in terms of fitness increase per generation and even in terms of attainable fitness. We use deep reinforcement learning to learn to dynamically adjust the strategy of evolutionary algorithms to varying circumstances. Our methods outperform classical evolutionary algorithms on combinatorial and continuous optimization problems.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
130,185
1504.02719
Diffusion Component Analysis: Unraveling Functional Topology in Biological Networks
Complex biological systems have been successfully modeled by biochemical and genetic interaction networks, typically gathered from high-throughput (HTP) data. These networks can be used to infer functional relationships between genes or proteins. Using the intuition that the topological role of a gene in a network relates to its biological function, local or diffusion based "guilt-by-association" and graph-theoretic methods have had success in inferring gene functions. Here we seek to improve function prediction by integrating diffusion-based methods with a novel dimensionality reduction technique to overcome the incomplete and noisy nature of network data. In this paper, we introduce diffusion component analysis (DCA), a framework that plugs in a diffusion model and learns a low-dimensional vector representation of each node to encode the topological properties of a network. As a proof of concept, we demonstrate DCA's substantial improvement over state-of-the-art diffusion-based approaches in predicting protein function from molecular interaction networks. Moreover, our DCA framework can integrate multiple networks from heterogeneous sources, consisting of genomic information, biochemical experiments and other resources, to even further improve function prediction. Yet another layer of performance gain is achieved by integrating the DCA framework with support vector machines that take our node vector representations as features. Overall, our DCA framework provides a novel representation of nodes in a network that can be used as a plug-in architecture to other machine learning algorithms to decipher topological properties of and obtain novel insights into interactomes.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
41,952
2105.03117
Contrastive Learning for Unsupervised Image-to-Image Translation
Image-to-image translation aims to learn a mapping between different groups of visually distinguishable images. While recent methods have shown impressive ability to change even intricate appearance of images, they still rely on domain labels in training a model to distinguish between distinct visual features. Such dependency on labels often significantly limits the scope of applications since consistent and high-quality labels are expensive. Instead, we wish to capture visual features from images themselves and apply them to enable realistic translation without human-generated labels. To this end, we propose an unsupervised image-to-image translation method based on contrastive learning. The key idea is to learn a discriminator that differentiates between distinctive styles and let the discriminator supervise a generator to transfer those styles across images. During training, we randomly sample a pair of images and train the generator to change the appearance of one towards another while keeping the original structure. Experimental results show that our method outperforms the leading unsupervised baselines in terms of visual quality and translation accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
234,044
1909.07145
ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text (RRC-ArT)
This paper reports the ICDAR2019 Robust Reading Challenge on Arbitrary-Shaped Text (RRC-ArT) that consists of three major challenges: i) scene text detection, ii) scene text recognition, and iii) scene text spotting. A total of 78 submissions from 46 unique teams/individuals were received for this competition. The top performing score of each challenge is as follows: i) T1 - 82.65%, ii) T2.1 - 74.3%, iii) T2.2 - 85.32%, iv) T3.1 - 53.86%, and v) T3.2 - 54.91%. Apart from the results, this paper also details the ArT dataset, task descriptions, evaluation metrics, and participants' methods. The dataset, the evaluation kit as well as the results are publicly available at https://rrc.cvc.uab.es/?ch=14
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
145,605
2411.12756
FedCL-Ensemble Learning: A Framework of Federated Continual Learning with Ensemble Transfer Learning Enhanced for Alzheimer's MRI Classifications while Preserving Privacy
This research work introduces a novel approach to the classification of Alzheimer's disease by using advanced deep learning techniques combined with secure data processing methods. It primarily uses transfer learning models such as ResNet, ImageNet, and VNet to extract high-level features from medical image data. Thereafter, these pre-trained models were fine-tuned for subtle Alzheimer's-related patterns such that the model is capable of robust feature extraction over varying data sources. Further, federated learning approaches were incorporated to tackle a few other challenges related to classification, aiming to provide better prediction performance and protect data privacy. The proposed model was built using federated learning without sharing sensitive patient data. This way, the decentralized model benefits from the large and diversified dataset that it is trained upon while ensuring confidentiality. A cipher-based encryption mechanism is added to secure the transmission of data and further ensure the privacy and integrity of patient information throughout training and classification. The experimental results not only improve the accuracy of Alzheimer's classification but also provide a framework for secure and collaborative analysis of health care data.
false
false
false
false
true
true
true
false
false
false
false
true
false
false
false
false
false
false
509,526
2309.01262
Multimodal Contrastive Learning with Hard Negative Sampling for Human Activity Recognition
Human Activity Recognition (HAR) systems have been extensively studied by the vision and ubiquitous computing communities due to their practical applications in daily life, such as smart homes, surveillance, and health monitoring. Typically, this process is supervised in nature and the development of such systems requires access to large quantities of annotated data. However, the higher costs and challenges associated with obtaining good quality annotations have rendered the application of self-supervised methods an attractive option, and contrastive learning comprises one such method. However, a major component of successful contrastive learning is the selection of good positive and negative samples. Although positive samples are directly obtainable, sampling good negative samples remains a challenge. As human activities can be recorded by several modalities like camera and IMU sensors, we propose a hard negative sampling method for multimodal HAR with a hard negative sampling loss for skeleton and IMU data pairs. We exploit hard negatives that have different labels from the anchor but are projected nearby in the latent space using an adjustable concentration parameter. Through extensive experiments on two benchmark datasets: UTD-MHAD and MMAct, we demonstrate the robustness of our approach for learning strong feature representations for HAR tasks, including in the limited-data setting. We further show that our model outperforms all other state-of-the-art methods for the UTD-MHAD dataset, and self-supervised methods for MMAct: Cross session, even when uni-modal data are used during downstream activity recognition.
true
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
389,614
2309.09384
Mitigating Over-Smoothing and Over-Squashing using Augmentations of Forman-Ricci Curvature
While Graph Neural Networks (GNNs) have been successfully leveraged for learning on graph-structured data across domains, several potential pitfalls have been described recently. Those include the inability to accurately leverage information encoded in long-range connections (over-squashing), as well as difficulties distinguishing the learned representations of nearby nodes with growing network depth (over-smoothing). An effective way to characterize both effects is discrete curvature: Long-range connections that underlie over-squashing effects have low curvature, whereas edges that contribute to over-smoothing have high curvature. This observation has given rise to rewiring techniques, which add or remove edges to mitigate over-smoothing and over-squashing. Several rewiring approaches utilizing graph characteristics, such as curvature or the spectrum of the graph Laplacian, have been proposed. However, existing methods, especially those based on curvature, often require expensive subroutines and careful hyperparameter tuning, which limits their applicability to large-scale graphs. Here we propose a rewiring technique based on Augmented Forman-Ricci curvature (AFRC), a scalable curvature notion that can be computed in linear time. We prove that AFRC effectively characterizes over-smoothing and over-squashing effects in message-passing GNNs. We complement our theoretical results with experiments, which demonstrate that the proposed approach achieves state-of-the-art performance while significantly reducing the computational cost in comparison with other methods. Utilizing fundamental properties of discrete curvature, we propose effective heuristics for hyperparameters in curvature-based rewiring, which avoids expensive hyperparameter searches, further improving the scalability of the proposed approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
392,586
2302.12506
Exploring the Enablers of Digital Transformation in Small and Medium-Sized Enterprise
Recently, digital transformation has attracted much attention from both academics and practitioners. With the advent of digital technologies, small- and medium-sized enterprises (SMEs) have obtained the capacity to initiate digital transformation initiatives in a similar fashion to large-sized organizations. The innate characteristics of digital technologies also favor SMEs in initiating digital transformation. However, the process of digital transformation in SMEs remains a black box, and the existing findings on digital transformation in SMEs are limited and fragmented. Considering the important contribution SMEs can offer to nations and economies, it is timely and relevant to conduct a profound analysis of digital transformation in SMEs. By conducting a thorough review of existing related literature in the management, information systems, and business disciplines, this book chapter aims to understand both the internal and external enablers of digital transformation in SMEs.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
347,595
1301.7236
Reverse Berlekamp-Massey Decoding
We propose a new algorithm for decoding Reed-Solomon codes (up to half the minimum distance) and for computing inverses in $F[x]/m(x)$. The proposed algorithm is similar in spirit and structure to the Berlekamp-Massey algorithm, but it works naturally for general $m(x)$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
21,588
2303.17523
Quantum Circuit Fidelity Improvement with Long Short-Term Memory Networks
Although NISQ computers show great promise in accelerating many tasks that are not practically possible using classical computation, useful quantum computing is still a long way off. One important reason is the fragile nature of quantum hardware. As the building blocks of a quantum circuit (QC), quantum gates and qubits are susceptible to external interference, and therefore even a simple QC can produce extremely noisy output. Since it is hard to distinguish whether the output represents meaningful computation or just random noise, it raises the question of how much we can rely on the output of a QC, i.e., the fidelity of the QC. In this paper, we propose a simple yet intuitive metric to measure the fidelity of a QC. By using this metric, we can observe the evolution of fidelity with time as the QC interacts with its external environment. Consequently, we can frame fidelity prediction as a Time Series Forecasting problem and use Long Short-Term Memory (LSTM) neural networks to better estimate the fidelity of a QC. This gives the user better opportunities to optimize the mapping of qubits into the quantum hardware for larger gains. We introduce the LSTM architecture and present a complete workflow to build the training circuit dataset. The trained LSTM system, Q-fid, can predict the output fidelity of a QC running on a specific quantum processor, without the need for any separate input of hardware calibration data or gate error rates. Evaluated on the QASMbench NISQ benchmark suite, Q-fid's prediction achieves an average RMSE of 0.0515, up to 24.7x more accurate than the default Qiskit transpile tool mapomatic. When used to find the high-fidelity circuit layouts from the available circuit transpilations, Q-fid predicts the fidelity for the top 10% layouts with an average RMSE of 0.0252, up to 32.8x more accurate than mapomatic.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
355,237
1505.04474
Visual Semantic Role Labeling
In this paper we introduce the problem of Visual Semantic Role Labeling: given an image we want to detect people doing actions and localize the objects of interaction. Classical approaches to action recognition either study the task of action classification at the image or video clip level or at best produce a bounding box around the person doing the action. We believe such an output is inadequate and a complete understanding can only come when we are able to associate objects in the scene to the different semantic roles of the action. To enable progress towards this goal, we annotate a dataset of 16K people instances in 10K images with actions they are doing and associate objects in the scene with different semantic roles for each action. Finally, we provide a set of baseline algorithms for this task and analyze error modes providing directions for future work.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
43,192
2304.03550
Hierarchical Disentanglement-Alignment Network for Robust SAR Vehicle Recognition
Vehicle recognition is a fundamental problem in SAR image interpretation. However, robustly recognizing vehicle targets is a challenging task in SAR due to the large intraclass variations and small interclass variations. Additionally, the lack of large datasets further complicates the task. Inspired by the analysis of target signature variations and deep learning explainability, this paper proposes a novel domain alignment framework named the Hierarchical Disentanglement-Alignment Network (HDANet) to achieve robustness under various operating conditions. Concisely, HDANet integrates feature disentanglement and alignment into a unified framework with three modules: domain data generation, multitask-assisted mask disentanglement, and domain alignment of target features. The first module generates diverse data for alignment, and three simple but effective data augmentation methods are designed to simulate target signature variations. The second module disentangles the target features from background clutter using the multitask-assisted mask to prevent clutter from interfering with subsequent alignment. The third module employs a contrastive loss for domain alignment to extract robust target features from generated diverse data and disentangled features. Lastly, the proposed method demonstrates impressive robustness across nine operating conditions in the MSTAR dataset, and extensive qualitative and quantitative analyses validate the effectiveness of our framework.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
356,860
1207.0132
Answering Table Queries on the Web using Column Keywords
We present the design of a structured search engine which returns a multi-column table in response to a query consisting of keywords describing each of its columns. We answer such queries by exploiting the millions of tables on the Web because these are much richer sources of structured knowledge than free-format text. However, a corpus of tables harvested from arbitrary HTML web pages presents huge challenges of diversity and redundancy not seen in centrally edited knowledge bases. We concentrate on one concrete task in this paper. Given a set of Web tables T1, ..., Tn, and a query Q with q sets of keywords Q1, ..., Qq, decide for each Ti if it is relevant to Q and if so, identify the mapping between the columns of Ti and query columns. We represent this task as a graphical model that jointly maps all tables by incorporating diverse sources of clues spanning matches in different parts of the table, corpus-wide co-occurrence statistics, and content overlap across table columns. We define a novel query segmentation model for matching keywords to table columns, and a robust mechanism of exploiting content overlap across table columns. We design efficient inference algorithms based on bipartite matching and constrained graph cuts to solve the joint labeling task. Experiments on a workload of 59 queries over a 25 million web table corpus show a significant boost in accuracy over baseline IR methods.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
17,126
2309.01236
BodySLAM++: Fast and Tightly-Coupled Visual-Inertial Camera and Human Motion Tracking
Robust, fast, and accurate human state - 6D pose and posture - estimation remains a challenging problem. For real-world applications, the ability to estimate the human state in real-time is highly desirable. In this paper, we present BodySLAM++, a fast, efficient, and accurate human and camera state estimation framework relying on visual-inertial data. BodySLAM++ extends an existing visual-inertial state estimation framework, OKVIS2, to solve the dual task of estimating camera and human states simultaneously. Our system improves the accuracy of both human and camera state estimation with respect to baseline methods by 26% and 12%, respectively, and achieves real-time performance at 15+ frames per second on an Intel i7-model CPU. Experiments were conducted on a custom dataset containing both ground truth human and camera poses collected with an indoor motion tracking system.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
389,601
1307.7159
MacWilliams Extension Theorems and the Local-Global Property for Codes over Rings
The MacWilliams extension theorem is investigated for various weight functions over finite Frobenius rings. The problem is reformulated in terms of a local-global property for subgroups of the general linear group. Among other things, it is shown that the extension theorem holds true for poset weights if and only if the underlying poset is hierarchical. Specifically, the Rosenbloom-Tsfasman weight for vector codes satisfies the extension theorem, whereas the Niederreiter-Rosenbloom-Tsfasman weight for matrix codes does not. A short character-theoretic proof of the well-known MacWilliams extension theorem for the homogeneous weight is provided. Moreover it is shown that the extension theorem carries over to direct products of weights, but not to symmetrized products.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
26,072
2112.02279
U2-Former: A Nested U-shaped Transformer for Image Restoration
While Transformer has achieved remarkable performance in various high-level vision tasks, it is still challenging to exploit the full potential of Transformer in image restoration. The crux lies in the limited depth of applying Transformer in the typical encoder-decoder framework for image restoration, resulting from heavy self-attention computation load and inefficient communications across different depth (scales) of layers. In this paper, we present a deep and effective Transformer-based network for image restoration, termed as U2-Former, which is able to employ Transformer as the core operation to perform image restoration in a deep encoding and decoding space. Specifically, it leverages the nested U-shaped structure to facilitate the interactions across different layers with different scales of feature maps. Furthermore, we optimize the computational efficiency for the basic Transformer block by introducing a feature-filtering mechanism to compress the token representation. Apart from the typical supervision ways for image restoration, our U2-Former also performs contrastive learning in multiple aspects to further decouple the noise component from the background image. Extensive experiments on various image restoration tasks, including reflection removal, rain streak removal and dehazing respectively, demonstrate the effectiveness of the proposed U2-Former.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,796
2211.00746
3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D Point Clouds
We propose a method for joint detection and tracking of multiple objects in 3D point clouds, a task conventionally treated as a two-step process comprising object detection followed by data association. Our method embeds both steps into a single end-to-end trainable network eliminating the dependency on external object detectors. Our model exploits temporal information employing multiple frames to detect objects and track them in a single network, thereby making it a utilitarian formulation for real-world scenarios. Computing affinity matrix by employing features similarity across consecutive point cloud scans forms an integral part of visual tracking. We propose an attention-based refinement module to refine the affinity matrix by suppressing erroneous correspondences. The module is designed to capture the global context in affinity matrix by employing self-attention within each affinity matrix and cross-attention across a pair of affinity matrices. Unlike competing approaches, our network does not require complex post-processing algorithms, and processes raw LiDAR frames to directly output tracking results. We demonstrate the effectiveness of our method on the three tracking benchmarks: JRDB, Waymo, and KITTI. Experimental evaluations indicate the ability of our model to generalize well across datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
327,981
2310.16781
Kiki or Bouba? Sound Symbolism in Vision-and-Language Models
Although the mapping between sound and meaning in human language is assumed to be largely arbitrary, research in cognitive science has shown that there are non-trivial correlations between particular sounds and meanings across languages and demographic groups, a phenomenon known as sound symbolism. Among the many dimensions of meaning, sound symbolism is particularly salient and well-demonstrated with regards to cross-modal associations between language and the visual domain. In this work, we address the question of whether sound symbolism is reflected in vision-and-language models such as CLIP and Stable Diffusion. Using zero-shot knowledge probing to investigate the inherent knowledge of these models, we find strong evidence that they do show this pattern, paralleling the well-known kiki-bouba effect in psycholinguistics. Our work provides a novel method for demonstrating sound symbolism and understanding its nature using computational tools. Our code will be made publicly available.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
402,871
2406.10083
On the Evaluation of Speech Foundation Models for Spoken Language Understanding
The Spoken Language Understanding Evaluation (SLUE) suite of benchmark tasks was recently introduced to address the need for open resources and benchmarking of complex spoken language understanding (SLU) tasks, including both classification and sequence generation tasks, on natural speech. The benchmark has demonstrated preliminary success in using pre-trained speech foundation models (SFM) for these SLU tasks. However, the community still lacks a fine-grained understanding of the comparative utility of different SFMs. Inspired by this, we ask: which SFMs offer the most benefits for these complex SLU tasks, and what is the most effective approach for incorporating these SFMs? To answer this, we perform an extensive evaluation of multiple supervised and self-supervised SFMs using several evaluation protocols: (i) frozen SFMs with a lightweight prediction head, (ii) frozen SFMs with a complex prediction head, and (iii) fine-tuned SFMs with a lightweight prediction head. Although the supervised SFMs are pre-trained on much more speech recognition data (with labels), they do not always outperform self-supervised SFMs; the latter tend to perform at least as well as, and sometimes better than, supervised SFMs, especially on the sequence generation tasks in SLUE. While there is no universally optimal way of incorporating SFMs, the complex prediction head gives the best performance for most tasks, although it increases the inference time. We also introduce an open-source toolkit and performance leaderboard, SLUE-PERB, for these tasks and modeling strategies.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
464,216
2402.01763
When Large Language Models Meet Vector Databases: A Survey
This survey explores the synergistic potential of Large Language Models (LLMs) and Vector Databases (VecDBs), a burgeoning but rapidly evolving research area. With the proliferation of LLMs comes a host of challenges, including hallucinations, outdated knowledge, prohibitive commercial application costs, and memory issues. VecDBs emerge as a compelling solution to these issues by offering an efficient means to store, retrieve, and manage the high-dimensional vector representations intrinsic to LLM operations. Through this nuanced review, we delineate the foundational principles of LLMs and VecDBs and critically analyze their integration's impact on enhancing LLM functionalities. This discourse extends into a discussion on the speculative future developments in this domain, aiming to catalyze further research into optimizing the confluence of LLMs and VecDBs for advanced data handling and knowledge extraction capabilities.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
true
false
426,204
2110.11425
A Machine Learning Framework Towards Transparency in Experts' Decision Quality
Expert workers make non-trivial decisions with significant implications. Experts' decision accuracy is thus a fundamental aspect of their judgment quality, key to both management and consumers of experts' services. Yet, in many important settings, transparency in experts' decision quality is rarely possible because ground truth data for evaluating the experts' decisions is costly and available only for a limited set of decisions. Furthermore, different experts typically handle exclusive sets of decisions, and thus prior solutions that rely on the aggregation of multiple experts' decisions for the same instance are inapplicable. We first formulate the problem of estimating experts' decision accuracy in this setting and then develop a machine-learning-based framework to address it. Our method effectively leverages both abundant historical data on workers' past decisions, and scarce decision instances with ground truth information. We conduct extensive empirical evaluations of our method's performance relative to alternatives using both semi-synthetic data based on publicly available datasets, and purposefully compiled dataset on real workers' decisions. The results show that our approach is superior to existing alternatives across diverse settings, including different data domains, experts' qualities, and the amount of ground truth data. To our knowledge, this paper is the first to posit and address the problem of estimating experts' decision accuracies from historical data with scarcely available ground truth, and it is the first to offer comprehensive results for this problem setting, establishing the performances that can be achieved across settings, as well as the state-of-the-art performance on which future work can build.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
262,477
2410.03705
Gradient Boosting Decision Trees on Medical Diagnosis over Tabular Data
Medical diagnosis is a crucial task in the medical field, in terms of providing accurate classification and respective treatments. Having near-precise decisions based on correct diagnosis can affect a patient's life itself, and may even result in a catastrophe if not classified correctly. Several traditional machine learning (ML) methods, such as support vector machines (SVMs) and logistic regression, and state-of-the-art tabular deep learning (DL) methods, including TabNet and TabTransformer, have been proposed and used over tabular medical datasets. Additionally, due to the superior performances, lower computational costs, and easier optimization over different tasks, ensemble methods have been used in the field more recently. They offer a powerful alternative in terms of providing successful medical decision-making processes in several diagnosis tasks. In this study, we investigated the benefits of ensemble methods, especially the Gradient Boosting Decision Tree (GBDT) algorithms in medical classification tasks over tabular data, focusing on XGBoost, CatBoost, and LightGBM. The experiments demonstrate that GBDT methods outperform traditional ML and deep neural network architectures and have the highest average rank over several benchmark tabular medical diagnosis datasets. Furthermore, they require much less computational power compared to DL models, creating the optimal methodology in terms of high performance and lower complexity.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
494,906
2204.12463
Focal Sparse Convolutional Networks for 3D Object Detection
Non-uniformed 3D sparse data, e.g., point clouds or voxels in different spatial positions, make contribution to the task of 3D object detection in different ways. Existing basic components in sparse convolutional networks (Sparse CNNs) process all sparse data, regardless of regular or submanifold sparse convolution. In this paper, we introduce two new modules to enhance the capability of Sparse CNNs, both are based on making feature sparsity learnable with position-wise importance prediction. They are focal sparse convolution (Focals Conv) and its multi-modal variant of focal sparse convolution with fusion, or Focals Conv-F for short. The new modules can readily substitute their plain counterparts in existing Sparse CNNs and be jointly trained in an end-to-end fashion. For the first time, we show that spatially learnable sparsity in sparse convolution is essential for sophisticated 3D object detection. Extensive experiments on the KITTI, nuScenes and Waymo benchmarks validate the effectiveness of our approach. Without bells and whistles, our results outperform all existing single-model entries on the nuScenes test benchmark at the paper submission time. Code and models are at https://github.com/dvlab-research/FocalsConv.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
293,483
2402.14299
We Choose to Go to Space: Agent-driven Human and Multi-Robot Collaboration in Microgravity
We present SpaceAgents-1, a system for learning human and multi-robot collaboration (HMRC) strategies under microgravity conditions. Future space exploration requires humans to work together with robots. However, acquiring proficient robot skills and adept collaboration under microgravity conditions poses significant challenges within ground laboratories. To address this issue, we develop a microgravity simulation environment and present three typical configurations of intra-cabin robots. We propose a hierarchical heterogeneous multi-agent collaboration architecture: guided by foundation models, a Decision-Making Agent serves as a task planner for human-robot collaboration, while individual Skill-Expert Agents manage the embodied control of robots. This mechanism empowers the SpaceAgents-1 system to execute a range of intricate long-horizon HMRC tasks.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
431,616
2410.16151
Small Contributions, Small Networks: Efficient Neural Network Pruning Based on Relative Importance
Recent advancements have scaled neural networks to unprecedented sizes, achieving remarkable performance across a wide range of tasks. However, deploying these large-scale models on resource-constrained devices poses significant challenges due to substantial storage and computational requirements. Neural network pruning has emerged as an effective technique to mitigate these limitations by reducing model size and complexity. In this paper, we introduce an intuitive and interpretable pruning method based on activation statistics, rooted in information theory and statistical analysis. Our approach leverages the statistical properties of neuron activations to identify and remove weights with minimal contributions to neuron outputs. Specifically, we build a distribution of weight contributions across the dataset and utilize its parameters to guide the pruning process. Furthermore, we propose a Pruning-aware Training strategy that incorporates an additional regularization term to enhance the effectiveness of our pruning method. Extensive experiments on multiple datasets and network architectures demonstrate that our method consistently outperforms several baseline and state-of-the-art pruning techniques.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
500,898
2502.09795
Vision-based Geo-Localization of Future Mars Rotorcraft in Challenging Illumination Conditions
Planetary exploration using aerial assets has the potential for unprecedented scientific discoveries on Mars. While NASA's Mars helicopter Ingenuity proved flight in Martian atmosphere is possible, future Mars rotorcraft will require advanced navigation capabilities for long-range flights. One such critical capability is Map-based Localization (MbL) which registers an onboard image to a reference map during flight in order to mitigate cumulative drift from visual odometry. However, significant illumination differences between rotorcraft observations and a reference map prove challenging for traditional MbL systems, restricting the operational window of the vehicle. In this work, we investigate a new MbL system and propose Geo-LoFTR, a geometry-aided deep learning model for image registration that is more robust under large illumination differences than prior models. The system is supported by a custom simulation framework that uses real orbital maps to produce large amounts of realistic images of the Martian terrain. Comprehensive evaluations show that our proposed system outperforms prior MbL efforts in terms of localization accuracy under significant lighting and scale variations. Furthermore, we demonstrate the validity of our approach across a simulated Martian day.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
533,602
1902.06068
Deep Learning for Image Super-resolution: A Survey
Image Super-Resolution (SR) is an important class of image processing techniques to enhance the resolution of images and videos in computer vision. Recent years have witnessed remarkable progress of image super-resolution using deep learning techniques. This article aims to provide a comprehensive survey on recent advances of image super-resolution using deep learning approaches. In general, we can roughly group the existing studies of SR techniques into three major categories: supervised SR, unsupervised SR, and domain-specific SR. In addition, we also cover some other important issues, such as publicly available benchmark datasets and performance evaluation metrics. Finally, we conclude this survey by highlighting several future directions and open issues which should be further addressed by the community in the future.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
121,675
2108.08712
Teaching Uncertainty Quantification in Machine Learning through Use Cases
Uncertainty in machine learning is not generally taught as part of standard Machine Learning course curricula. In this paper we propose a short curriculum for a course about uncertainty in machine learning, and complement the course with a selection of use cases, aimed to trigger discussion and let students play with the concepts of uncertainty in a programming setting. Our use cases cover the concept of output uncertainty, Bayesian neural networks and weight distributions, sources of uncertainty, and out of distribution detection. We expect that this curriculum and set of use cases motivates the community to adopt these important concepts into courses for safety in AI.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
251,353
1107.4496
Cartesian stiffness matrix of manipulators with passive joints: analytical approach
The paper focuses on stiffness matrix computation for manipulators with passive joints. It proposes both explicit analytical expressions and an efficient recursive procedure that are applicable in general case and allow obtaining the desired matrix either in analytical or numerical form. Advantages of the developed technique and its ability to produce both singular and non-singular stiffness matrices are illustrated by application examples that deal with stiffness modeling of two Stewart-Gough platforms.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
11,402
2412.08521
EMS: Adaptive Evict-then-Merge Strategy for Head-wise KV Cache Compression Based on Global-Local Importance
As large language models (LLMs) continue to advance, the demand for higher quality and faster processing of long contexts across various applications is growing. KV cache is widely adopted as it stores previously generated key and value tokens, effectively reducing redundant computations during inference. However, as memory overhead becomes a significant concern, efficient compression of KV cache has gained increasing attention. Most existing methods perform compression from two perspectives: identifying important tokens and designing compression strategies. However, these approaches often produce biased distributions of important tokens due to the influence of accumulated attention scores or positional encoding. Furthermore, they overlook the sparsity and redundancy across different heads, which leads to difficulties in preserving the most effective information at the head level. To this end, we propose EMS to overcome these limitations, while achieving better KV cache compression under extreme compression ratios. Specifically, we introduce a Global-Local score that combines accumulated attention scores from both global and local KV tokens to better identify the token importance. For the compression strategy, we design an adaptive and unified Evict-then-Merge framework that accounts for the sparsity and redundancy of KV tokens across different heads. Additionally, we implement the head-wise parallel compression through a zero-class mechanism to enhance efficiency. Extensive experiments demonstrate our SOTA performance even under extreme compression ratios. EMS consistently achieves the lowest perplexity, improves scores by over 1.28 points across four LLMs on LongBench under a 256 cache budget, and preserves 95% retrieval accuracy with a cache budget less than 2% of the context length in the Needle-in-a-Haystack task.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,124
1311.2978
Authorship Attribution Using Word Network Features
In this paper, we explore a set of novel features for authorship attribution of documents. These features are derived from a word network representation of natural language text. As has been noted in previous studies, natural language tends to show complex network structure at word level, with low degrees of separation and scale-free (power law) degree distribution. There has also been work on authorship attribution that incorporates ideas from complex networks. The goal of our paper is to explore properties of these complex networks that are suitable as features for machine-learning-based authorship attribution of documents. We performed experiments on three different datasets, and obtained promising results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
28,372
2201.02503
A Review of Deep Learning Techniques for Markerless Human Motion on Synthetic Datasets
Markerless motion capture has become an active field of research in computer vision in recent years. Its extensive applications are known in a great variety of fields, including computer animation, human motion analysis, biomedical research, virtual reality, and sports science. Estimating human posture has recently gained increasing attention in the computer vision community, but due to the depth of uncertainty and the lack of synthetic datasets, it is a challenging task. Various approaches have recently been proposed to solve this problem, many of which are based on deep learning. They are primarily focused on improving the performance of existing benchmarks with significant advances, especially 2D images. Based on powerful deep learning techniques and recently collected real-world datasets, we explored a model that can predict the skeleton of an animation based solely on 2D images. Frames are generated from different real-world datasets with synthesized poses, using different body shapes ranging from simple to complex. The implementation process uses DeepLabCut on its own dataset to perform many necessary steps, then uses the input frames to train the model. The output is an animated skeleton for human movement. The composite dataset and other results are the "ground truth" of the deep model.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
274,563
1909.04801
Faster Johnson-Lindenstrauss Transforms via Kronecker Products
The Kronecker product is an important matrix operation with a wide range of applications in supporting fast linear transforms, including signal processing, graph theory, quantum computing and deep learning. In this work, we introduce a generalization of the fast Johnson-Lindenstrauss projection for embedding vectors with Kronecker product structure, the Kronecker fast Johnson-Lindenstrauss transform (KFJLT). The KFJLT reduces the embedding cost to an exponential factor of the standard fast Johnson-Lindenstrauss transform (FJLT)'s cost when applied to vectors with Kronecker structure, by avoiding explicitly forming the full Kronecker products. We prove that this computational gain comes with only a small price in embedding power: given $N = \prod_{k=1}^d n_k$, consider a finite set of $p$ points in a tensor product of $d$ constituent Euclidean spaces $\bigotimes_{k=d}^{1}\mathbb{R}^{n_k} \subset \mathbb{R}^{N}$. With high probability, a random KFJLT matrix of dimension $N \times m$ embeds the set of points up to multiplicative distortion $(1\pm \varepsilon)$ provided by $m \gtrsim \varepsilon^{-2} \cdot \log^{2d - 1} (p) \cdot \log N$. We conclude by describing a direct application of the KFJLT to the efficient solution of large-scale Kronecker-structured least squares problems for fitting the CP tensor decomposition.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
144,900
2210.03488
AlphaFold Distillation for Protein Design
Inverse protein folding, the process of designing sequences that fold into a specific 3D structure, is crucial in bio-engineering and drug discovery. Traditional methods rely on experimentally resolved structures, but these cover only a small fraction of protein sequences. Forward folding models like AlphaFold offer a potential solution by accurately predicting structures from sequences. However, these models are too slow for integration into the optimization loop of inverse folding models during training. To address this, we propose using knowledge distillation on folding model confidence metrics, such as pTM or pLDDT scores, to create a faster and end-to-end differentiable distilled model. This model can then be used as a structure consistency regularizer in training the inverse folding model. Our technique is versatile and can be applied to other design tasks, such as sequence-based protein infilling. Experimental results show that our method outperforms non-regularized baselines, yielding up to 3% improvement in sequence recovery and up to 45% improvement in protein diversity while maintaining structural consistency in generated sequences. Code is available at https://github.com/IBM/AFDistill
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
322,057
2212.09121
RIScatter: Unifying Backscatter Communication and Reconfigurable Intelligent Surface
Backscatter Communication (BackCom) nodes harvest energy from and modulate information over external carriers. Reconfigurable Intelligent Surface (RIS) adapts phase shift response to alter channel strength in specific directions. In this paper, we unify those two seemingly different technologies (and their derivatives) into one architecture called RIScatter. RIScatter is a batteryless cognitive radio that recycles ambient signal in an adaptive and customizable manner, where dispersed or co-located scatter nodes partially modulate their information and partially engineer the wireless channel. The key is to render the probability distribution of reflection states as a joint function of the information source, Channel State Information (CSI), and relative priority of coexisting links. This enables RIScatter to softly bridge BackCom and RIS; reduce to either in special cases; or evolve in a mixed form for heterogeneous traffic control and universal hardware design. We also propose a low-complexity Successive Interference Cancellation (SIC)-free receiver that exploits the properties of RIScatter. For a single-user multi-node network, we characterize the achievable primary-(total-)backscatter rate region by optimizing the input distribution at scatter nodes, the active beamforming at the Access Point (AP), and the energy decision regions at the user. Simulations demonstrate RIScatter nodes can shift between backscatter modulation and passive beamforming.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
337,009
2008.08882
BOIL: Towards Representation Change for Few-shot Learning
Model Agnostic Meta-Learning (MAML) is one of the most representative of gradient-based meta-learning algorithms. MAML learns new tasks with a few data samples using inner updates from a meta-initialization point and learns the meta-initialization parameters with outer updates. It has recently been hypothesized that representation reuse, which makes little change in efficient representations, is the dominant factor in the performance of the meta-initialized model through MAML in contrast to representation change, which causes a significant change in representations. In this study, we investigate the necessity of representation change for the ultimate goal of few-shot learning, which is solving domain-agnostic tasks. To this aim, we propose a novel meta-learning algorithm, called BOIL (Body Only update in Inner Loop), which updates only the body (extractor) of the model and freezes the head (classifier) during inner loop updates. BOIL leverages representation change rather than representation reuse. This is because feature vectors (representations) have to move quickly to their corresponding frozen head vectors. We visualize this property using cosine similarity, CKA, and empirical results without the head. BOIL empirically shows significant performance improvement over MAML, particularly on cross-domain tasks. The results imply that representation change in gradient-based meta-learning approaches is a critical component.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
192,530
cs/0606021
A simulation engine to support production scheduling using genetics-based machine learning
The ever higher complexity of manufacturing systems, continually shortening life cycles of products and their increasing variety, as well as the unstable market situation of the recent years require introducing greater flexibility and responsiveness to manufacturing processes. From this perspective, one of the critical manufacturing tasks, which traditionally attracts significant attention in both academia and the industry, but which has no satisfactory universal solution, is production scheduling. This paper proposes an approach based on genetics-based machine learning (GBML) to treat the problem of flow shop scheduling. By the approach, a set of scheduling rules is represented as an individual of genetic algorithms, and the fitness of the individual is estimated based on the makespan of the schedule generated by using the rule-set. A concept of the interactive software environment consisting of a simulator and a GBML simulation engine is introduced to support human decision-making during scheduling. A pilot study is underway to evaluate the performance of the GBML technique in comparison with other methods (such as Johnson's algorithm and simulated annealing) while completing test examples.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
539,506
2410.04458
A Comprehensive Framework for Analyzing the Convergence of Adam: Bridging the Gap with SGD
Adaptive Moment Estimation (Adam) is a cornerstone optimization algorithm in deep learning, widely recognized for its flexibility with adaptive learning rates and efficiency in handling large-scale data. However, despite its practical success, the theoretical understanding of Adam's convergence has been constrained by stringent assumptions, such as almost surely bounded stochastic gradients or uniformly bounded gradients, which are more restrictive than those typically required for analyzing stochastic gradient descent (SGD). In this paper, we introduce a novel and comprehensive framework for analyzing the convergence properties of Adam. This framework offers a versatile approach to establishing Adam's convergence. Specifically, we prove that Adam achieves asymptotic (last iterate sense) convergence in both the almost sure sense and the \(L_1\) sense under the relaxed assumptions typically used for SGD, namely \(L\)-smoothness and the ABC inequality. Meanwhile, under the same assumptions, we show that Adam attains non-asymptotic sample complexity bounds similar to those of SGD.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
495,291
2401.12497
Building Minimal and Reusable Causal State Abstractions for Reinforcement Learning
Two desiderata of reinforcement learning (RL) algorithms are the ability to learn from relatively little experience and the ability to learn policies that generalize to a range of problem specifications. In factored state spaces, one approach towards achieving both goals is to learn state abstractions, which only keep the necessary variables for learning the tasks at hand. This paper introduces Causal Bisimulation Modeling (CBM), a method that learns the causal relationships in the dynamics and reward functions for each task to derive a minimal, task-specific abstraction. CBM leverages and improves implicit modeling to train a high-fidelity causal dynamics model that can be reused for all tasks in the same environment. Empirical validation on manipulation environments and Deepmind Control Suite reveals that CBM's learned implicit dynamics models identify the underlying causal relationships and state abstractions more accurately than explicit ones. Furthermore, the derived state abstractions allow a task learner to achieve near-oracle levels of sample efficiency and outperform baselines on all tasks.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
423,404
1604.08055
Selecting the Selection
Modern saturation-based Automated Theorem Provers typically implement the superposition calculus for reasoning about first-order logic with or without equality. Practical implementations of this calculus use a variety of literal selections and term orderings to tame the growth of the search space and help steer proof search. This paper introduces the notion of lookahead selection that estimates (looks ahead) the effect on the search space of selecting a literal. There is also a case made for the use of incomplete selection functions that attempt to restrict the search space instead of satisfying some completeness criteria. Experimental evaluation in the Vampire theorem prover shows that both lookahead selection and incomplete selection significantly contribute to solving hard problems unsolvable by other methods.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
55,160
2206.09315
Knowledge Learning with Crowdsourcing: A Brief Review and Systematic Perspective
Big data have the characteristics of enormous volume, high velocity, diversity, value-sparsity, and uncertainty, which make knowledge learning from them full of challenges. With the emergence of crowdsourcing, versatile information can be obtained on-demand so that the wisdom of crowds is easily involved to facilitate the knowledge learning process. During the past thirteen years, researchers in the AI community made great efforts to remove the obstacles in the field of learning from crowds. This concentrated survey paper comprehensively reviews the technical progress in crowdsourcing learning from a systematic perspective that includes three dimensions of data, models, and learning processes. In addition to reviewing existing important work, the paper places a particular emphasis on providing some promising blueprints on each dimension as well as discussing the lessons learned from our past research work, which will light up the way for new researchers and encourage them to pursue new contributions.
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
303,515
1606.04217
Word Representation Models for Morphologically Rich Languages in Neural Machine Translation
Dealing with the complex word forms in morphologically rich languages is an open problem in language processing, and is particularly important in translation. In contrast to most modern neural systems of translation, which discard the identity for rare words, in this paper we propose several architectures for learning word representations from character and morpheme level word decompositions. We incorporate these representations in a novel machine translation model which jointly learns word alignments and translations via a hard attention mechanism. Evaluating on translating from several morphologically rich languages into English, we show consistent improvements over strong baseline methods, of between 1 and 1.5 BLEU points.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
57,216
1708.09085
Argument Strength is in the Eye of the Beholder: Audience Effects in Persuasion
Americans spend about a third of their time online, with many participating in online conversations on social and political issues. We hypothesize that social media arguments on such issues may be more engaging and persuasive than traditional media summaries, and that particular types of people may be more or less convinced by particular styles of argument, e.g. emotional arguments may resonate with some personalities while factual arguments resonate with others. We report a set of experiments testing at large scale how audience variables interact with argument style to affect the persuasiveness of an argument, an under-researched topic within natural language processing. We show that belief change is affected by personality factors, with conscientious, open and agreeable people being more convinced by emotional arguments.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
79,733
2002.03848
Time Series Alignment with Global Invariances
Multivariate time series are ubiquitous objects in signal processing. Measuring a distance or similarity between two such objects is of prime interest in a variety of applications, including machine learning, but can be very difficult as soon as the temporal dynamics and the representation of the time series, i.e. the nature of the observed quantities, differ from one another. In this work, we propose a novel distance accounting both feature space and temporal variabilities by learning a latent global transformation of the feature space together with a temporal alignment, cast as a joint optimization problem. The versatility of our framework allows for several variants depending on the invariance class at stake. Among other contributions, we define a differentiable loss for time series and present two algorithms for the computation of time series barycenters under this new geometry. We illustrate the interest of our approach on both simulated and real world data and show the robustness of our approach compared to state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
163,429
1211.6101
Design of Calibration Experiments for Identification of Manipulator Elastostatic Parameters
The paper is devoted to the elastostatic calibration of industrial robots, which is used for precise machining of large-dimensional parts made of composite materials. In this technological process, the interaction between the robot and the workpiece causes essential elastic deflections of the manipulator components that should be compensated by the robot controller using a relevant elastostatic model of this mechanism. To estimate parameters of this model, an advanced calibration technique is applied that is based on the non-linear experiment design theory, which is adopted for this particular application. In contrast to previous works, a concept of the user-defined test-pose is proposed, which is used to evaluate the calibration experiments quality. In the frame of this concept, the related optimization problem is defined and numerical routines are developed, which allow generating an optimal set of manipulator configurations and corresponding forces/torques for a given number of the calibration experiments. Some specific kinematic constraints are also taken into account, which ensure feasibility of calibration experiments for the obtained configurations and allow avoiding collision between the robotic manipulator and the measurement equipment. The efficiency of the developed technique is illustrated by an application example that deals with elastostatic calibration of the serial manipulator used for robot-based machining.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
19,955
1610.08379
Decomposition of Multi-Agent Planning under Distributed Motion and Task LTL Specifications
The aim of this work is to introduce an efficient procedure for discrete multi-agent planning under local complex temporal logic behavior specifications. While the first part of an agent's behavior specification constrains the agent's trace and is independent, the second part of the specification expresses the agent's tasks in terms of the services to be provided along the trace and may impose requests for the other agents' collaborations. To fight the extreme computational complexity of centralized multi-agent planning, we propose a two-phase automata-based solution, where we systematically decouple the planning procedure for the two types of specifications. At first, we only consider the former specifications in a fully decentralized way and we compactly represent each agent's admissible traces by abstracting away the states that are insignificant for the satisfaction of their latter specifications. Second, the synchronized planning procedure uses only the compact representations. The satisfaction of the overall specification is guaranteed by construction for each agent. An illustrative example demonstrating the practical benefits of the solution is included.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
62,918
1206.5915
Graph Based Classification Methods Using Inaccurate External Classifier Information
In this paper we consider the problem of collectively classifying entities where relational information is available across the entities. In practice inaccurate class distribution for each entity is often available from another (external) classifier. For example this distribution could come from a classifier built using content features or a simple dictionary. Given the relational and inaccurate external classifier information, we consider two graph based settings in which the problem of collective classification can be solved. In the first setting the class distribution is used to fix labels to a subset of nodes and the labels for the remaining nodes are obtained like in a transductive setting. In the other setting the class distributions of all nodes are used to define the fitting function part of a graph regularized objective function. We define a generalized objective function that handles both the settings. Methods like harmonic Gaussian field and local-global consistency (LGC) reported in the literature can be seen as special cases. We extend the LGC and weighted vote relational neighbor classification (WvRN) methods to support usage of external classifier information. We also propose an efficient least squares regularization (LSR) based method and relate it to information regularization methods. All the methods are evaluated on several benchmark and real world datasets. Considering together speed, robustness and accuracy, experimental results indicate that the LSR and WvRN-extension methods perform better than other methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
16,878
2306.08426
Patterns of Patterns II
Our earlier paper "Patterns of Patterns" combined three techniques from training, futures studies, and design in a design pattern called PLACARD that helps groups of people work together effectively. We used that pattern in five hands-on workshop case studies which took place at various locations in the US and the UK. This experience report documents what we learned, including the way our thinking about PLACARD evolved, together with additional patterns our work generated. We evaluate the reproducibility of our methods and results, and consider the broader economic implications of this way of working. We discuss implications of our prototyping work for the design of future platforms, drawing connections with recent developments in cognitive science and artificial intelligence. This positions our patterns of patterns as a toolkit for the design and governance of systems that combine social dynamics with technical components.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
373,411
2202.11797
Training Characteristic Functions with Reinforcement Learning: XAI-methods play Connect Four
One of the goals of Explainable AI (XAI) is to determine which input components were relevant for a classifier decision. This is commonly known as saliency attribution. Characteristic functions (from cooperative game theory) are able to evaluate partial inputs and form the basis for theoretically "fair" attribution methods like Shapley values. Given only a standard classifier function, it is unclear how partial input should be realised. Instead, most XAI-methods for black-box classifiers like neural networks consider counterfactual inputs that generally lie off-manifold. This makes them hard to evaluate and easy to manipulate. We propose a setup to directly train characteristic functions in the form of neural networks to play simple two-player games. We apply this to the game of Connect Four by randomly hiding colour information from our agents during training. This has three advantages for comparing XAI-methods: It alleviates the ambiguity about how to realise partial input, makes off-manifold evaluation unnecessary and allows us to compare the methods by letting them play against each other.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
282,001
2311.01937
Supermind Ideator: Exploring generative AI to support creative problem-solving
Previous efforts to support creative problem-solving have included (a) techniques (such as brainstorming and design thinking) to stimulate creative ideas, and (b) software tools to record and share these ideas. Now, generative AI technologies can suggest new ideas that might never have occurred to the users, and users can then select from these ideas or use them to stimulate even more ideas. Here, we describe such a system, Supermind Ideator. The system uses a large language model (GPT 3.5) and adds prompting, fine tuning, and a user interface specifically designed to help people use creative problem-solving techniques. Some of these techniques can be applied to any problem; others are specifically intended to help generate innovative ideas about how to design groups of people and/or computers ("superminds"). We also describe our early experiences with using this system and suggest ways it could be extended to support additional techniques for other specific problem-solving domains.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
405,235
2404.13906
Generating Attractive and Authentic Copywriting from Customer Reviews
The goal of product copywriting is to capture the interest of potential buyers by emphasizing the features of products through text descriptions. As e-commerce platforms offer a wide range of services, it's becoming essential to dynamically adjust the styles of these auto-generated descriptions. Typical approaches to copywriting generation often rely solely on specified product attributes, which may result in dull and repetitive content. To tackle this issue, we propose to generate copywriting based on customer reviews, as they provide firsthand practical experiences with products, offering a richer source of information than just product attributes. We have developed a sequence-to-sequence framework, enhanced with reinforcement learning, to produce copywriting that is attractive, authentic, and rich in information. Our framework outperforms all existing baseline and zero-shot large language models, including LLaMA-2-chat-7B and GPT-3.5, in terms of both attractiveness and faithfulness. Furthermore, this work features the use of LLMs for aspect-based summaries collection and argument allure assessment. Experiments demonstrate the effectiveness of using LLMs for marketing domain corpus construction. The code and the dataset is publicly available at: https://github.com/YuXiangLin1234/Copywriting-Generation.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
448,495
2010.04227
A low discrepancy sequence on graphs
Many applications such as election forecasting, environmental monitoring, health policy, and graph based machine learning require taking expectation of functions defined on the vertices of a graph. We describe a construction of a sampling scheme analogous to the so called Leja points in complex potential theory that can be proved to give low discrepancy estimates for the approximation of the expected value by the empirical expected value based on these points. In contrast to classical potential theory where the kernel is fixed and the equilibrium distribution depends upon the kernel, we fix a probability distribution and construct a kernel (which represents the graph structure) for which the equilibrium distribution is the given probability distribution. Our estimates do not depend upon the size of the graph.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
199,652
2002.11509
Region of Interest Identification for Brain Tumors in Magnetic Resonance Images
Glioma is a common type of brain tumor, and accurate detection of it plays a vital role in the diagnosis and treatment process. Despite advances in medical image analyzing, accurate tumor segmentation in brain magnetic resonance (MR) images remains a challenge due to variations in tumor texture, position, and shape. In this paper, we propose a fast, automated method, with light computational complexity, to find the smallest bounding box around the tumor region. This region-of-interest can be used as a preprocessing step in training networks for subregion tumor segmentation. By adopting the outputs of this algorithm, redundant information is removed; hence the network can focus on learning notable features related to subregions' classes. The proposed method has six main stages, in which the brain segmentation is the most vital step. Expectation-maximization (EM) and K-means algorithms are used for brain segmentation. The proposed method is evaluated on the BraTS 2015 dataset, and the average gained DICE score is 0.73, which is an acceptable result for this application.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
165,734
1912.07685
Pairwise Feedback for Data Programming
The scalability of the labeling process and the attainable quality of labels have become limiting factors for many applications of machine learning. The programmatic creation of labeled datasets via the synthesis of noisy heuristics provides a promising avenue to address this problem. We propose to improve modeling of latent class variables in the programmatic creation of labeled datasets by incorporating pairwise feedback into the process. We discuss the ease with which such pairwise feedback can be obtained or generated in many application domains. Our experiments show that even a small number of sources of pairwise feedback can substantially improve the quality of the posterior estimate of the latent class variable.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
157,654
1802.09933
Guaranteed Sufficient Decrease for Stochastic Variance Reduced Gradient Optimization
In this paper, we propose a novel sufficient decrease technique for stochastic variance reduced gradient descent methods such as SVRG and SAGA. In order to make sufficient decrease for stochastic optimization, we design a new sufficient decrease criterion, which yields sufficient decrease versions of stochastic variance reduction algorithms such as SVRG-SD and SAGA-SD as a byproduct. We introduce a coefficient to scale current iterate and to satisfy the sufficient decrease property, which takes the decisions to shrink, expand or even move in the opposite direction, and then give two specific update rules of the coefficient for Lasso and ridge regression. Moreover, we analyze the convergence properties of our algorithms for strongly convex problems, which show that our algorithms attain linear convergence rates. We also provide the convergence guarantees of our algorithms for non-strongly convex problems. Our experimental results further verify that our algorithms achieve significantly better performance than their counterparts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
91,425
2404.14946
StoryTTS: A Highly Expressive Text-to-Speech Dataset with Rich Textual Expressiveness Annotations
While acoustic expressiveness has long been studied in expressive text-to-speech (ETTS), the inherent expressiveness in text lacks sufficient attention, especially for ETTS of artistic works. In this paper, we introduce StoryTTS, a highly expressive TTS dataset that contains rich expressiveness both in acoustic and textual perspective, from the recording of a Mandarin storytelling show. A systematic and comprehensive labeling framework is proposed for textual expressiveness. We analyze and define speech-related textual expressiveness in StoryTTS to include five distinct dimensions through linguistics, rhetoric, etc. Then we employ large language models and prompt them with a few manual annotation examples for batch annotation. The resulting corpus contains 61 hours of consecutive and highly prosodic speech equipped with accurate text transcriptions and rich textual expressiveness annotations. Therefore, StoryTTS can aid future ETTS research to fully mine the abundant intrinsic textual and acoustic features. Experiments are conducted to validate that TTS models can generate speech with improved expressiveness when integrating with the annotated textual labels in StoryTTS.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
448,878
1909.05646
Magnetically actuated artificial microswimmers as mobile microparticle manipulators
Micro-scale swimming robots have been envisaged for many medical applications such as targeted drug delivery, where the microrobot will be expected to navigate in a fluid through channels carrying a payload. Alternatively, in many cases, such a payload does not have to be physically bound to the swimmer, but may be instead manipulated and steered through the channel by the microrobot. We investigate this problem of contactless manipulation of a microparticle by mobile microswimmer in a fluid at low Reynolds number. We consider a model of a magnetically actuated artificial microswimmer, whose locomotion through a fluid induces a disturbance velocity field in the fluid, that then acts to propel a cargo particle in its vicinity. The problem investigated in this paper is therefore one of coupled locomotion-manipulation of two bodies in a fluid. The magnetic swimmer's motion is actuated by an externally applied magnetic field of constant strength but whose direction rotates at a constant rate in a plane. The swimmer propels itself in the direction perpendicular to this plane if the frequency associated with the periodic magnetic field is above a critical frequency. Below this critical frequency, the swimmer tumbles in place without net locomotion. The coupled fluid-swimmer-cargo particle dynamics are solved numerically using the method of Stokesian dynamics. The induced motion of the cargo particle is shown to be controllable. This is achieved by switching the planes of rotation of the magnetic field and switching frequency of the magnetic field above and below the critical frequency. While a swimmer with a specific geometry has been used in the model, the results of this paper are applicable to swimmers with other geometries and means of propulsion. The results of this paper show that microswimmers can be utilized as mobile manipulators of microparticles in a fluid.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
145,153
1905.01253
Network interpolation
Given a set of snapshots from a temporal network we develop, analyze, and experimentally validate a so-called network interpolation scheme. Our method allows us to build a plausible, albeit random, sequence of graphs that transition between any two given graphs. Importantly, our model is well characterized by a Markov chain, and we leverage this representation to analytically estimate the hitting time (to a predefined distance to the target graph) and long term behavior of our model. These observations also serve to provide interpretation and justification for a rate parameter in our model. Lastly, through a mix of synthetic and real-world data experiments we demonstrate that our model builds reasonable graph trajectories between snapshots, as measured through various graph statistics. In these experiments, we find that our interpolation scheme compares favorably to common network growth models, such as preferential attachment and triadic closure.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
129,671
2303.06180
Optimizing Federated Learning for Medical Image Classification on Distributed Non-iid Datasets with Partial Labels
Numerous large-scale chest x-ray datasets have spearheaded expert-level detection of abnormalities using deep learning. However, these datasets focus on detecting a subset of disease labels that could be present, thus making them distributed and non-iid with partial labels. Recent literature has indicated the impact of batch normalization layers on the convergence of federated learning due to domain shift associated with non-iid data with partial labels. To that end, we propose FedFBN, a federated learning framework that draws inspiration from transfer learning by using pretrained networks as the model backend and freezing the batch normalization layers throughout the training process. We evaluate FedFBN with current FL strategies using synthetic iid toy datasets and large-scale non-iid datasets across scenarios with partial and complete labels. Our results demonstrate that FedFBN outperforms current aggregation strategies for training global models using distributed and non-iid data with partial labels.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
350,728
2312.16845
Evaluating the Performance of Large Language Models for Spanish Language in Undergraduate Admissions Exams
This study evaluates the performance of large language models, specifically GPT-3.5 and BARD (supported by Gemini Pro model), in undergraduate admissions exams proposed by the National Polytechnic Institute in Mexico. The exams cover Engineering/Mathematical and Physical Sciences, Biological and Medical Sciences, and Social and Administrative Sciences. Both models demonstrated proficiency, exceeding the minimum acceptance scores for the respective academic programs and reaching up to 75% for some of them. GPT-3.5 outperformed BARD in Mathematics and Physics, while BARD performed better in History and questions related to factual information. Overall, GPT-3.5 marginally surpassed BARD with scores of 60.94% and 60.42%, respectively.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
418,536
1609.07588
Modeling and simulation of non-linear and hysteresis behavior of magneto-rheological dampers in the example of quarter-car model
This paper presents reviews of several models and numerical simulation models of non-linear and hysteresis behaviors of magneto-rheological liquid dampers in MATLAB/Simulink in the example of quarter-car model of vehicle suspension simulation, such as, Bingham, Dahl, LuGre and Bouc-Wen models. In addition, it demonstrates numerical simulation models built in MATLAB/Simulink and discusses results from numerical simulation models for two different input excitations from terrain.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
61,458
2408.08926
Cybench: A Framework for Evaluating Cybersecurity Capabilities and Risks of Language Models
Language Model (LM) agents for cybersecurity that are capable of autonomously identifying vulnerabilities and executing exploits have potential to cause real-world impact. Policymakers, model providers, and researchers in the AI and cybersecurity communities are interested in quantifying the capabilities of such agents to help mitigate cyberrisk and investigate opportunities for penetration testing. Toward that end, we introduce Cybench, a framework for specifying cybersecurity tasks and evaluating agents on those tasks. We include 40 professional-level Capture the Flag (CTF) tasks from 4 distinct CTF competitions, chosen to be recent, meaningful, and spanning a wide range of difficulties. Each task includes its own description, starter files, and is initialized in an environment where an agent can execute commands and observe outputs. Since many tasks are beyond the capabilities of existing LM agents, we introduce subtasks for each task, which break down a task into intermediary steps for a more detailed evaluation. To evaluate agent capabilities, we construct a cybersecurity agent and evaluate 8 models: GPT-4o, OpenAI o1-preview, Claude 3 Opus, Claude 3.5 Sonnet, Mixtral 8x22b Instruct, Gemini 1.5 Pro, Llama 3 70B Chat, and Llama 3.1 405B Instruct. For the top performing models (GPT-4o and Claude 3.5 Sonnet), we further investigate performance across 4 agent scaffolds (structured bash, action-only, pseudoterminal, and web search). Without subtask guidance, agents leveraging Claude 3.5 Sonnet, GPT-4o, OpenAI o1-preview, and Claude 3 Opus successfully solved complete tasks that took human teams up to 11 minutes to solve. In comparison, the most difficult task took human teams 24 hours and 54 minutes to solve. All code and data are publicly available at https://cybench.github.io.
false
false
false
false
true
false
true
false
true
false
false
false
true
true
false
false
false
false
481,216
cs/0612133
Tales of Huffman
We study the new problem of Huffman-like codes subject to individual restrictions on the code-word lengths of a subset of the source words. These are prefix codes with minimal expected code-word length for a random source where additionally the code-word lengths of a subset of the source words is prescribed, possibly differently for every such source word. Based on a structural analysis of properties of optimal solutions, we construct an efficient dynamic programming algorithm for this problem, and for an integer programming problem that may be of independent interest.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
540,000
2112.02373
3rd Place: A Global and Local Dual Retrieval Solution to Facebook AI Image Similarity Challenge
As a basic task of computer vision, image similarity retrieval is facing the challenge of large-scale data and image copy attacks. This paper presents our 3rd place solution to the matching track of Image Similarity Challenge (ISC) 2021 organized by Facebook AI. We propose a multi-branch retrieval method of combining global descriptors and local descriptors to cover all attack cases. Specifically, we attempt many strategies to optimize global descriptors, including abundant data augmentations, self-supervised learning with a single Transformer model, overlay detection preprocessing. Moreover, we introduce the robust SIFT feature and GPU Faiss for local retrieval which makes up for the shortcomings of the global retrieval. Finally, KNN-matching algorithm is used to judge the match and merge scores. We show some ablation experiments of our method, which reveals the complementary advantages of global and local features.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
269,827
2402.08790
Improving Molecule Generation and Drug Discovery with a Knowledge-enhanced Generative Model
Recent advancements in generative models have established state-of-the-art benchmarks in the generation of molecules and novel drug candidates. Despite these successes, a significant gap persists between generative models and the utilization of extensive biomedical knowledge, often systematized within knowledge graphs, whose potential to inform and enhance generative processes has not been realized. In this paper, we present a novel approach that bridges this divide by developing a framework for knowledge-enhanced generative models called KARL. We develop a scalable methodology to extend the functionality of knowledge graphs while preserving semantic integrity, and incorporate this contextual information into a generative framework to guide a diffusion-based model. The integration of knowledge graph embeddings with our generative model furnishes a robust mechanism for producing novel drug candidates possessing specific characteristics while ensuring validity and synthesizability. KARL outperforms state-of-the-art generative models on both unconditional and targeted generation tasks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
429,239
1210.0293
Feedback Interference Alignment: Exact Alignment for Three Users in Two Time Slots
We study the three-user interference channel where each transmitter has local feedback of the signal from its targeted receiver. We show that in the important case where the channel coefficients are static, exact alignment can be achieved over two time slots using linear schemes. This is in contrast with the interference channel without feedback, where it seems that either an infinite number of channel extensions or infinite precision is required for exact alignment. We also demonstrate, via simulations, that our scheme significantly outperforms time-sharing even at finite SNR.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
18,865
2408.10861
DVRP-MHSI: Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction
In recent years, there has been a significant amount of research on algorithms and control methods for distributed collaborative robots. However, the emergence of collective behavior in a swarm is still difficult to predict and control. Nevertheless, human interaction with the swarm helps render the swarm more predictable and controllable, as human operators can utilize intuition or knowledge that is not always available to the swarm. Therefore, this paper designs the Dynamic Visualization Research Platform for Multimodal Human-Swarm Interaction (DVRP-MHSI), which is an innovative open system that can perform real-time dynamic visualization and is specifically designed to accommodate a multitude of interaction modalities (such as brain-computer, eye-tracking, electromyographic, and touch-based interfaces), thereby expediting progress in human-swarm interaction research. Specifically, the platform consists of custom-made low-cost omnidirectional wheeled mobile robots, multitouch screens and two workstations. In particular, the multitouch screens can recognize human gestures and the shapes of objects placed on them, and they can also dynamically render diverse scenes. One of the workstations processes communication information within robots and the other one implements human-robot interaction methods. The development of DVRP-MHSI frees researchers from hardware or software details and allows them to focus on versatile swarm algorithms and human-swarm interaction methods without being limited to fixed scenarios, tasks, and interfaces. The effectiveness and potential of the platform for human-swarm interaction studies are validated by several demonstrative experiments.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
482,055
2501.16825
Can Transformers Learn Full Bayesian Inference in Context?
Transformers have emerged as the dominant architecture in the field of deep learning, with a broad range of applications and remarkable in-context learning (ICL) capabilities. While not yet fully understood, ICL has already proved to be an intriguing phenomenon, allowing transformers to learn in context -- without requiring further training. In this paper, we further advance the understanding of ICL by demonstrating that transformers can perform full Bayesian inference for commonly used statistical models in context. More specifically, we introduce a general framework that builds on ideas from prior fitted networks and continuous normalizing flows which enables us to infer complex posterior distributions for methods such as generalized linear models and latent factor models. Extensive experiments on real-world datasets demonstrate that our ICL approach yields posterior samples that are similar in quality to state-of-the-art MCMC or variational inference methods not operating in context.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
528,121
2407.19421
Improved physics-informed neural network in mitigating gradient related failures
Physics-informed neural networks (PINNs) integrate fundamental physical principles with advanced data-driven techniques, driving significant advancements in scientific computing. However, PINNs face persistent challenges with stiffness in gradient flow, which limits their predictive capabilities. This paper presents an improved PINN (I-PINN) to mitigate gradient-related failures. The core of I-PINN is to combine the respective strengths of neural networks with an improved architecture and adaptive weights containing upper bounds. The capability to enhance accuracy by at least one order of magnitude and accelerate convergence, without introducing extra computational complexity relative to the baseline model, is achieved by I-PINN. Numerical experiments with a variety of benchmarks illustrate the improved accuracy and generalization of I-PINN. The supporting data and code are accessible at https://github.com/PanChengN/I-PINN.git, enabling broader research engagement.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
476,778
2402.11637
Poisoning Federated Recommender Systems with Fake Users
Federated recommendation is a prominent use case within federated learning, yet it remains susceptible to various attacks, from user to server-side vulnerabilities. Poisoning attacks are particularly notable among user-side attacks, as participants upload malicious model updates to deceive the global model, often intending to promote or demote specific targeted items. This study investigates strategies for executing promotion attacks in federated recommender systems. Current poisoning attacks on federated recommender systems often rely on additional information, such as the local training data of genuine users or item popularity. However, such information is challenging for the potential attacker to obtain. Thus, there is a need to develop an attack that requires no extra information apart from item embeddings obtained from the server. In this paper, we introduce a novel fake user based poisoning attack named PoisonFRS to promote the attacker-chosen targeted item in federated recommender systems without requiring knowledge about user-item rating data, user attributes, or the aggregation rule used by the server. Extensive experiments on multiple real-world datasets demonstrate that PoisonFRS can effectively promote the attacker-chosen targeted item to a large portion of genuine users and outperform current benchmarks that rely on additional information about the system. We further observe that the model updates from both genuine and fake users are indistinguishable within the latent space.
false
false
false
false
false
true
true
false
false
false
false
false
true
false
false
false
false
false
430,488