| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2501.03941 | Synthetic Data Privacy Metrics | Recent advancements in generative AI have made it possible to create synthetic datasets that can be as accurate as real-world data for training AI models, powering statistical insights, and fostering collaboration with sensitive datasets while offering strong privacy guarantees. Effectively measuring the empirical privacy of synthetic data is an important step in the process. However, while there is a multitude of new privacy metrics being published every day, there currently is no standardization. In this paper, we review the pros and cons of popular metrics that include simulations of adversarial attacks. We also review current best practices for amending generative models to enhance the privacy of the data they create (e.g. differential privacy). | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 523,050 |
2111.05265 | High-order joint embedding for multi-level link prediction | Link prediction infers potential links from observed networks, and is one of the essential problems in network analyses. In contrast to traditional graph representation modeling which only predicts two-way pairwise relations, we propose a novel tensor-based joint network embedding approach on simultaneously encoding pairwise links and hyperlinks onto a latent space, which captures the dependency between pairwise and multi-way links in inferring potential unobserved hyperlinks. The major advantage of the proposed embedding procedure is that it incorporates both the pairwise relationships and subgroup-wise structure among nodes to capture richer network information. In addition, the proposed method introduces a hierarchical dependency among links to infer potential hyperlinks, and leads to better link prediction. In theory we establish the estimation consistency for the proposed embedding approach, and provide a faster convergence rate compared to link prediction utilizing pairwise links or hyperlinks only. Numerical studies on both simulation settings and Facebook ego-networks indicate that the proposed method improves both hyperlink and pairwise link prediction accuracy compared to existing link prediction algorithms. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,739 |
2402.07927 | A Systematic Survey of Prompt Engineering in Large Language Models: Techniques and Applications | Prompt engineering has emerged as an indispensable technique for extending the capabilities of large language models (LLMs) and vision-language models (VLMs). This approach leverages task-specific instructions, known as prompts, to enhance model efficacy without modifying the core model parameters. Rather than updating the model parameters, prompts allow seamless integration of pre-trained models into downstream tasks by eliciting desired model behaviors solely based on the given prompt. Prompts can be natural language instructions that provide context to guide the model or learned vector representations that activate relevant knowledge. This burgeoning field has enabled success across various applications, from question-answering to commonsense reasoning. However, there remains a lack of systematic organization and understanding of the diverse prompt engineering methods and techniques. This survey paper addresses the gap by providing a structured overview of recent advancements in prompt engineering, categorized by application area. For each prompting approach, we provide a summary detailing the prompting methodology, its applications, the models involved, and the datasets utilized. We also delve into the strengths and limitations of each approach and include a taxonomy diagram and table summarizing datasets, models, and critical points of each prompting technique. This systematic analysis enables a better understanding of this rapidly developing field and facilitates future research by illuminating open challenges and opportunities for prompt engineering. | true | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 428,901 |
2308.12962 | Motion-Guided Masking for Spatiotemporal Representation Learning | Several recent works have directly extended the image masked autoencoder (MAE) with random masking into video domain, achieving promising results. However, unlike images, both spatial and temporal information are important for video understanding. This suggests that the random masking strategy that is inherited from the image MAE is less effective for video MAE. This motivates the design of a novel masking algorithm that can more efficiently make use of video saliency. Specifically, we propose a motion-guided masking algorithm (MGM) which leverages motion vectors to guide the position of each mask over time. Crucially, these motion-based correspondences can be directly obtained from information stored in the compressed format of the video, which makes our method efficient and scalable. On two challenging large-scale video benchmarks (Kinetics-400 and Something-Something V2), we equip video MAE with our MGM and achieve up to +$1.3\%$ improvement compared to previous state-of-the-art methods. Additionally, our MGM achieves equivalent performance to previous video MAE using up to $66\%$ fewer training epochs. Lastly, we show that MGM generalizes better to downstream transfer learning and domain adaptation tasks on the UCF101, HMDB51, and Diving48 datasets, achieving up to +$4.9\%$ improvement compared to baseline methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 387,734 |
2302.13399 | Path Integral Based Convolution and Pooling for Heterogeneous Graph Neural Networks | Graph neural networks (GNNs) extend deep learning to graph-structured datasets. Similar to Convolutional Neural Networks (CNNs) used in image prediction, convolutional and pooling layers are the foundation of GNN success on graph prediction tasks. The initial PAN paper uses path integral based graph neural networks for graph prediction. Specifically, it uses a convolution operation that involves every path linking the message sender and receiver, with learnable weights depending on the path length, which corresponds to the maximal entropy random walk. It further generalizes this convolution operation to a new transition matrix called the maximal entropy transition (MET) matrix. Because the diagonal entries of the MET matrix are directly related to subgraph centrality, it provides a trial mechanism for pooling based on centrality scores. While the initial PAN paper only considers node features, we further extend its capability to handle complex heterogeneous graphs including both node and edge features. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,932 |
2101.03133 | Infections Forecasting and Intervention Effect Evaluation for COVID-19 via a Data-Driven Markov Process and Heterogeneous Simulation | The Coronavirus Disease 2019 (COVID-19) pandemic has caused a tremendous number of deaths and had a devastating impact on economic development all over the world. Thus, it is paramount to control its further transmission, for which purpose it is necessary to find the mechanism of its transmission process and evaluate the effect of different control strategies. To deal with these issues, we describe the transmission of COVID-19 as an explosive Markov process with four parameters. The state transitions of the proposed Markov process can clearly disclose the terrible explosion and complex heterogeneity of COVID-19. Based on this, we further propose a simulation approach with heterogeneous infections. Experiments show that our approach can closely track the real transmission process of COVID-19, disclose its transmission mechanism, and forecast the transmission under different non-drug intervention strategies. More importantly, our approach can help develop effective strategies for controlling COVID-19 and appropriately compare their control effects in different countries/cities. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 214,824 |
2405.00041 | A theory of best choice selection through objective arguments grounded in Linear Response Theory concepts | In this paper, we propose using objective arguments grounded in statistical mechanics concepts to obtain a single number, after aggregation, which allows ranking "agents", "opinions", etc., all defined in a very broad sense. We aim toward any process which should a priori demand or lead to some consensus in order to attain the presumably best choice among many possibilities. To make the framework precise, we discuss previous attempts, recalling trivial "means of scores" (weighted or not), the Condorcet paradox, TOPSIS, etc. We demonstrate through geometrical arguments on a toy example, with 4 criteria, that the pre-selected order of criteria in previous attempts makes a difference in the final result. However, it might be unjustified. Thus, we base our "best choice theory" on linear response theory in statistical mechanics: we indicate that one should calculate correlation functions between all possible choice evaluations, thereby avoiding an arbitrarily ordered set of criteria. We justify the point through an example with 6 possible criteria. Applications in many fields are suggested. Besides, two toy models serving as practical examples and illustrative arguments are given in an Appendix. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 450,779 |
1408.4700 | Constructive Multiuser Interference in Symbol Level Precoding for the MISO Downlink Channel | This paper investigates the problem of interference among simultaneous multiuser transmissions in the downlink of multiple antenna systems. Using symbol level precoding, a new approach towards multiuser interference is discussed throughout this paper. The concept of exploiting the interference between the spatial multiuser transmissions by jointly utilizing the data information (DI) and channel state information (CSI), in order to design symbol-level precoders, is proposed. In this direction, the interference among the data streams is transformed under certain conditions into a useful signal that can improve the signal to interference noise ratio (SINR) of the downlink transmissions. We propose a maximum ratio transmission (MRT) based algorithm that jointly exploits DI and CSI to glean the benefits from constructive multiuser interference. Subsequently, a relation between the constructive interference downlink transmission and physical layer multicasting is established. In this context, novel constructive interference precoding techniques that tackle the transmit power minimization (min power) with individual SINR constraints at each user's receiver are proposed. Furthermore, fairness through maximizing the weighted minimum SINR (max min SINR) of the users is addressed by finding the link between the min power and max min SINR problems. Moreover, heuristic precoding techniques are proposed to tackle the weighted sum rate problem. Finally, extensive numerical results show that the proposed schemes outperform other state-of-the-art techniques. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 35,481 |
2010.02089 | CopulaGNN: Towards Integrating Representational and Correlational Roles of Graphs in Graph Neural Networks | Graph-structured data are ubiquitous. However, graphs encode diverse types of information and thus play different roles in data representation. In this paper, we distinguish the \textit{representational} and the \textit{correlational} roles played by the graphs in node-level prediction tasks, and we investigate how Graph Neural Network (GNN) models can effectively leverage both types of information. Conceptually, the representational information provides guidance for the model to construct better node features; while the correlational information indicates the correlation between node outcomes conditional on node features. Through a simulation study, we find that many popular GNN models are incapable of effectively utilizing the correlational information. By leveraging the idea of the copula, a principled way to describe the dependence among multivariate random variables, we offer a general solution. The proposed Copula Graph Neural Network (CopulaGNN) can take a wide range of GNN models as base models and utilize both representational and correlational information stored in the graphs. Experimental results on two types of regression tasks verify the effectiveness of the proposed method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 198,904 |
2109.00126 | Online Dynamic Window (ODW) Assisted Two-stage LSTM Frameworks for Indoor Localization | Internet of Things (IoT)-based indoor localization has gained significant popularity recently to satisfy the ever-increasing requirements of indoor Location-based Services (LBS). In this context, Inertial Measurement Unit (IMU)-based localization is of interest as it provides a scalable solution independent of any proprietary sensors/modules. Existing IMU-based methodologies, however, are mainly developed based on statistical heading and step length estimation techniques that suffer from cumulative error issues and have extensive computational time requirements limiting their application for real-time indoor positioning. To address the aforementioned issues, we propose the Online Dynamic Window (ODW)-assisted two-stage Long Short Term Memory (LSTM) localization framework. Three ODWs are proposed, where the first model uses a Natural Language Processing (NLP)-inspired Dynamic Window (DW) approach, which significantly reduces the required computational time. The second framework is developed based on a Signal Processing Dynamic Windowing (SP-DW) approach to further reduce the required processing time of the two-stage LSTM-based model. The third ODW, referred to as the SP-NLP, combines the first two windowing mechanisms to further improve the overall achieved accuracy. Compared to the traditional LSTM-based positioning approaches, which suffer from either high tensor computation requirements or low accuracy, the proposed ODW-assisted models can perform indoor localization in a near-real time fashion with high accuracy. The performance of the proposed ODW-assisted models is evaluated based on a real Pedestrian Dead Reckoning (PDR) dataset. The results illustrate the potential of the proposed ODW-assisted techniques in achieving high classification accuracy with significantly reduced computational time, making them applicable for near real-time implementations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 253,009 |
1711.10967 | The Block Point Process Model for Continuous-Time Event-Based Dynamic Networks | We consider the problem of analyzing timestamped relational events between a set of entities, such as messages between users of an on-line social network. Such data are often analyzed using static or discrete-time network models, which discard a significant amount of information by aggregating events over time to form network snapshots. In this paper, we introduce a block point process model (BPPM) for continuous-time event-based dynamic networks. The BPPM is inspired by the well-known stochastic block model (SBM) for static networks. We show that networks generated by the BPPM follow an SBM in the limit of a growing number of nodes. We use this property to develop principled and efficient local search and variational inference procedures initialized by regularized spectral clustering. We fit BPPMs with exponential Hawkes processes to analyze several real network data sets, including a Facebook wall post network with over 3,500 nodes and 130,000 events. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 85,708 |
1102.3603 | A Graph Theoretical Approach for Network Coding in Wireless Body Area Networks | Modern medical wireless systems, such as wireless body area networks (WBANs), are applications of wireless networks that can be used as a tool for data transmission between patients and doctors. Accuracy of data transmission is an important requirement for such systems. In this paper, we propose a WBAN which is robust against erasures and describe its properties using graph theoretic techniques. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,259 |
2409.16018 | Lattice-Based Vulnerabilities in Lee Metric Post-Quantum Cryptosystems | Post-quantum cryptography has gained attention due to the need for secure cryptographic systems in the face of quantum computing. Code-based and lattice-based cryptography are two prominent approaches, both heavily studied within the NIST standardization project. Code-based cryptography -- most prominently exemplified by the McEliece cryptosystem -- is based on the hardness of decoding random linear error-correcting codes. Despite the McEliece cryptosystem having been unbroken for several decades, it suffers from large key sizes, which has led to exploring variants using metrics other than the Hamming metric, such as the Lee metric. This alternative metric may allow for smaller key sizes, but requires further analysis for potential vulnerabilities to lattice-based attack techniques. In this paper, we consider a generic Lee metric based McEliece type cryptosystem and evaluate its security against lattice-based attacks. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 491,170 |
1203.4924 | A Flexible Channel Coding Approach for Short-Length Codewords | This letter introduces a novel channel coding design framework for short-length codewords that permits balancing the tradeoff between the bit error rate floor and waterfall region by modifying a single real-valued parameter. The proposed approach is based on combining convolutional coding with a $q$-ary linear combination and unequal energy allocation, the latter being controlled by the aforementioned parameter. EXIT charts are used to shed light on the convergence characteristics of the associated iterative decoder, which is described in terms of factor graphs. Simulation results show that the proposed scheme is able to adjust its end-to-end error rate performance efficiently and easily, in contrast to previous approaches, which require a full code redesign when the error rate requirements of the application change. Simulations also show that, at mid-range bit-error rates, there is a small performance penalty with respect to the previous approaches. However, the EXIT chart analysis and the simulation results suggest that for very low bit-error rates the proposed system will exhibit lower error floors than previous approaches. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,070 |
1907.06058 | Aggregate-Eliminate-Predict: Detecting Adverse Drug Events from Heterogeneous Electronic Health Records | We study the problem of detecting adverse drug events in electronic healthcare records. The challenge in this work is to aggregate heterogeneous data types involving diagnosis codes, drug codes, as well as lab measurements. An earlier framework proposed for the same problem demonstrated promising predictive performance for the random forest classifier by using only lab measurements as data features. We extend this framework, by additionally including diagnosis and drug prescription codes, concurrently. In addition, we employ a recursive feature selection mechanism on top, that extracts the top-k most important features. Our experimental evaluation on five medical datasets of adverse drug events and six different classifiers, suggests that the integration of these additional features provides substantial and statistically significant improvements in terms of AUC, while employing medically relevant features. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 138,515 |
1909.00556 | Phrase-Level Class based Language Model for Mandarin Smart Speaker Query Recognition | The success of speech assistants requires precise recognition of a number of entities in particular contexts. A common solution is to train a class-based n-gram language model and then expand the classes into specific words or phrases. However, when the class has a huge list, e.g., more than 20 million songs, a full expansion will cause a memory explosion. Worse still, the list items in the class need to be updated frequently, which requires a dynamic model updating technique. In this work, we propose to train pruned language models for the word classes to replace the slots in the root n-gram. We further propose to use a novel technique, named Difference Language Model (DLM), to correct the bias from the pruned language models. Once the decoding graph is built, we only need to recalculate the DLM when the entities in word classes are updated. Results show that the proposed method consistently and significantly outperforms the conventional approaches on all datasets, especially for large lists, which the conventional approaches cannot handle. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 143,667 |
1908.09487 | Stochastic dynamical modeling of turbulent flows | Advanced measurement techniques and high performance computing have made large data sets available for a wide range of turbulent flows that arise in engineering applications. Drawing on this abundance of data, dynamical models can be constructed to reproduce structural and statistical features of turbulent flows, opening the way to the design of effective model-based flow control strategies. This review describes a framework for completing second-order statistics of turbulent flows by models that are based on the Navier-Stokes equations linearized around the turbulent mean velocity. Systems theory and convex optimization are combined to address the inherent uncertainty in the dynamics and the statistics of the flow by seeking a suitable parsimonious correction to the prior linearized model. Specifically, dynamical couplings between states of the linearized model dictate structural constraints on the statistics of flow fluctuations. Thence, colored-in-time stochastic forcing that drives the linearized model is sought to account for and reconcile dynamics with available data (i.e., partially known second order statistics). The number of dynamical degrees of freedom that are directly affected by stochastic excitation is minimized as a measure of model parsimony. The spectral content of the resulting colored-in-time stochastic contribution can alternatively be seen to arise from a low-rank structural perturbation of the linearized dynamical generator, pointing to suitable dynamical corrections that may account for the absence of the nonlinear interactions in the linearized model. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 142,864 |
0909.4807 | Consensus in Correlated Random Topologies: Weights for Finite Time Horizon | We consider the weight design problem for the consensus algorithm under a finite time horizon. We assume that the underlying network is random, where the links fail at each iteration with certain probability and the link failures can be spatially correlated. We formulate a family of weight design criteria (objective functions) that minimize the n, n = 1,...,N (out of N possible) largest (slowest) eigenvalues of the matrix that describes the mean squared consensus error dynamics. We show that the objective functions are convex; hence, globally optimal weights (with respect to the design criteria) can be efficiently obtained. Numerical examples on large scale, sparse random networks with spatially correlated link failures show that: 1) weights obtained according to our criteria lead to significantly faster convergence than the choices available in the literature; 2) different design criteria, corresponding to different n, exhibit very interesting tradeoffs: faster transient performance leads to slower long-run performance and vice versa. Thus, n is a valuable degree of freedom and can be appropriately selected for the given time horizon. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,573 |
2005.09727 | Ventral-Dorsal Neural Networks: Object Detection via Selective Attention | Deep Convolutional Neural Networks (CNNs) have been repeatedly proven to perform well on image classification tasks. Object detection methods, however, are still in need of significant improvements. In this paper, we propose a new framework called Ventral-Dorsal Networks (VDNets) which is inspired by the structure of the human visual system. Roughly, the visual input signal is analyzed along two separate neural streams, one in the temporal lobe and the other in the parietal lobe. The coarse functional distinction between these streams is between object recognition -- the "what" of the signal -- and extracting location related information -- the "where" of the signal. The ventral pathway from primary visual cortex, entering the temporal lobe, is dominated by "what" information, while the dorsal pathway, into the parietal lobe, is dominated by "where" information. Inspired by this structure, we propose the integration of a "Ventral Network" and a "Dorsal Network", which are complementary. Information about object identity can guide localization, and location information can guide attention to relevant image regions, improving object recognition. This new dual network framework sharpens the focus of object detection. Our experimental results reveal that the proposed method outperforms state-of-the-art object detection approaches on PASCAL VOC 2007 by 8% (mAP) and PASCAL VOC 2012 by 3% (mAP). Moreover, a comparison of techniques on Yearbook images displays substantial qualitative and quantitative benefits of VDNet. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 177,986 |
2201.06763 | Online Time Series Anomaly Detection with State Space Gaussian Processes | We propose r-ssGPFA, an unsupervised online anomaly detection model for uni- and multivariate time series building on the efficient state space formulation of Gaussian processes. For high-dimensional time series, we propose an extension of Gaussian process factor analysis to identify the common latent processes of the time series, allowing us to detect anomalies efficiently in an interpretable manner. We gain explainability while speeding up computations by imposing an orthogonality constraint on the mapping from the latent to the observed. Our model's robustness is improved by using a simple heuristic to skip Kalman updates when encountering anomalous observations. We investigate the behaviour of our model on synthetic data and show on standard benchmark datasets that our method is competitive with state-of-the-art methods while being computationally cheaper. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 275,823 |
2304.12463 | A Study on Improving Realism of Synthetic Data for Machine Learning | Synthetic-to-real data translation using generative adversarial learning has achieved significant success in improving synthetic data. Yet, limited studies focus on deep evaluation and comparison of adversarial training on general-purpose synthetic data for machine learning. This work aims to train and evaluate a synthetic-to-real generative model that transforms the synthetic renderings into more realistic styles on general-purpose datasets conditioned with unlabeled real-world data. Extensive performance evaluation and comparison have been conducted through qualitative and quantitative metrics and a defined downstream perception task. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 360,220 |
2010.03739 | 3D Convolutional Sequence to Sequence Model for Vertebral Compression Fractures Identification in CT | An osteoporosis-related fracture occurs every three seconds worldwide, affecting one in three women and one in five men aged over 50. The early detection of at-risk patients facilitates effective and well-evidenced preventative interventions, reducing the incidence of major osteoporotic fractures. In this study, we present an automatic system for identification of vertebral compression fractures on Computed Tomography images, which are often an undiagnosed precursor to major osteoporosis-related fractures. The system integrates a compact 3D representation of the spine, utilizing a Convolutional Neural Network (CNN) for spinal cord detection and a novel end-to-end sequence to sequence 3D architecture. We evaluate several model variants that exploit different representation and classification approaches and present a framework combining an ensemble of models that achieves state of the art results, validated on a large data set, with a patient-level fracture identification of 0.955 Area Under the Curve (AUC). The system proposed has the potential to support osteoporosis clinical management, improve treatment pathways, and to change the course of one of the most burdensome diseases of our generation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 199,510 |
2411.06503 | Diffusion Sampling Correction via Approximately 10 Parameters | Diffusion Probabilistic Models (DPMs) have demonstrated exceptional performance in generative tasks, but this comes at the expense of sampling efficiency. To enhance sampling speed without sacrificing quality, various distillation-based accelerated sampling algorithms have been recently proposed. However, they typically require significant additional training costs and model parameter storage, which limit their practical application. In this work, we propose PCA-based Adaptive Search (PAS), which optimizes existing solvers for DPMs with minimal learnable parameters and training costs. Specifically, we first employ PCA to obtain a few orthogonal unit basis vectors to span the high-dimensional sampling space, which enables us to learn just a set of coordinates to correct the sampling direction; furthermore, based on the observation that the cumulative truncation error exhibits an ``S''-shape, we design an adaptive search strategy that further enhances the sampling efficiency and reduces the number of stored parameters to approximately 10. Extensive experiments demonstrate that PAS can significantly enhance existing fast solvers in a plug-and-play manner with negligible costs. For instance, on CIFAR10, PAS requires only 12 parameters and less than 1 minute of training on a single NVIDIA A100 GPU to optimize the DDIM from 15.69 FID (NFE=10) to 4.37. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 507,140 |
2106.15729 | Probabilistic Control of Heterogeneous Swarms Subject to Graph Temporal
Logic Specifications: A Decentralized and Scalable Approach | We develop a probabilistic control algorithm, $\texttt{GTLProCo}$, for swarms of agents with heterogeneous dynamics and objectives, subject to high-level task specifications. The resulting algorithm not only achieves decentralized control of the swarm but also significantly improves scalability over state-of-the-art existing algorithms. Specifically, we study a setting in which the agents move along the nodes of a graph, and the high-level task specifications for the swarm are expressed in a recently-proposed language called graph temporal logic (GTL). By constraining the distribution of the swarm over the nodes of the graph, GTL can specify a wide range of properties, including safety, progress, and response. $\texttt{GTLProCo}$, agnostic to the number of agents comprising the swarm, controls the density distribution of the swarm in a decentralized and probabilistic manner. To this end, it synthesizes a time-varying Markov chain modeling the time evolution of the density distribution under the GTL constraints. We first identify a subset of GTL, namely reach-avoid specifications, for which we can reduce the synthesis of such a Markov chain to either linear or semi-definite programs. Then, in the general case, we formulate the synthesis of the Markov chain as a mixed-integer nonlinear program (MINLP). We exploit the structure of the problem to provide an efficient sequential mixed-integer linear programming scheme with trust regions to solve the MINLP. We empirically demonstrate that our sequential scheme is at least three orders of magnitude faster than off-the-shelf MINLP solvers and illustrate the effectiveness of $\texttt{GTLProCo}$ in several swarm scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | 243,851 |
2104.00101 | High-order Barrier Functions: Robustness, Safety and
Performance-Critical Control | In this paper, we propose a notion of high-order (zeroing) barrier functions that generalizes the concept of zeroing barrier functions and guarantees set forward invariance by checking their higher order derivatives. The proposed formulation guarantees asymptotic stability of the forward invariant set, which is highly favorable for robustness with respect to model perturbations. No forward completeness assumption is needed in our setting in contrast to existing high order barrier function methods. For the case of controlled dynamical systems, we relax the requirement of uniform relative degree and propose a singularity-free control scheme that yields a locally Lipschitz control signal and guarantees safety. Furthermore, the proposed formulation accounts for "performance-critical" control: it guarantees that a subset of the forward invariant set will admit any existing, bounded control law, while still ensuring forward invariance of the set. Finally, a non-trivial case study with rigid-body attitude dynamics and interconnected cell regions as the safe region is investigated. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 227,878 |
2407.12749 | HDLCopilot: Natural Language Exploration of Hardware Designs and
Libraries | Hardware design workflows often involve working with Process Design Kits (PDKs) from various fabrication labs, each containing its own set of standard cell libraries optimized for metrics such as speed, power, or density. These libraries include multiple views for information on timing and electrical properties of cells, cell layout details, and process design rules. Engineers typically navigate between the design and the target technology to make informed decisions on different design scenarios, such as selecting specific gates for area optimization or enhancing critical path speed. Navigating this complex landscape to retrieve specific information about gates or design rules is often time-consuming and error-prone. To address this, we present HDLCopilot, a multi-agent collaborative framework powered by large language models that enables engineers to streamline interactions with hardware design and PDKs through natural language queries. HDLCopilot enables engineers to quickly access relevant information on gates and design rules, evaluate tradeoffs related to area, speed, and power in order to make informed decisions efficiently and accurately. The framework achieves an execution accuracy of 96.33\% on a diverse set of complex natural language queries. HDLCopilot positions itself as a powerful assistant in hardware design workflows, enhancing productivity and reducing potential human errors. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 474,057 |
2006.09288 | Uncovering the Underlying Physics of Degrading System Behavior Through a
Deep Neural Network Framework: The Case of Remaining Useful Life Prognosis | Deep learning (DL) has become an essential tool in prognosis and health management (PHM), commonly used as a regression algorithm for the prognosis of a system's behavior. One particular metric of interest is the remaining useful life (RUL) estimated using monitoring sensor data. Most of these deep learning applications treat the algorithms as black-box functions, giving little to no control of the data interpretation. This becomes an issue if the models break the governing laws of physics or other natural sciences when no constraints are imposed. The latest research efforts have focused on applying complex DL models to achieve a low prediction error rather than studying how the models interpret the behavior of the data and the system itself. In this paper, we propose an open-box approach using a deep neural network framework to explore the physics of degradation through partial differential equations (PDEs). The framework has three stages, and it aims to discover a latent variable and corresponding PDE to represent the health state of the system. Models are trained as a supervised regression and designed to output the RUL as well as a latent variable map that can be used and interpreted as the system's health indicator. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,505 |
2205.12601 | Learning Distributions by Generative Adversarial Networks: Approximation
and Generalization | We study how well generative adversarial networks (GANs) learn probability distributions from finite samples by analyzing the convergence rates of these models. Our analysis is based on a new oracle inequality that decomposes the estimation error of a GAN into the discriminator and generator approximation errors, generalization error and optimization error. To estimate the discriminator approximation error, we establish error bounds on approximating H\"older functions by ReLU neural networks, with explicit upper bounds on the Lipschitz constant of the network or norm constraint on the weights. For the generator approximation error, we show that a neural network can approximately transform a low-dimensional source distribution to a high-dimensional target distribution and bound such approximation error by the width and depth of the network. Combining the approximation results with generalization bounds of neural networks from statistical learning theory, we establish the convergence rates of GANs in various settings, when the error is measured by a collection of integral probability metrics defined through H\"older classes, including the Wasserstein distance as a special case. In particular, for distributions concentrated around a low-dimensional set, we show that the convergence rates of GANs do not depend on the high ambient dimension, but on the lower intrinsic dimension. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 298,623
2002.10268 | Using Machine Learning to predict extreme events in the H\'enon map | Machine Learning (ML) inspired algorithms provide a flexible set of tools for analyzing and forecasting chaotic dynamical systems. We here analyze the performance of one algorithm for the prediction of extreme events in the two-dimensional H\'enon map at the classical parameters. The task is to determine whether a trajectory will exceed a threshold after a set number of time steps into the future. This task has a geometric interpretation within the dynamics of the H\'enon map, which we use to gauge the performance of the neural networks that are used in this work. We analyze the dependence of the success rate of the ML models on the prediction time $T$ , the number of training samples $N_T$ and the size of the network $N_p$. We observe that in order to maintain a certain accuracy, $N_T \propto exp(2 h T)$ and $N_p \propto exp(hT)$, where $h$ is the topological entropy. Similar relations between the intrinsic chaotic properties of the dynamics and ML parameters might be observable in other systems as well. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 165,348 |
1807.01909 | Multi-robot Path Planning in Well-formed Infrastructures: Prioritized
Planning vs. Prioritized Wait Adjustment (Preliminary Results) | We study the problem of planning collision-free paths for a group of homogeneous robots. We propose a novel approach for turning the paths that were planned egocentrically by the robots, e.g. without taking other robots' moves into account, into collision-free trajectories and evaluate it empirically. The suggested algorithm is much faster (up to one order of magnitude) than the state of the art, but this comes at the price of a notable increase in solution cost. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | true | false | false | false | 102,157
1806.08550 | Multivariable Iterative Learning Control Design Procedures: from
Decentralized to Centralized, Illustrated on an Industrial Printer | Iterative Learning Control (ILC) enables high control performance through learning from measured data, using only limited model knowledge in the form of a nominal parametric model. Robust stability requires robustness to modeling errors, often due to deliberate undermodeling. The aim of this paper is to develop a range of approaches for multivariable ILC, where specific attention is given to addressing interaction. The proposed methods either address the interaction in the nominal model, or as uncertainty, i.e., through robust stability. The result is a range of techniques, including the use of the structured singular value (SSV) and Gershgorin bounds, that provide a different trade-off between modeling requirements, i.e., modeling effort and cost, and achievable performance. This allows control engineers to select the approach that fits the modeling budget and control requirements. This trade-off is demonstrated in a case study on an industrial flatbed printer. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 101,175 |
2004.01343 | Efficient UAV Physical Layer Security based on Deep Learning and
Artificial Noise | Network-connected unmanned aerial vehicle (UAV) communication is a common solution for achieving high-rate image transmission. The broadcast nature of these wireless networks makes this communication vulnerable to eavesdropping. This paper considers the problem of compressed secret image transmission between two nodes in the presence of a passive eavesdropper. We use autoencoder/decoder convolutional neural networks, which, by using deep learning algorithms, allow us to compress/decompress images. We also use network physical-layer features to generate high-rate artificial noise to secure the data. Using channel features together with the applied artificial noise reduces the channel capacity of unauthorized users and prevents the eavesdropper from detecting the received data. Our simulation experiments show that for received data with an SNR below 5 at the authorized node, the MSE is less than 0.05. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 170,883
2203.04928 | DISCO: Comprehensive and Explainable Disinformation Detection | Disinformation refers to false information deliberately spread to influence the general public, and the negative impact of disinformation on society can be observed in numerous issues, such as political agendas and manipulating financial markets. In this paper, we identify prevalent challenges and advances related to automated disinformation detection from multiple aspects and propose a comprehensive and explainable disinformation detection framework called DISCO. It leverages the heterogeneity of disinformation and addresses the opaqueness of prediction. Then we provide a demonstration of DISCO on a real-world fake news detection task with satisfactory detection accuracy and explanation. The demo video and source code of DISCO are now publicly available at https://github.com/DongqiFu/DISCO. We expect that our demo could pave the way for addressing the limitations of identification, comprehension, and explainability as a whole. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 284,654
2404.01114 | A CRISP-DM-based Methodology for Assessing Agent-based Simulation Models
using Process Mining | Agent-based simulation (ABS) models are potent tools for analyzing complex systems. However, understanding and validating ABS models can be a significant challenge. To address this challenge, cutting-edge data-driven techniques offer sophisticated capabilities for analyzing the outcomes of ABS models. One such technique is process mining, which encompasses a range of methods for discovering, monitoring, and enhancing processes by extracting knowledge from event logs. However, applying process mining to event logs derived from ABSs is not trivial, and deriving meaningful insights from the resulting process models adds an additional layer of complexity. Although process mining is invaluable in extracting insights from ABS models, there is a lack of comprehensive methodological guidance for its application in ABS evaluation in the research landscape. In this paper, we propose a methodology, based on the CRoss-Industry Standard Process for Data Mining (CRISP-DM) methodology, to assess ABS models using process mining techniques. We incorporate process mining techniques into the stages of the CRISP-DM methodology, facilitating the analysis of ABS model behaviors and their underlying processes. We demonstrate our methodology using an established agent-based model, Schelling model of segregation. Our results show that our proposed methodology can effectively assess ABS models through produced event logs, potentially paving the way for enhanced agent-based model validity and more insightful decision-making. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 443,257 |
2305.09659 | Double Pessimism is Provably Efficient for Distributionally Robust
Offline Reinforcement Learning: Generic Algorithm and Robust Partial Coverage | In this paper, we study distributionally robust offline reinforcement learning (robust offline RL), which seeks to find an optimal policy purely from an offline dataset that can perform well in perturbed environments. In specific, we propose a generic algorithm framework called Doubly Pessimistic Model-based Policy Optimization ($P^2MPO$), which features a novel combination of a flexible model estimation subroutine and a doubly pessimistic policy optimization step. Notably, the double pessimism principle is crucial to overcome the distributional shifts incurred by (i) the mismatch between the behavior policy and the target policies; and (ii) the perturbation of the nominal model. Under certain accuracy conditions on the model estimation subroutine, we prove that $P^2MPO$ is sample-efficient with robust partial coverage data, which only requires the offline data to have good coverage of the distributions induced by the optimal robust policy and the perturbed models around the nominal model. By tailoring specific model estimation subroutines for concrete examples of RMDPs, including tabular RMDPs, factored RMDPs, kernel and neural RMDPs, we prove that $P^2MPO$ enjoys a $\tilde{\mathcal{O}}(n^{-1/2})$ convergence rate, where $n$ is the dataset size. We highlight that all these examples, except tabular RMDPs, are first identified and proven tractable by this work. Furthermore, we continue our study of robust offline RL in the robust Markov games (RMGs). By extending the double pessimism principle identified for single-agent RMDPs, we propose another algorithm framework that can efficiently find the robust Nash equilibria among players using only robust unilateral (partial) coverage data. To our best knowledge, this work proposes the first general learning principle -- double pessimism -- for robust offline RL and shows that it is provably efficient with general function approximation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 364,724
2402.05132 | TexShape: Information Theoretic Sentence Embedding for Language Models | With the exponential growth in data volume and the emergence of data-intensive applications, particularly in the field of machine learning, concerns related to resource utilization, privacy, and fairness have become paramount. This paper focuses on the textual domain of data and addresses challenges regarding encoding sentences to their optimized representations through the lens of information-theory. In particular, we use empirical estimates of mutual information, using the Donsker-Varadhan definition of Kullback-Leibler divergence. Our approach leverages this estimation to train an information-theoretic sentence embedding, called TexShape, for (task-based) data compression or for filtering out sensitive information, enhancing privacy and fairness. In this study, we employ a benchmark language model for initial text representation, complemented by neural networks for information-theoretic compression and mutual information estimations. Our experiments demonstrate significant advancements in preserving maximal targeted information and minimal sensitive information over adverse compression ratios, in terms of predictive accuracy of downstream models that are trained using the compressed data. | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | 427,740 |
2203.03311 | Comprehensive Review of Deep Learning-Based 3D Point Cloud Completion
Processing and Analysis | Point cloud completion is a generation and estimation problem derived from partial point clouds, which plays a vital role in 3D computer vision applications. The progress of deep learning (DL) has impressively improved the capability and robustness of point cloud completion. However, the quality of completed point clouds still needs to be further enhanced for practical use. Therefore, this work conducts a comprehensive survey of various methods, including point-based, convolution-based, graph-based, and generative model-based approaches. This survey also summarizes comparisons among these methods to provoke further research insights. In addition, it sums up the commonly used datasets and illustrates the applications of point cloud completion. Finally, we discuss possible research trends in this rapidly expanding field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,040
2412.13542 | Multi-Granularity Open Intent Classification via Adaptive Granular-Ball
Decision Boundary | Open intent classification is critical for the development of dialogue systems, aiming to accurately classify known intents into their corresponding classes while identifying unknown intents. Prior boundary-based methods assumed known intents fit within compact spherical regions, focusing on coarse-grained representation and precise spherical decision boundaries. However, these assumptions are often violated in practical scenarios, making it difficult to distinguish known intent classes from unknowns using a single spherical boundary. To tackle these issues, we propose a Multi-granularity Open intent classification method via adaptive Granular-Ball decision boundary (MOGB). Our MOGB method consists of two modules: representation learning and decision boundary acquiring. To effectively represent the intent distribution, we design a hierarchical representation learning method. This involves iteratively alternating between adaptive granular-ball clustering and nearest sub-centroid classification to capture fine-grained semantic structures within known intent classes. Furthermore, multi-granularity decision boundaries are constructed for open intent classification by employing granular-balls with varying centroids and radii. Extensive experiments conducted on three public datasets demonstrate the effectiveness of our proposed method. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 518,332 |
2501.05651 | A Practical Cross-Layer Approach for ML-Driven Storage Placement in
Warehouse-Scale Computers | Storage systems account for a major portion of the total cost of ownership (TCO) of warehouse-scale computers, and thus have a major impact on the overall system's efficiency. Machine learning (ML)-based methods for solving key problems in storage system efficiency, such as data placement, have shown significant promise. However, there are few known practical deployments of such methods. Studying this problem in the context of real-world hyperscale data center deployments at Google, we identify a number of challenges that we believe cause this lack of practical adoption. Specifically, prior work assumes a monolithic model that resides entirely within the storage layer, an unrealistic assumption in real-world data center deployments. We propose a cross-layer approach that moves ML out of the storage system and performs it in the application running on top of it, co-designed with a scheduling algorithm at the storage layer that consumes predictions from these application-level models. This approach combines small, interpretable models with a co-designed heuristic that adapts to different online environments. We build a proof-of-concept of this approach in a production distributed computation framework at Google. Evaluations in a test deployment and large-scale simulation studies using production traces show improvements of as much as 3.47x in TCO savings compared to state of the art baselines. We believe this work represents a significant step towards more practical ML-driven storage placement in warehouse-scale computers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 523,670 |
2304.11043 | Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock
Recommendation via Split Variational Adversarial Training | In the stock market, a successful investment requires a good balance between profits and risks. Based on the learning to rank paradigm, stock recommendation has been widely studied in quantitative finance to recommend stocks with higher return ratios for investors. Despite the efforts to make profits, many existing recommendation approaches still have some limitations in risk control, which may lead to intolerable paper losses in practical stock investing. To effectively reduce risks, we draw inspiration from adversarial learning and propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation. Essentially, SVAT encourages the stock model to be sensitive to adversarial perturbations of risky stock examples and enhances the model's risk awareness by learning from perturbations. To generate representative adversarial examples as risk indicators, we devise a variational perturbation generator to model diverse risk factors. Particularly, the variational architecture enables our method to provide a rough risk quantification for investors, showing an additional advantage of interpretability. Experiments on several real-world stock market datasets demonstrate the superiority of our SVAT method. By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits. All the experimental data and source code are available at https://drive.google.com/drive/folders/14AdM7WENEvIp5x5bV3zV_i4Aev21C9g6?usp=sharing. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 359,645 |
2105.11611 | KnowSR: Knowledge Sharing among Homogeneous Agents in Multi-agent
Reinforcement Learning | Recently, deep reinforcement learning (RL) algorithms have made great progress in the multi-agent domain. However, due to the characteristics of RL, training for complex tasks is resource-intensive and time-consuming. To meet this challenge, a mutual learning strategy between homogeneous agents is essential, yet it is under-explored in previous studies because most existing methods do not consider using the knowledge of agent models. In this paper, we present an adaptation method for the majority of multi-agent reinforcement learning (MARL) algorithms, called KnowSR, which takes advantage of the differences in learning between agents. We employ the idea of knowledge distillation (KD) to share knowledge among agents and shorten the training phase. To empirically demonstrate the robustness and effectiveness of KnowSR, we performed extensive experiments on state-of-the-art MARL algorithms in collaborative and competitive scenarios. The results demonstrate that KnowSR outperforms recently reported methodologies, emphasizing the importance of the proposed knowledge sharing for MARL. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 236,760
1904.13308 | Method for Searching of an Optimal Scenario of Impact in Cognitive Maps
during Information Operations Recognition | In this paper, we consider the problem of choosing the optimal scenario of impact between nodes based on the introduced criteria for the optimality of the impact. Two criteria for the optimality of the impact, called the force of impact and the speed of implementation of the scenario, are considered. To obtain a unique solution to the problem, a multi-criteria assessment of the obtained scenarios using the Pareto principle was applied. Based on the criteria of the force of impact and the speed of implementation of the scenario, the choice of the optimal scenario of impact is justified. The results and advantages of the proposed approach in comparison with the Kosko model are presented. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 129,355
2109.00928 | Speaker-Conditioned Hierarchical Modeling for Automated Speech Scoring | Automatic Speech Scoring (ASS) is the computer-assisted evaluation of a candidate's speaking proficiency in a language. ASS systems face many challenges like open grammar, variable pronunciations, and unstructured or semi-structured content. Recent deep learning approaches have shown some promise in this domain. However, most of these approaches focus on extracting features from a single audio, making them suffer from the lack of speaker-specific context required to model such a complex task. We propose a novel deep learning technique for non-native ASS, called speaker-conditioned hierarchical modeling. In our technique, we take advantage of the fact that oral proficiency tests rate multiple responses for a candidate. We extract context vectors from these responses and feed them as additional speaker-specific context to our network to score a particular response. We compare our technique with strong baselines and find that such modeling improves the model's average performance by 6.92% (maximum = 12.86%, minimum = 4.51%). We further show both quantitative and qualitative insights into the importance of this additional context in solving the problem of ASS. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 253,287 |
1609.03675 | Deep Coevolutionary Network: Embedding User and Item Features for
Recommendation | Recommender systems often use latent features to explain the behaviors of users and capture the properties of items. As users interact with different items over time, user and item features can influence each other, evolve and co-evolve over time. The compatibility of user and item features further influences future interactions between users and items. Recently, point-process-based models have been proposed in the literature aiming to capture the temporally evolving nature of these latent features. However, these models often make strong parametric assumptions about the evolution process of the user and item latent features, which may not reflect reality and have limited power in expressing the complex and nonlinear dynamics underlying these processes. To address these limitations, we propose a novel deep coevolutionary network model (DeepCoevolve) for learning user and item features based on their interaction graph. DeepCoevolve uses a recurrent neural network (RNN) over evolving networks to define the intensity function in point processes, which allows the model to capture the complex mutual influence between users and items and the feature evolution over time. We also develop an efficient procedure for training the model parameters, and show that the learned models lead to significant improvements in recommendation and activity prediction compared to previous state-of-the-art parametric models. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 60,917
2207.00623 | Is this bug severe? A text-cum-graph based model for bug severity
prediction | Repositories of large software systems have become commonplace. This massive expansion has resulted in the emergence of various problems in these software platforms including identification of (i) bug-prone packages, (ii) critical bugs, and (iii) severity of bugs. One of the important goals would be to mine these bugs and recommend them to the developers to resolve them. The first step to this is that one has to accurately detect the extent of severity of the bugs. In this paper, we take up this task of predicting the severity of bugs in the near future. Contextualized neural models built on the text description of a bug and the user comments about the bug help to achieve reasonably good performance. Further information on how the bugs are related to each other in terms of the ways they affect packages can be summarised in the form of a graph and used along with the text to get additional benefits. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 305,822 |
1604.00466 | Automatic Annotation of Structured Facts in Images | Motivated by the application of fact-level image understanding, we present an automatic method for data collection of structured visual facts from images with captions. Example structured facts include attributed objects (e.g., <flower, red>), actions (e.g., <baby, smile>), interactions (e.g., <man, walking, dog>), and positional information (e.g., <vase, on, table>). The collected annotations are in the form of fact-image pairs (e.g.,<man, walking, dog> and an image region containing this fact). With a language approach, the proposed method is able to collect hundreds of thousands of visual fact annotations with accuracy of 83% according to human judgment. Our method automatically collected more than 380,000 visual fact annotations and more than 110,000 unique visual facts from images with captions and localized them in images in less than one day of processing time on standard CPU platforms. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 54,032 |
2406.05132 | 3D-GRAND: A Million-Scale Dataset for 3D-LLMs with Better Grounding and
Less Hallucination | The integration of language and 3D perception is crucial for developing embodied agents and robots that comprehend and interact with the physical world. While large language models (LLMs) have demonstrated impressive language understanding and generation capabilities, their adaptation to 3D environments (3D-LLMs) remains in its early stages. A primary challenge is the absence of large-scale datasets that provide dense grounding between language and 3D scenes. In this paper, we introduce 3D-GRAND, a pioneering large-scale dataset comprising 40,087 household scenes paired with 6.2 million densely-grounded scene-language instructions. Our results show that instruction tuning with 3D-GRAND significantly enhances grounding capabilities and reduces hallucinations in 3D-LLMs. As part of our contributions, we propose a comprehensive benchmark 3D-POPE to systematically evaluate hallucination in 3D-LLMs, enabling fair comparisons among future models. Our experiments highlight a scaling effect between dataset size and 3D-LLM performance, emphasizing the critical role of large-scale 3D-text datasets in advancing embodied AI research. Notably, our results demonstrate early signals for effective sim-to-real transfer, indicating that models trained on large synthetic data can perform well on real-world 3D scans. Through 3D-GRAND and 3D-POPE, we aim to equip the embodied AI community with essential resources and insights, setting the stage for more reliable and better-grounded 3D-LLMs. Project website: https://3d-grand.github.io | false | false | false | false | true | false | true | true | true | false | false | true | false | false | false | false | false | false | 462,006 |
2307.06463 | Efficiently-Verifiable Strong Uniquely Solvable Puzzles and Matrix
Multiplication | We advance the Cohn-Umans framework for developing fast matrix multiplication algorithms. We introduce, analyze, and search for a new subclass of strong uniquely solvable puzzles (SUSP), which we call simplifiable SUSPs. We show that these puzzles are efficiently verifiable, which remains an open question for general SUSPs. We also show that individual simplifiable SUSPs can achieve the same strength of bounds on the matrix multiplication exponent $\omega$ that infinite families of SUSPs can. We report on the construction, by computer search, of larger SUSPs than previously known for small width. This, combined with our tighter analysis, strengthens the upper bound on the matrix multiplication exponent from $2.66$ to $2.505$ obtainable via this computational approach, and nears the results of the handcrafted constructions of Cohn et al. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 379,073 |
1207.6512 | CSS-like Constructions of Asymmetric Quantum Codes | Asymmetric quantum error-correcting codes (AQCs) may offer some advantage over their symmetric counterparts by providing better error-correction for the more frequent error types. The well-known CSS construction of $q$-ary AQCs is extended by removing the $\F_{q}$-linearity requirement as well as the limitation on the type of inner product used. The proposed constructions are called CSS-like constructions and utilize pairs of nested subfield linear codes under one of the Euclidean, trace Euclidean, Hermitian, and trace Hermitian inner products. After establishing some theoretical foundations, best-performing CSS-like AQCs are constructed. Combining some constructions of nested pairs of classical codes and linear programming, many optimal and good pure $q$-ary CSS-like codes for $q \in {2,3,4,5,7,8,9}$ up to reasonable lengths are found. In many instances, removing the $\F_{q}$-linearity and using alternative inner products give us pure AQCs with improved parameters than relying solely on the standard CSS construction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 17,794 |
2104.03113 | Scaling Scaling Laws with Board Games | The largest experiments in machine learning now require resources far beyond the budget of all but a few institutions. Fortunately, it has recently been shown that the results of these huge experiments can often be extrapolated from the results of a sequence of far smaller, cheaper experiments. In this work, we show that not only can the extrapolation be done based on the size of the model, but on the size of the problem as well. By conducting a sequence of experiments using AlphaZero and Hex, we show that the performance achievable with a fixed amount of compute degrades predictably as the game gets larger and harder. Along with our main result, we further show that the test-time and train-time compute available to an agent can be traded off while maintaining performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 228,976 |
1804.08460 | Mixing Context Granularities for Improved Entity Linking on Question
Answering Data across Entity Categories | The first stage of every knowledge base question answering approach is to link entities in the input question. We investigate entity linking in the context of a question answering task and present a jointly optimized neural architecture for entity mention detection and entity disambiguation that models the surrounding context on different levels of granularity. We use the Wikidata knowledge base and available question answering datasets to create benchmarks for entity linking on question answering data. Our approach outperforms the previous state-of-the-art system on this data, resulting in an average 8% improvement of the final score. We further demonstrate that our model delivers a strong performance across different entity categories. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 95,773 |
2202.12607 | JParaCrawl v3.0: A Large-scale English-Japanese Parallel Corpus | Most current machine translation models are mainly trained with parallel corpora, and their translation accuracy largely depends on the quality and quantity of the corpora. Although there are billions of parallel sentences for a few language pairs, effectively dealing with most language pairs is difficult due to a lack of publicly available parallel corpora. This paper creates a large parallel corpus for English-Japanese, a language pair for which only limited resources are available, compared to such resource-rich languages as English-German. It introduces a new web-based English-Japanese parallel corpus named JParaCrawl v3.0. Our new corpus contains more than 21 million unique parallel sentence pairs, which is more than twice as many as the previous JParaCrawl v2.0 corpus. Through experiments, we empirically show how our new corpus boosts the accuracy of machine translation models on various domains. The JParaCrawl v3.0 corpus will eventually be publicly available online for research purposes. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 282,307 |
2407.13390 | GeometrySticker: Enabling Ownership Claim of Recolorized Neural Radiance
Fields | Remarkable advancements in the recolorization of Neural Radiance Fields (NeRF) have simplified the process of modifying NeRF's color attributes. Yet, with the potential of NeRF to serve as shareable digital assets, there's a concern that malicious users might alter the color of NeRF models and falsely claim the recolorized version as their own. To safeguard against such breaches of ownership, enabling original NeRF creators to establish rights over recolorized NeRF is crucial. While approaches like CopyRNeRF have been introduced to embed binary messages into NeRF models as digital signatures for copyright protection, the process of recolorization can remove these binary messages. In our paper, we present GeometrySticker, a method for seamlessly integrating binary messages into the geometry components of radiance fields, akin to applying a sticker. GeometrySticker can embed binary messages into NeRF models while preserving the effectiveness of these messages against recolorization. Our comprehensive studies demonstrate that GeometrySticker is adaptable to prevalent NeRF architectures and maintains a commendable level of robustness against various distortions. Project page: https://kevinhuangxf.github.io/GeometrySticker/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,361 |
1908.07066 | Asymptotic degree distributions in random threshold graphs | We discuss several limiting degree distributions for a class of random threshold graphs in the many node regime. This analysis is carried out under a weak assumption on the distribution of the underlying fitness variable. This assumption, which is satisfied by the exponential distribution, determines a natural scaling under which the following limiting results are shown: The nodal degree distribution, i.e., the distribution of any node, converges in distribution to a limiting pmf. However, for each $d=0,1, \ldots $, the fraction of nodes with given degree $d$ converges only in distribution to a non-degenerate random variable $\Pi(d)$ (whose distribution depends on $d$),and not in probability to the aforementioned limiting nodal pmf as is customarily expected. The distribution of $\Pi(d)$ is identified only through its characteristic function. Implications of this result include: (i) The empirical node distribution may not be used as a proxy for or as an estimate to the limiting nodal pmf; (ii) Even in homogeneous graphs, the network-wide degree distribution and the nodal degree distribution may capture vastly different information; and (iii) Random threshold graphs with exponential distributed fitness do not provide an alternative scale-free model to the Barab\'asi-Albert model as was argued by some authors; the two models cannot be meaningfully compared in terms of their degree distributions! | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 142,192 |
2402.01339 | Improving Sequential Recommendations with LLMs | The sequential recommendation problem has attracted considerable research attention in the past few years, leading to the rise of numerous recommendation models. In this work, we explore how Large Language Models (LLMs), which are nowadays introducing disruptive effects in many AI-based applications, can be used to build or improve sequential recommendation approaches. Specifically, we design three orthogonal approaches and hybrids of those to leverage the power of LLMs in different ways. In addition, we investigate the potential of each approach by focusing on its comprising technical aspects and determining an array of alternative choices for each one. We conduct extensive experiments on three datasets and explore a large variety of configurations, including different language models and baseline recommendation models, to obtain a comprehensive picture of the performance of each approach. Among other observations, we highlight that initializing state-of-the-art sequential recommendation models such as BERT4Rec or SASRec with embeddings obtained from an LLM can lead to substantial performance gains in terms of accuracy. Furthermore, we find that fine-tuning an LLM for recommendation tasks enables it to learn not only the tasks, but also concepts of a domain to some extent. We also show that fine-tuning OpenAI GPT leads to considerably better performance than fine-tuning Google PaLM 2. Overall, our extensive experiments indicate a huge potential value of leveraging LLMs in future recommendation approaches. We publicly share the code and data of our experiments to ensure reproducibility. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 425,970 |
2403.19841 | Dealing with Missing Modalities in Multimodal Recommendation: a Feature
Propagation-based Approach | Multimodal recommender systems work by augmenting the representation of the products in the catalogue through multimodal features extracted from images, textual descriptions, or audio tracks characterising such products. Nevertheless, in real-world applications, only a limited percentage of products come with multimodal content to extract meaningful features from, making it hard to provide accurate recommendations. To the best of our knowledge, very little attention has been paid to the problem of missing modalities in multimodal recommendation so far. To this end, our paper comes as a preliminary attempt to formalise and address such an issue. Inspired by the recent advances in graph representation learning, we propose to re-sketch the missing modalities problem as a problem of missing graph node features, so that the state-of-the-art feature propagation algorithm can eventually be applied. Technically, we first project the user-item graph into an item-item one based on co-interactions. Then, leveraging the multimodal similarities among co-interacted items, we apply a modified version of the feature propagation technique to impute the missing multimodal features. Adopted as a pre-processing stage for two recent multimodal recommender systems, our simple approach performs better than other shallower solutions on three popular datasets. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 442,492
2003.02401 | GOMP: Grasp-Optimized Motion Planning for Bin Picking | Rapid and reliable robot bin picking is a critical challenge in automating warehouses, often measured in picks-per-hour (PPH). We explore increasing PPH using faster motions based on optimizing over a set of candidate grasps. The source of this set of grasps is two-fold: (1) grasp-analysis tools such as Dex-Net generate multiple candidate grasps, and (2) each of these grasps has a degree of freedom about which a robot gripper can rotate. In this paper, we present Grasp-Optimized Motion Planning (GOMP), an algorithm that speeds up the execution of a bin-picking robot's operations by incorporating robot dynamics and a set of candidate grasps produced by a grasp planner into an optimizing motion planner. We compute motions by optimizing with sequential quadratic programming (SQP) and iteratively updating trust regions to account for the non-convex nature of the problem. In our formulation, we constrain the motion to remain within the mechanical limits of the robot while avoiding obstacles. We further convert the problem to a time-minimization by repeatedly shortening the time horizon of the trajectory until the SQP is infeasible. In experiments with a UR5, GOMP achieves a speedup of 9x over a baseline planner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 166,928
2401.16158 | Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual
Perception | Mobile device agent based on Multimodal Large Language Models (MLLM) is becoming a popular application. In this paper, we introduce Mobile-Agent, an autonomous multi-modal mobile device agent. Mobile-Agent first leverages visual perception tools to accurately identify and locate both the visual and textual elements within the app's front-end interface. Based on the perceived vision context, it then autonomously plans and decomposes the complex operation task, and navigates the mobile Apps through operations step by step. Different from previous solutions that rely on XML files of Apps or mobile system metadata, Mobile-Agent allows for greater adaptability across diverse mobile operating environments in a vision-centric way, thereby eliminating the necessity for system-specific customizations. To assess the performance of Mobile-Agent, we introduced Mobile-Eval, a benchmark for evaluating mobile device operations. Based on Mobile-Eval, we conducted a comprehensive evaluation of Mobile-Agent. The experimental results indicate that Mobile-Agent achieved remarkable accuracy and completion rates. Even with challenging instructions, such as multi-app operations, Mobile-Agent can still complete the requirements. Code and model will be open-sourced at https://github.com/X-PLUG/MobileAgent. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 424,718 |
2210.10442 | Linguistic Rules-Based Corpus Generation for Native Chinese Grammatical
Error Correction | Chinese Grammatical Error Correction (CGEC) is both a challenging NLP task and a common application in human daily life. Recently, many data-driven approaches are proposed for the development of CGEC research. However, there are two major limitations in the CGEC field: First, the lack of high-quality annotated training corpora prevents the performance of existing CGEC models from being significantly improved. Second, the grammatical errors in widely used test sets are not made by native Chinese speakers, resulting in a significant gap between the CGEC models and the real application. In this paper, we propose a linguistic rules-based approach to construct large-scale CGEC training corpora with automatically generated grammatical errors. Additionally, we present a challenging CGEC benchmark derived entirely from errors made by native Chinese speakers in real-world scenarios. Extensive experiments and detailed analyses not only demonstrate that the training data constructed by our method effectively improves the performance of CGEC models, but also reflect that our benchmark is an excellent resource for further development of the CGEC field. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 324,924 |
2103.16909 | Generating Multi-scale Maps from Remote Sensing Images via Series
Generative Adversarial Networks | Considering the success of generative adversarial networks (GANs) for image-to-image translation, researchers have attempted to translate remote sensing images (RSIs) to maps (rs2map) through GAN for cartography. However, these studies involved limited scales, which hinders multi-scale map creation. By extending their method, multi-scale RSIs can be trivially translated to multi-scale maps (multi-scale rs2map translation) through scale-wise rs2map models trained for certain scales (parallel strategy). However, this strategy has two theoretical limitations. First, inconsistency between various spatial resolutions of multi-scale RSIs and object generalization on multi-scale maps (RS-m inconsistency) increasingly complicates the extraction of geographical information from RSIs for rs2map models with decreasing scale. Second, as rs2map translation is cross-domain, generators incur high computation costs to transform the RSI pixel distribution to that on maps. Thus, we designed a series strategy of generators for multi-scale rs2map translation to address these limitations. In this strategy, high-resolution RSIs are inputted to an rs2map model to output large-scale maps, which are translated to multi-scale maps through series multi-scale map translation models. The series strategy avoids RS-m inconsistency as inputs are high-resolution large-scale RSIs, and reduces the distribution gap in multi-scale map generation through similar pixel distributions among multi-scale maps. Our experimental results showed better quality multi-scale map generation with the series strategy, as shown by average increases of 11.69%, 53.78%, 55.42%, and 72.34% in the structural similarity index, edge structural similarity index, intersection over union (road), and intersection over union (water) for data from Mexico City and Tokyo at zoom levels 17-13. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 227,742
2303.14543 | Topological Pooling on Graphs | Graph neural networks (GNNs) have demonstrated a significant success in various graph learning tasks, from graph classification to anomaly detection. There recently has emerged a number of approaches adopting a graph pooling operation within GNNs, with a goal to preserve graph attributive and structural features during the graph representation learning. However, most existing graph pooling operations suffer from the limitations of relying on node-wise neighbor weighting and embedding, which leads to insufficient encoding of rich topological structures and node attributes exhibited by real-world networks. By invoking the machinery of persistent homology and the concept of landmarks, we propose a novel topological pooling layer and witness complex-based topological embedding mechanism that allow us to systematically integrate hidden topological information at both local and global levels. Specifically, we design new learnable local and global topological representations Wit-TopoPool which allow us to simultaneously extract rich discriminative topological information from graphs. Experiments on 11 diverse benchmark datasets against 18 baseline models in conjunction with graph classification tasks indicate that Wit-TopoPool significantly outperforms all competitors across all datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 354,141 |
2101.02661 | Ask2Transformers: Zero-Shot Domain labelling with Pre-trained Language
Models | In this paper we present a system that exploits different pre-trained Language Models for assigning domain labels to WordNet synsets without any kind of supervision. Furthermore, the system is not restricted to use a particular set of domain labels. We exploit the knowledge encoded within different off-the-shelf pre-trained Language Models and task formulations to infer the domain label of a particular WordNet definition. The proposed zero-shot system achieves a new state-of-the-art on the English dataset used in the evaluation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 214,698 |
2104.09683 | skweak: Weak Supervision Made Easy for NLP | We present skweak, a versatile, Python-based software toolkit enabling NLP developers to apply weak supervision to a wide range of NLP tasks. Weak supervision is an emerging machine learning paradigm based on a simple idea: instead of labelling data points by hand, we use labelling functions derived from domain knowledge to automatically obtain annotations for a given dataset. The resulting labels are then aggregated with a generative model that estimates the accuracy (and possible confusions) of each labelling function. The skweak toolkit makes it easy to implement a large spectrum of labelling functions (such as heuristics, gazetteers, neural models or linguistic constraints) on text data, apply them on a corpus, and aggregate their results in a fully unsupervised fashion. skweak is especially designed to facilitate the use of weak supervision for NLP tasks such as text classification and sequence labelling. We illustrate the use of skweak for NER and sentiment analysis. skweak is released under an open-source license and is available at: https://github.com/NorskRegnesentral/skweak | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 231,312 |
2305.05342 | The Multi-cluster Two-Wave Fading Model | We introduce and characterize the Multi-cluster Two-Wave (MTW) fading model, which generalizes both Durgin's Two-Wave with Diffuse Power (TWDP) and the $\kappa$-$\mu$ models under a common umbrella. The MTW model consists of an arbitrary number of clusters of waves, each of which may include one or two dominant (specular) components. The chief probability functions of the MTW fading model are obtained, including the probability density function, the cumulative distribution function and the generalized moment-generating function. The proposed model is empirically validated using channel measurements in the sub-THz band, and a number of applications are exemplified, including the outage probability in noise-limited and interference-limited scenarios and the energy detection probability. A composite Inverse Gamma (IG)/MTW model is also investigated, thus extending the proposed propagation model to include shadowing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 363,108
2207.02848 | A domain-specific language for describing machine learning datasets | Datasets play a central role in the training and evaluation of machine learning (ML) models. But they are also the root cause of many undesired model behaviors, such as biased predictions. To overcome this situation, the ML community is proposing a data-centric cultural shift where data issues are given the attention they deserve, and more standard practices around the gathering and processing of datasets start to be discussed and established. So far, these proposals are mostly high-level guidelines described in natural language and, as such, they are difficult to formalize and apply to particular datasets. In this sense, and inspired by these proposals, we define a new domain-specific language (DSL) to precisely describe machine learning datasets in terms of their structure, data provenance, and social concerns. We believe this DSL will facilitate any ML initiative to leverage and benefit from this data-centric shift in ML (e.g., selecting the most appropriate dataset for a new project or better replicating other ML results). The DSL is implemented as a Visual Studio Code plugin, and it has been published under an open source license. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 306,640 |
1611.00674 | Fuzzy paraphrases in learning word representations with a lexicon | A synonym of a polysemous word is usually only the paraphrase of one sense among many. When lexicons are used to improve vector-space word representations, such paraphrases are unreliable and bring noise to the vector-space. The prior works use a coefficient to adjust the overall learning of the lexicons. They regard the paraphrases equally. In this paper, we propose a novel approach that regards the paraphrases diversely to alleviate the adverse effects of polysemy. We annotate each paraphrase with a degree of reliability. The paraphrases are randomly eliminated according to the degrees when our model learns word representations. In this way, our approach drops the unreliable paraphrases, keeping more reliable paraphrases at the same time. The experimental results show that the proposed method improves the word vectors. Our approach is an attempt to address the polysemy problem keeping one vector per word. It makes the approach easier to use than the conventional methods that estimate multiple vectors for a word. Our approach also outperforms the prior works in the experiments. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 63,264 |
1810.12483 | Preparing for the Unexpected: Diversity Improves Planning Resilience in
Evolutionary Algorithms | As automatic optimization techniques find their way into industrial applications, the behavior of many complex systems is determined by some form of planner picking the right actions to optimize a given objective function. In many cases, the mapping of plans to objective reward may change due to unforeseen events or circumstances in the real world. In those cases, the planner usually needs some additional effort to adjust to the changed situation and reach its previous level of performance. Whenever we still need to continue polling the planner even during re-planning, it oftentimes exhibits severely lacking performance. In order to improve the planner's resilience to unforeseen change, we argue that maintaining a certain level of diversity amongst the considered plans at all times should be added to the planner's objective. Effectively, we encourage the planner to keep alternative plans to its currently best solution. As an example case, we implement a diversity-aware genetic algorithm using two different metrics for diversity (differing in their generality) and show that the blow in performance due to unexpected change can be severely lessened in the average case. We also analyze the parameter settings necessary for these techniques in order to gain an intuition how they can be incorporated into larger frameworks or process models for software and systems engineering. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 111,778 |
1707.08552 | A Robust Multi-Batch L-BFGS Method for Machine Learning | This paper describes an implementation of the L-BFGS method designed to deal with two adversarial situations. The first occurs in distributed computing environments where some of the computational nodes devoted to the evaluation of the function and gradient are unable to return results on time. A similar challenge occurs in a multi-batch approach in which the data points used to compute function and gradients are purposely changed at each iteration to accelerate the learning process. Difficulties arise because L-BFGS employs gradient differences to update the Hessian approximations, and when these gradients are computed using different data points the updating process can be unstable. This paper shows how to perform stable quasi-Newton updating in the multi-batch setting, studies the convergence properties for both convex and nonconvex functions, and illustrates the behavior of the algorithm in a distributed computing platform on binary classification logistic regression and neural network training problems that arise in machine learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 77,850 |
2307.00908 | Quantum Machine Learning on Near-Term Quantum Devices: Current State of
Supervised and Unsupervised Techniques for Real-World Applications | The past decade has witnessed significant advancements in quantum hardware, encompassing improvements in speed, qubit quantity, and quantum volume-a metric defining the maximum size of a quantum circuit effectively implementable on near-term quantum devices. This progress has led to a surge in Quantum Machine Learning (QML) applications on real hardware, aiming to achieve quantum advantage over classical approaches. This survey focuses on selected supervised and unsupervised learning applications executed on quantum hardware, specifically tailored for real-world scenarios. The exploration includes a thorough analysis of current QML implementation limitations on quantum hardware, covering techniques like encoding, ansatz structure, error mitigation, and gradient methods to address these challenges. Furthermore, the survey evaluates the performance of QML implementations in comparison to classical counterparts. In conclusion, we discuss existing bottlenecks related to applying QML on real quantum devices and propose potential solutions to overcome these challenges in the future. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 377,174 |
2001.07537 | AI Trust in business processes: The need for process-aware explanations | Business processes underpin a large number of enterprise operations including processing loan applications, managing invoices, and insurance claims. There is a large opportunity for infusing AI to reduce cost or provide better customer experience, and the business process management (BPM) literature is rich in machine learning solutions including unsupervised learning to gain insights on clusters of process traces, classification models to predict the outcomes, duration, or paths of partial process traces, extracting business process from documents, and models to recommend how to optimize a business process or navigate decision points. More recently, deep learning models including those from the NLP domain have been applied to process predictions. Unfortunately, very little of these innovations have been applied and adopted by enterprise companies. We assert that a large reason for the lack of adoption of AI models in BPM is that business users are risk-averse and do not implicitly trust AI models. There has, unfortunately, been little attention paid to explaining model predictions to business users with process context. We challenge the BPM community to build on the AI interpretability literature, and the AI Trust community to understand | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 161,062 |
2402.00190 | REACT: Two Datasets for Analyzing Both Human Reactions and Evaluative
Feedback to Robots Over Time | Recent work in Human-Robot Interaction (HRI) has shown that robots can leverage implicit communicative signals from users to understand how they are being perceived during interactions. For example, these signals can be gaze patterns, facial expressions, or body motions that reflect internal human states. To facilitate future research in this direction, we contribute the REACT database, a collection of two datasets of human-robot interactions that display users' natural reactions to robots during a collaborative game and a photography scenario. Further, we analyze the datasets to show that interaction history is an important factor that can influence human reactions to robots. As a result, we believe that future models for interpreting implicit feedback in HRI should explicitly account for this history. REACT opens up doors to this possibility in the future. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 425,512 |
2405.19631 | Leveraging Open-Source Large Language Models for encoding Social
Determinants of Health using an Intelligent Router | Social Determinants of Health (SDOH) play a significant role in patient health outcomes. The Center of Disease Control (CDC) introduced a subset of ICD-10 codes called Z-codes in an attempt to officially recognize and measure SDOH in the health care system. However, these codes are rarely annotated in a patient's Electronic Health Record (EHR), and instead, in many cases, need to be inferred from clinical notes. Previous research has shown that large language models (LLMs) show promise on extracting unstructured data from EHRs. However, with thousands of models to choose from with unique architectures and training sets, it's difficult to choose one model that performs the best on coding tasks. Further, clinical notes contain trusted health information making the use of closed-source language models from commercial vendors difficult, so the identification of open source LLMs that can be run within health organizations and exhibits high performance on SDOH tasks is an urgent problem. Here, we introduce an intelligent routing system for SDOH coding that uses a language model router to direct medical record data to open source LLMs that demonstrate optimal performance on specific SDOH codes. The intelligent routing system exhibits state of the art performance of 97.4% accuracy averaged across 5 codes, including homelessness and food insecurity, on par with closed models such as GPT-4o. In order to train the routing system and validate models, we also introduce a synthetic data generation and validation paradigm to increase the scale of training data without needing privacy protected medical records. Together, we demonstrate an architecture for intelligent routing of inputs to task-optimal language models to achieve high performance across a set of medical coding sub-tasks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 458,976
2207.08766 | Word Play for Playing Othello (Reverses) | Language models like OpenAI's Generative Pre-Trained Transformers (GPT-2/3) capture the long-term correlations needed to generate text in a variety of domains (such as language translators) and recently in gameplay (chess, Go, and checkers). The present research applies both the larger (GPT-3) and smaller (GPT-2) language models to explore the complex strategies for the game of Othello (or Reverses). Given the game rules for rapid reversals of fortune, the language model not only represents a candidate predictor of the next move based on previous game moves but also avoids sparse rewards in gameplay. The language model automatically captures or emulates championship-level strategies. The fine-tuned GPT-2 model generates Othello games ranging from 13-71% completion, while the larger GPT-3 model reaches 41% of a complete game. Like previous work with chess and Go, these language models offer a novel way to generate plausible game archives, particularly for comparing opening moves across a larger sample than humanly possible to explore. A primary contribution of these models magnifies (by two-fold) the previous record for player archives (120,000 human games over 45 years from 1977-2022), thus supplying the research community with more diverse and original strategies for sampling with other reinforcement learning techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 308,681 |
2205.13797 | AsyncFedED: Asynchronous Federated Learning with Euclidean Distance
based Adaptive Weight Aggregation | In an asynchronous federated learning framework, the server updates the global model once it receives an update from a client instead of waiting for all the updates to arrive as in the synchronous setting. This allows heterogeneous devices with varied computing power to train the local models without pausing, thereby speeding up the training process. However, it introduces the stale model problem, where the newly arrived update was calculated based on a set of stale weights that are older than the current global model, which may hurt the convergence of the model. In this paper, we present an asynchronous federated learning framework with a proposed adaptive weight aggregation algorithm, referred to as AsyncFedED. To the best of our knowledge this aggregation method is the first to take the staleness of the arrived gradients, measured by the Euclidean distance between the stale model and the current global model, and the number of local epochs that have been performed, into account. Assuming general non-convex loss functions, we prove the convergence of the proposed method theoretically. Numerical results validate the effectiveness of the proposed AsyncFedED in terms of the convergence rate and model accuracy compared to the existing methods for three considered tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 299,094 |
1809.04059 | Neural-Augmented Static Analysis of Android Communication | We address the problem of discovering communication links between applications in the popular Android mobile operating system, an important problem for security and privacy in Android. Any scalable static analysis in this complex setting is bound to produce an excessive amount of false-positives, rendering it impractical. To improve precision, we propose to augment static analysis with a trained neural-network model that estimates the probability that a communication link truly exists. We describe a neural-network architecture that encodes abstractions of communicating objects in two applications and estimates the probability with which a link indeed exists. At the heart of our architecture are type-directed encoders (TDE), a general framework for elegantly constructing encoders of a compound data type by recursively composing encoders for its constituent types. We evaluate our approach on a large corpus of Android applications, and demonstrate that it achieves very high accuracy. Further, we conduct thorough interpretability studies to understand the internals of the learned neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 107,460 |
2403.12188 | PETScML: Second-order solvers for training regression problems in
Scientific Machine Learning | In recent years, we have witnessed the emergence of scientific machine learning as a data-driven tool for the analysis, by means of deep-learning techniques, of data produced by computational science and engineering applications. At the core of these methods is the supervised training algorithm to learn the neural network realization, a highly non-convex optimization problem that is usually solved using stochastic gradient methods. However, distinct from deep-learning practice, scientific machine-learning training problems feature a much larger volume of smooth data and better characterizations of the empirical risk functions, which make them suited for conventional solvers for unconstrained optimization. We introduce a lightweight software framework built on top of the Portable and Extensible Toolkit for Scientific computation to bridge the gap between deep-learning software and conventional solvers for unconstrained minimization. We empirically demonstrate the superior efficacy of a trust region method based on the Gauss-Newton approximation of the Hessian in improving the generalization errors arising from regression tasks when learning surrogate models for a wide range of scientific machine-learning techniques and test cases. All the conventional second-order solvers tested, including L-BFGS and inexact Newton with line-search, compare favorably, either in terms of cost or accuracy, with the adaptive first-order methods used to validate the surrogate models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 439,066 |
1710.04656 | Behavioral Communities and the Atomic Structure of Networks | When people prefer to coordinate their behaviors with their friends -- e.g., choosing whether to adopt a new technology, to protest against a government, to attend university -- divisions within a social network can sustain different behaviors in different parts of the network. We define a society's `behavioral communities' via its network's `atoms': groups of people who adopt the same behavior in every equilibrium. We analyze how the atoms change with the intensity of the peer effects, and characterize the atoms in a prominent class of network models. We show that using knowledge of atoms to seed the diffusion of a behavior significantly increases diffusion compared to seeding based on standard community detection algorithms. We also show how to use observed behaviors to estimate the intensity of peer effects. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 82,507 |
2012.11905 | GANterfactual - Counterfactual Explanations for Medical Non-Experts
using Generative Adversarial Learning | With the ongoing rise of machine learning, the need for methods for explaining decisions made by artificial intelligence systems is becoming a more and more important topic. Especially for image classification tasks, many state-of-the-art tools to explain such classifiers rely on visual highlighting of important areas of the input data. Contrary, counterfactual explanation systems try to enable a counterfactual reasoning by modifying the input image in a way such that the classifier would have made a different prediction. By doing so, the users of counterfactual explanation systems are equipped with a completely different kind of explanatory information. However, methods for generating realistic counterfactual explanations for image classifiers are still rare. Especially in medical contexts, where relevant information often consists of textural and structural information, high-quality counterfactual images have the potential to give meaningful insights into decision processes. In this work, we present GANterfactual, an approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques. Additionally, we conduct a user study to evaluate our approach in an exemplary medical use case. Our results show that, in the chosen medical use-case, counterfactual explanations lead to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems that work with saliency maps, namely LIME and LRP. | true | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 212,777 |
2103.08933 | Reweighting Augmented Samples by Minimizing the Maximal Expected Loss | Data augmentation is an effective technique to improve the generalization of deep neural networks. However, previous data augmentation methods usually treat the augmented samples equally without considering their individual impacts on the model. To address this, for the augmented samples from the same training example, we propose to assign different weights to them. We construct the maximal expected loss which is the supremum over any reweighted loss on augmented samples. Inspired by adversarial training, we minimize this maximal expected loss (MMEL) and obtain a simple and interpretable closed-form solution: more attention should be paid to augmented samples with large loss values (i.e., harder examples). Minimizing this maximal expected loss enables the model to perform well under any reweighting strategy. The proposed method can generally be applied on top of any data augmentation methods. Experiments are conducted on both natural language understanding tasks with token-level data augmentation, and image classification tasks with commonly-used image augmentation techniques like random crop and horizontal flip. Empirical results show that the proposed method improves the generalization performance of the model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 225,030 |
2406.04671 | The Reasonable Person Standard for AI | As AI systems are increasingly incorporated into domains where human behavior has set the norm, a challenge for AI governance and AI alignment research is to regulate their behavior in a way that is useful and constructive for society. One way to answer this question is to ask: how do we govern the human behavior that the models are emulating? To evaluate human behavior, the American legal system often uses the "Reasonable Person Standard." The idea of "reasonable" behavior comes up in nearly every area of law. The legal system often judges the actions of parties with respect to what a reasonable person would have done under similar circumstances. This paper argues that the reasonable person standard provides useful guidelines for the type of behavior we should develop, probe, and stress-test in models. It explains how reasonableness is defined and used in key areas of the law using illustrative cases, how the reasonable person standard could apply to AI behavior in each of these areas and contexts, and how our societal understanding of "reasonable" behavior provides useful technical goals for AI researchers. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 461,788 |
2407.00581 | MasonTigers at SemEval-2024 Task 10: Emotion Discovery and Flip
Reasoning in Conversation with Ensemble of Transformers and Prompting | In this paper, we present MasonTigers' participation in SemEval-2024 Task 10, a shared task aimed at identifying emotions and understanding the rationale behind their flips within monolingual English and Hindi-English code-mixed dialogues. This task comprises three distinct subtasks - emotion recognition in conversation for Hindi-English code-mixed dialogues, emotion flip reasoning for Hindi-English code-mixed dialogues, and emotion flip reasoning for English dialogues. Our team, MasonTigers, contributed to each subtask, focusing on developing methods for accurate emotion recognition and reasoning. By leveraging our approaches, we attained impressive F1-scores of 0.78 for the first task and 0.79 for both the second and third tasks. This performance not only underscores the effectiveness of our methods across different aspects of the task but also secured us the top rank in the first and third subtasks, and the 2nd rank in the second subtask. Through extensive experimentation and analysis, we provide insights into our system's performance and contributions to each subtask. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 468,923 |
2307.08994 | Human Action Recognition in Still Images Using ConViT | Understanding the relationship between different parts of an image is crucial in a variety of applications, including object recognition, scene understanding, and image classification. Despite the fact that Convolutional Neural Networks (CNNs) have demonstrated impressive results in classifying and detecting objects, they lack the capability to extract the relationship between different parts of an image, which is a crucial factor in Human Action Recognition (HAR). To address this problem, this paper proposes a new module that functions like a convolutional layer that uses Vision Transformer (ViT). In the proposed model, the Vision Transformer can complement a convolutional neural network in a variety of tasks by helping it to effectively extract the relationship among various parts of an image. It is shown that the proposed model, compared to a simple CNN, can extract meaningful parts of an image and suppress the misleading parts. The proposed model has been evaluated on the Stanford40 and PASCAL VOC 2012 action datasets and has achieved 95.5% mean Average Precision (mAP) and 91.5% mAP results, respectively, which are promising compared to other state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 380,014 |
2309.11419 | KOSMOS-2.5: A Multimodal Literate Model | The automatic reading of text-intensive images represents a significant advancement toward achieving Artificial General Intelligence (AGI). In this paper we present KOSMOS-2.5, a multimodal literate model for machine reading of text-intensive images. Pre-trained on a large-scale corpus of text-intensive images, KOSMOS-2.5 excels in two distinct yet complementary transcription tasks: (1) generating spatially-aware text blocks, where each block of text is assigned spatial coordinates within the image, and (2) producing structured text output that captures both style and structure in markdown format. This unified multimodal literate capability is achieved through a shared decoder-only autoregressive Transformer architecture and task-specific prompts. Building on this foundation, we fine-tune KOSMOS-2.5 for document understanding tasks, resulting in a document understanding generalist named KOSMOS-2.5-CHAT. Additionally, a large corpus of 357.4 million document pages spanning diverse domains was curated for pre-training. We evaluate KOSMOS-2.5 on two newly proposed benchmarks, OCREval and MarkdownEval, for document-level text recognition and image-to-markdown generation, demonstrating impressive literate capabilities comparable to GPT-4o. KOSMOS-2.5-CHAT achieves performance comparable to other state-of-the-art generalists that are five times larger (1.3B vs. 7B) across nine text-rich visual question answering benchmarks. Models and code have been available at \url{https://aka.ms/kosmos25}. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 393,401 |
2308.06103 | Composable Function-preserving Expansions for Transformer Architectures | Training state-of-the-art neural networks requires a high cost in terms of compute and time. Model scale is recognized to be a critical factor to achieve and improve the state-of-the-art. Increasing the scale of a neural network normally requires restarting from scratch by randomly initializing all the parameters of the model, as this implies a change of architecture's parameters that does not allow for a straightforward transfer of knowledge from smaller size models. In this work, we propose six composable transformations to incrementally increase the size of transformer-based neural networks while preserving functionality, allowing to expand the capacity of the model as needed. We provide proof of exact function preservation under minimal initialization constraints for each transformation. The proposed methods may enable efficient training pipelines for larger and more powerful models by progressively expanding the architecture throughout training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 385,034 |
1802.09153 | PBGen: Partial Binarization of Deconvolution-Based Generators for Edge
Intelligence | This work explores the binarization of the deconvolution-based generator in a GAN for memory saving and speedup of image construction. Our study suggests that different from convolutional neural networks (including the discriminator) where all layers can be binarized, only some of the layers in the generator can be binarized without significant performance loss. Supported by theoretical analysis and verified by experiments, a direct metric based on the dimension of deconvolution operations is established, which can be used to quickly decide which layers in the generator can be binarized. Our results also indicate that both the generator and the discriminator should be binarized simultaneously for balanced competition and better performance. Experimental results based on CelebA suggest that directly applying state-of-the-art binarization techniques to all the layers of the generator will lead to 2.83$\times$ performance loss measured by sliced Wasserstein distance compared with the original generator, while applying them to selected layers only can yield up to 25.81$\times$ saving in memory consumption, and 1.96$\times$ and 1.32$\times$ speedup in inference and training respectively with little performance loss. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 91,280 |
2205.09016 | A weakly supervised framework for high-resolution crop yield forecasts | Predictor inputs and label data for crop yield forecasting are not always available at the same spatial resolution. We propose a deep learning framework that uses high resolution inputs and low resolution labels to produce crop yield forecasts for both spatial levels. The forecasting model is calibrated by weak supervision from low resolution crop area and yield statistics. We evaluated the framework by disaggregating regional yields in Europe from parent statistical regions to sub-regions for five countries (Germany, Spain, France, Hungary, Italy) and two crops (soft wheat and potatoes). Performance of weakly supervised models was compared with linear trend models and Gradient-Boosted Decision Trees (GBDT). Higher resolution crop yield forecasts are useful to policymakers and other stakeholders. Weakly supervised deep learning methods provide a way to produce such forecasts even in the absence of high resolution yield data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,124 |
2403.18962 | High Recall, Small Data: The Challenges of Within-System Evaluation in a
Live Legal Search System | This paper illustrates some challenges of common ranking evaluation methods for legal information retrieval (IR). We show these challenges with log data from a live legal search system and two user studies. We provide an overview of aspects of legal IR, and the implications of these aspects for the expected challenges of common evaluation methods: test collections based on explicit and implicit feedback, user surveys, and A/B testing. Next, we illustrate the challenges of common evaluation methods using data from a live, commercial, legal search engine. We specifically focus on methods for monitoring the effectiveness of (continuous) changes to document ranking by a single IR system over time. We show how the combination of characteristics in legal IR systems and limited user data can lead to challenges that cause the common evaluation methods discussed to be sub-optimal. In our future work we will therefore focus on less common evaluation methods, such as cost-based evaluation models. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 442,144 |
2211.16965 | Privacy-Preserving Federated Deep Clustering based on GAN | Federated clustering (FC) is an essential extension of centralized clustering designed for the federated setting, wherein the challenge lies in constructing a global similarity measure without the need to share private data. Conventional approaches to FC typically adopt extensions of centralized methods, like K-means and fuzzy c-means. However, these methods are susceptible to non-independent-and-identically-distributed (non-IID) data among clients, leading to suboptimal performance, particularly with high-dimensional data. In this paper, we present a novel approach to address these limitations by proposing a Privacy-Preserving Federated Deep Clustering based on Generative Adversarial Networks (GANs). Each client trains a local generative adversarial network (GAN) locally and uploads the synthetic data to the server. The server applies a deep clustering network on the synthetic data to establish $k$ cluster centroids, which are then downloaded to the clients for cluster assignment. Theoretical analysis demonstrates that the GAN-generated samples, shared among clients, inherently uphold certain privacy guarantees, safeguarding the confidentiality of individual data. Furthermore, extensive experimental evaluations showcase the effectiveness and utility of our proposed method in achieving accurate and privacy-preserving federated clustering. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 333,813 |
2403.03551 | Enhanced Low-Dose CT Image Reconstruction by Domain and Task Shifting
Gaussian Denoisers | Computed tomography from a low radiation dose (LDCT) is challenging due to high noise in the projection data. Popular approaches for LDCT image reconstruction are two-stage methods, typically consisting of the filtered backprojection (FBP) algorithm followed by a neural network for LDCT image enhancement. Two-stage methods are attractive for their simplicity and potential for computational efficiency, typically requiring only a single FBP and a neural network forward pass for inference. However, the best reconstruction quality is currently achieved by unrolled iterative methods (Learned Primal-Dual and ItNet), which are more complex and thus have a higher computational cost for training and inference. We propose a method combining the simplicity and efficiency of two-stage methods with state-of-the-art reconstruction quality. Our strategy utilizes a neural network pretrained for Gaussian noise removal from natural grayscale images, fine-tuned for LDCT image enhancement. We call this method FBP-DTSGD (Domain and Task Shifted Gaussian Denoisers) as the fine-tuning is a task shift from Gaussian denoising to enhancing LDCT images and a domain shift from natural grayscale to LDCT images. An ablation study with three different pretrained Gaussian denoisers indicates that the performance of FBP-DTSGD does not depend on a specific denoising architecture, suggesting future advancements in Gaussian denoising could benefit the method. The study also shows that pretraining on natural images enhances LDCT reconstruction quality, especially with limited training data. Notably, pretraining involves no additional cost, as existing pretrained models are used. The proposed method currently holds the top mean position in the LoDoPaB-CT challenge. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 435,244 |
1910.02449 | Investigation of Channel Estimation Techniques with 1-bit Quantization
and Oversampling for Multiple-Antenna Systems | Large-scale multiple-antenna systems have been identified as a promising technology for the next generation of wireless systems. However, by scaling up the number of receive antennas the energy consumption will also increase. One possible solution is to use low-resolution analog-to-digital converters at the receiver. This paper considers large-scale multiple-antenna uplink systems with 1-bit analog-to-digital converters on each receive antenna. Since oversampling can partially compensate for the information loss caused by the coarse quantization, the received signals are firstly oversampled by a factor M. We then propose a low-resolution aware linear minimum mean-squared error channel estimator for 1-bit oversampled systems. Moreover, we characterize analytically the performance of the proposed channel estimator by deriving an upper bound on the Bayesian Cram\'er-Rao bound. Numerical results are provided to illustrate the performance of the proposed channel estimator. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 148,249 |
2403.16915 | Coarse-Tuning for Ad-hoc Document Retrieval Using Pre-trained Language Models | Fine-tuning in information retrieval systems using pre-trained language models (PLM-based IR) requires learning query representations and query-document relations, in addition to downstream task-specific learning. This study introduces coarse-tuning as an intermediate learning stage that bridges pre-training and fine-tuning. By learning query representations and query-document relations in coarse-tuning, we aim to reduce the load of fine-tuning and improve the learning effect of downstream IR tasks. We propose Query-Document Pair Prediction (QDPP) for coarse-tuning, which predicts the appropriateness of query-document pairs. Evaluation experiments show that the proposed method significantly improves MRR and/or nDCG@5 in four ad-hoc document retrieval datasets. Furthermore, the results of the query prediction task suggested that coarse-tuning facilitated learning of query representation and query-document relations. | false | false | false | false | true | true | true | false | true | false | false | false | false | false | false | false | false | false | 441,243 |
2404.05555 | On the Convergence of Continual Learning with Adaptive Methods | One of the objectives of continual learning is to prevent catastrophic forgetting in learning multiple tasks sequentially, and the existing solutions have been driven by the conceptualization of the plasticity-stability dilemma. However, the convergence of continual learning for each sequential task is less studied so far. In this paper, we provide a convergence analysis of memory-based continual learning with stochastic gradient descent and empirical evidence that training current tasks causes the cumulative degradation of previous tasks. We propose an adaptive method for nonconvex continual learning (NCCL), which adjusts step sizes of both previous and current tasks with the gradients. The proposed method can achieve the same convergence rate as the SGD method when the catastrophic forgetting term which we define in the paper is suppressed at each iteration. Further, we demonstrate that the proposed algorithm improves the performance of continual learning over existing methods for several image classification tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 445,120 |
1412.6595 | Efficient, Optimal $k$-Leader Selection for Coherent, One-Dimensional Formations | We study the problem of optimal leader selection in consensus networks with noisy relative information. The objective is to identify the set of $k$ leaders that minimizes the formation's deviation from the desired trajectory established by the leaders. An optimal leader set can be found by an exhaustive search over all possible leader sets; however, this approach is not scalable to large networks. In recent years, several works have proposed approximation algorithms to the $k$-leader selection problem, yet the question of whether there exists an efficient, non-combinatorial method to identify the optimal leader set remains open. This work takes a first step towards answering this question. We show that, in one-dimensional weighted graphs, namely path graphs and ring graphs, the $k$-leader selection problem can be solved in polynomial time (in both $k$ and the network size $n$). We give an $O(n^3)$ solution for optimal $k$-leader selection in path graphs and an $O(kn^3)$ solution for optimal $k$-leader selection in ring graphs. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 38,666 |
2402.10074 | Class-Balanced and Reinforced Active Learning on Graphs | Graph neural networks (GNNs) have demonstrated significant success in various applications, such as node classification, link prediction, and graph classification. Active learning for GNNs aims to query the valuable samples from the unlabeled data for annotation to maximize the GNNs' performance at a lower cost. However, most existing algorithms for reinforced active learning in GNNs may lead to a highly imbalanced class distribution, especially in highly skewed class scenarios. GNNs trained with class-imbalanced labeled data are susceptible to bias toward majority classes, and the lower performance of minority classes may lead to a decline in overall performance. To tackle this issue, we propose a novel class-balanced and reinforced active learning framework for GNNs, namely, GCBR. It learns an optimal policy to acquire class-balanced and informative nodes for annotation, maximizing the performance of GNNs trained with selected labeled nodes. GCBR designs class-balance-aware states, as well as a reward function that achieves trade-off between model performance and class balance. The reinforcement learning algorithm Advantage Actor-Critic (A2C) is employed to learn an optimal policy stably and efficiently. We further upgrade GCBR to GCBR++ by introducing a punishment mechanism to obtain a more class-balanced labeled set. Extensive experiments on multiple datasets demonstrate the effectiveness of the proposed approaches, achieving superior performance over state-of-the-art baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,801 |
1301.3841 | Computational Investigation of Low-Discrepancy Sequences in Simulation Algorithms for Bayesian Networks | Monte Carlo sampling has become a major vehicle for approximate inference in Bayesian networks. In this paper, we investigate a family of related simulation approaches, known collectively as quasi-Monte Carlo methods based on deterministic low-discrepancy sequences. We first outline several theoretical aspects of deterministic low-discrepancy sequences, show three examples of such sequences, and then discuss practical issues related to applying them to belief updating in Bayesian networks. We propose an algorithm for selecting direction numbers for Sobol sequence. Our experimental results show that low-discrepancy sequences (especially Sobol sequence) significantly improve the performance of simulation algorithms in Bayesian networks compared to Monte Carlo sampling. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 21,153 |
2304.11721 | A Lightweight Constrained Generation Alternative for Query-focused Summarization | Query-focused summarization (QFS) aims to provide a summary of a document that satisfies information need of a given query and is useful in various IR applications, such as abstractive snippet generation. Current QFS approaches typically involve injecting additional information, e.g. query-answer relevance or fine-grained token-level interaction between a query and document, into a finetuned large language model. However, these approaches often require extra parameters \& training, and generalize poorly to new dataset distributions. To mitigate this, we propose leveraging a recently developed constrained generation model Neurological Decoding (NLD) as an alternative to current QFS regimes which rely on additional sub-architectures and training. We first construct lexical constraints by identifying important tokens from the document using a lightweight gradient attribution model, then subsequently force the generated summary to satisfy these constraints by directly manipulating the final vocabulary likelihood. This lightweight approach requires no additional parameters or finetuning as it utilizes both an off-the-shelf neural retrieval model to construct the constraints and a standard generative language model to produce the QFS. We demonstrate the efficacy of this approach on two public QFS collections achieving near parity with the state-of-the-art model with substantially reduced complexity. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 359,937 |
2107.13720 | Convolutional Transformer based Dual Discriminator Generative Adversarial Networks for Video Anomaly Detection | Detecting abnormal activities in real-world surveillance videos is an important yet challenging task as the prior knowledge about video anomalies is usually limited or unavailable. Despite that many approaches have been developed to resolve this problem, few of them can capture the normal spatio-temporal patterns effectively and efficiently. Moreover, existing works seldom explicitly consider the local consistency at frame level and global coherence of temporal dynamics in video sequences. To this end, we propose Convolutional Transformer based Dual Discriminator Generative Adversarial Networks (CT-D2GAN) to perform unsupervised video anomaly detection. Specifically, we first present a convolutional transformer to perform future frame prediction. It contains three key components, i.e., a convolutional encoder to capture the spatial information of the input video clips, a temporal self-attention module to encode the temporal dynamics, and a convolutional decoder to integrate spatio-temporal features and predict the future frame. Next, a dual discriminator based adversarial training procedure, which jointly considers an image discriminator that can maintain the local consistency at frame-level and a video discriminator that can enforce the global coherence of temporal dynamics, is employed to enhance the future frame prediction. Finally, the prediction error is used to identify abnormal video frames. Thoroughly empirical studies on three public video anomaly detection datasets, i.e., UCSD Ped2, CUHK Avenue, and Shanghai Tech Campus, demonstrate the effectiveness of the proposed adversarial spatio-temporal modeling framework. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 248,281 |
1706.08789 | Auto-Encoder Guided GAN for Chinese Calligraphy Synthesis | In this paper, we investigate the Chinese calligraphy synthesis problem: synthesizing Chinese calligraphy images with specified style from standard font(eg. Hei font) images (Fig. 1(a)). Recent works mostly follow the stroke extraction and assemble pipeline which is complex in the process and limited by the effect of stroke extraction. We treat the calligraphy synthesis problem as an image-to-image translation problem and propose a deep neural network based model which can generate calligraphy images from standard font images directly. Besides, we also construct a large scale benchmark that contains various styles for Chinese calligraphy synthesis. We evaluate our method as well as some baseline methods on the proposed dataset, and the experimental results demonstrate the effectiveness of our proposed model. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 76,052 |
2207.08312 | A Fast, Autonomous, Bipedal Walking Behavior over Rapid Regions | In trying to build humanoid robots that perform useful tasks in a world built for humans, we address the problem of autonomous locomotion. Humanoid robot planning and control algorithms for walking over rough terrain are becoming increasingly capable. At the same time, commercially available depth cameras have been getting more accurate and GPU computing has become a primary tool in AI research. In this paper, we present a newly constructed behavior control system for achieving fast, autonomous, bipedal walking, without pauses or deliberation. We achieve this using a recently published rapid planar regions perception algorithm, a height map based body path planner, an A* footstep planner, and a momentum-based walking controller. We put these elements together to form a behavior control system supported by modern software development practices and simulation tools. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 308,540 |
2411.01625 | Counterfactual explainability of black-box prediction models | It is crucial to be able to explain black-box prediction models to use them effectively and safely in practice. Most existing tools for model explanations are associational rather than causal, and we use two paradoxical examples to show that such explanations are generally inadequate. Motivated by the concept of genetic heritability in twin studies, we propose a new notion called counterfactual explainability for black-box prediction models. Counterfactual explainability has three key advantages: (1) it leverages counterfactual outcomes and extends methods for global sensitivity analysis (such as functional analysis of variance and Sobol's indices) to a causal setting; (2) it is defined not only for the totality of a set of input factors but also for their interactions (indeed, it is a probability measure on a whole ``explanation algebra''); (3) it also applies to dependent input factors whose causal relationship can be modeled by a directed acyclic graph, thus incorporating causal mechanisms into the explanation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 505,157 |