id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2008.04986 | MOVESTAR: An Open-Source Vehicle Fuel and Emission Model based on USEPA MOVES | In this paper, we introduce an open-source model "MOVESTAR" to calculate the fuel consumption and pollutant emissions of motor vehicles. This model is developed based on the U.S. Environmental Protection Agency's (EPA) Motor Vehicle Emission Simulator (MOVES), which provides an accurate estimate of vehicle emissions under a wide range of user-defined conditions. Originally, MOVES requires users to specify many parameters through its software, including vehicle types, time periods, geographical areas, pollutants, vehicle operating characteristics, and road types. In this paper, MOVESTAR is developed as a simplified version, which only takes the second-by-second vehicle speed data and vehicle type as inputs. To enable easy integration of this model, its source code is provided in various languages, including Python, MATLAB and C++. A case study is introduced in this paper to illustrate the effectiveness of the model in the development of advanced vehicle technology. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 191,374 |
2309.00789 | LinkTransformer: A Unified Package for Record Linkage with Transformer Language Models | Linking information across sources is fundamental to a variety of analyses in social science, business, and government. While large language models (LLMs) offer enormous promise for improving record linkage in noisy datasets, in many domains approximate string matching packages in popular software such as R and Stata remain predominant. These packages have clean, simple interfaces and can be easily extended to a diversity of languages. Our open-source package LinkTransformer aims to extend the familiarity and ease-of-use of popular string matching methods to deep learning. It is a general purpose package for record linkage with transformer LLMs that treats record linkage as a text retrieval problem. At its core is an off-the-shelf toolkit for applying transformer models to record linkage with four lines of code. LinkTransformer contains a rich repository of pre-trained transformer semantic similarity models for multiple languages and supports easy integration of any transformer language model from Hugging Face or OpenAI. It supports standard functionality such as blocking and linking on multiple noisy fields. LinkTransformer APIs also perform other common text data processing tasks, e.g., aggregation, noisy de-duplication, and translation-free cross-lingual linkage. Importantly, LinkTransformer also contains comprehensive tools for efficient model tuning, to facilitate different levels of customization when off-the-shelf models do not provide the required accuracy. Finally, to promote reusability, reproducibility, and extensibility, LinkTransformer makes it easy for users to contribute their custom-trained models to its model hub. By combining transformer language models with intuitive APIs that will be familiar to many users of popular string matching packages, LinkTransformer aims to democratize the benefits of LLMs among those who may be less familiar with deep learning frameworks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 389,421 |
2312.10448 | Resolving Crash Bugs via Large Language Models: An Empirical Study | Crash bugs cause unexpected program behaviors or even termination, requiring high-priority resolution. However, manually resolving crash bugs is challenging and labor-intensive, and researchers have proposed various techniques for their automated localization and repair. ChatGPT, a recent large language model (LLM), has garnered significant attention due to its exceptional performance across various domains. This work performs the first investigation into ChatGPT's capability to resolve real-world crash bugs, focusing on its effectiveness in both localizing and repairing code-related and environment-related crash bugs. Specifically, we initially assess ChatGPT's fundamental ability to resolve crash bugs with basic prompts in a single iteration. We observe that ChatGPT performs better at resolving code-related crash bugs compared to environment-related ones, and its primary challenge in resolution lies in inaccurate localization. Additionally, we explore ChatGPT's potential with various advanced prompts. Furthermore, we find that stimulating ChatGPT's self-planning leads it to methodically investigate each potential crash-causing environmental factor through proactive inquiry, ultimately identifying the root cause of the crash. Based on our findings, we propose IntDiagSolver, an interaction methodology designed to facilitate precise crash bug resolution through continuous interaction with LLMs. Evaluating IntDiagSolver on multiple LLMs, including ChatGPT, Claude, and CodeLlama, reveals consistent enhancement in the accuracy of crash bug resolution. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 416,175 |
2411.01391 | Differentiable Quantum Computing for Large-scale Linear Control | As industrial models and designs grow increasingly complex, the demand for optimal control of large-scale dynamical systems has significantly increased. However, traditional methods for optimal control incur significant overhead as problem dimensions grow. In this paper, we introduce an end-to-end quantum algorithm for linear-quadratic control with provable speedups. Our algorithm, based on a policy gradient method, incorporates a novel quantum subroutine for solving the matrix Lyapunov equation. Specifically, we build a quantum-assisted differentiable simulator for efficient gradient estimation that is more accurate and robust than classical methods relying on stochastic approximation. Compared to the classical approaches, our method achieves a super-quadratic speedup. To the best of our knowledge, this is the first end-to-end quantum application to linear control problems with provable quantum advantage. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 505,047 |
2205.00176 | Building a Role Specified Open-Domain Dialogue System Leveraging Large-Scale Language Models | Recent open-domain dialogue models have brought numerous breakthroughs. However, building a chat system is not scalable since it often requires a considerable volume of human-human dialogue data, especially when enforcing features such as persona, style, or safety. In this work, we study the challenge of imposing roles on open-domain dialogue systems, with the goal of making the systems maintain consistent roles while conversing naturally with humans. To accomplish this, the system must satisfy a role specification that includes certain conditions on the stated features as well as a system policy on whether or not certain types of utterances are allowed. For this, we propose an efficient data collection framework leveraging in-context few-shot learning of large-scale language models for building a role-satisfying dialogue dataset from scratch. We then compare various architectures for open-domain dialogue systems in terms of meeting role specifications while maintaining conversational abilities. Automatic and human evaluations show that our models return few out-of-bounds utterances, keeping competitive performance on general metrics. We release a Korean dialogue dataset we built for further research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 294,156 |
2203.03888 | ART-Point: Improving Rotation Robustness of Point Cloud Classifiers via Adversarial Rotation | Point cloud classifiers with rotation robustness have been widely discussed in the 3D deep learning community. Most proposed methods either use rotation invariant descriptors as inputs or try to design rotation equivariant networks. However, robust models generated by these methods have limited performance under clean aligned datasets due to modifications on the original classifiers or input space. In this study, for the first time, we show that the rotation robustness of point cloud classifiers can also be acquired via adversarial training with better performance on both rotated and clean datasets. Specifically, our proposed framework named ART-Point regards the rotation of the point cloud as an attack and improves rotation robustness by training the classifier on inputs with Adversarial RoTations. We contribute an axis-wise rotation attack that uses back-propagated gradients of the pre-trained model to effectively find the adversarial rotations. To avoid model over-fitting on adversarial inputs, we construct rotation pools that leverage the transferability of adversarial rotations among samples to increase the diversity of training data. Moreover, we propose a fast one-step optimization to efficiently reach the final robust model. Experiments show that our proposed rotation attack achieves a high success rate and ART-Point can be used on most existing classifiers to improve the rotation robustness while showing better performance on clean datasets than state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,264 |
2304.01999 | Revisiting the Evaluation of Image Synthesis with GANs | A good metric, which promises a reliable comparison between solutions, is essential for any well-defined task. Unlike most vision tasks that have per-sample ground-truth, image synthesis tasks target generating unseen data and hence are usually evaluated through a distributional distance between one set of real samples and another set of generated samples. This study presents an empirical investigation into the evaluation of synthesis performance, with generative adversarial networks (GANs) as a representative of generative models. In particular, we make in-depth analyses of various factors, including how to represent a data point in the representation space, how to calculate a fair distance using selected samples, and how many instances to use from each set. Extensive experiments conducted on multiple datasets and settings reveal several important findings. Firstly, a group of models that include both CNN-based and ViT-based architectures serve as reliable and robust feature extractors for measurement evaluation. Secondly, Centered Kernel Alignment (CKA) provides a better comparison across various extractors and hierarchical layers in one model. Finally, CKA is more sample-efficient and enjoys better agreement with human judgment in characterizing the similarity between two internal data correlations. These findings contribute to the development of a new measurement system, which enables a consistent and reliable re-evaluation of current state-of-the-art generative models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 356,284 |
2106.06010 | Machine Learning Framework for Sensing and Modeling Interference in IoT Frequency Bands | Spectrum scarcity has surfaced as a prominent concern in wireless radio communications with the emergence of new technologies over the past few years. As a result, there is a growing need for a better understanding of spectrum occupancy with newly emerging access technologies supporting the Internet of Things. In this paper, we present a framework to capture and model the traffic behavior of short-time spectrum occupancy for IoT applications in the shared bands to determine the existing interference. The proposed capturing method utilizes a software defined radio to monitor the short bursts of IoT transmissions by capturing the time series data which is converted to power spectral density to extract the observed occupancy. Furthermore, we propose the use of an unsupervised machine learning technique to enhance conventionally implemented energy detection methods. Our experimental results show that the temporal and frequency behavior of the spectrum can be well-captured using the combination of two models, namely, semi-Markov chains and a Poisson-distribution arrival rate. We conduct an extensive measurement campaign in different urban environments and incorporate the spatial effect on the IoT shared spectrum. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 240,312 |
2005.11592 | Revisiting Street-to-Aerial View Image Geo-localization and Orientation Estimation | Street-to-aerial image geo-localization, which matches a query street-view image to the GPS-tagged aerial images in a reference set, has attracted increasing attention recently. In this paper, we revisit this problem and point out the ignored issue about image alignment information. We show that the performance of a simple Siamese network is highly dependent on the alignment setting and the comparison of previous works can be unfair if they have different assumptions. Instead of focusing on the feature extraction under the alignment assumption, we show that improvements in metric learning techniques significantly boost the performance regardless of the alignment. Without leveraging the alignment information, our pipeline outperforms previous works on both panorama and cropped datasets. Furthermore, we conduct visualization to help understand the learned model and the effect of alignment information using Grad-CAM. With our discovery on the approximate rotation-invariant activation maps, we propose a novel method to estimate the orientation/alignment between a pair of cross-view images with unknown alignment information. It achieves state-of-the-art results on the CVUSA dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 178,506 |
2203.03103 | Prediction of transport property via machine learning molecular movements | Molecular dynamics (MD) simulations are increasingly being combined with machine learning (ML) to predict material properties. The molecular configurations obtained from MD are represented by multiple features, such as thermodynamic properties, and are used as the ML input. However, to accurately find the input-output patterns, ML requires a sufficiently sized dataset that depends on the complexity of the ML model. Generating such a large dataset from MD simulations is not ideal because of their high computation cost. In this study, we present a simple supervised ML method to predict the transport properties of materials. To simplify the model, an unsupervised ML method obtains an efficient representation of molecular movements. This method was applied to predict the viscosity of lubricant molecules in confinement with shear flow. Furthermore, simplicity facilitates the interpretation of the model to understand the molecular mechanics of viscosity. We revealed two types of molecular mechanisms that contribute to low viscosity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,971 |
2501.15427 | OpenCharacter: Training Customizable Role-Playing LLMs with Large-Scale Synthetic Personas | Customizable role-playing in large language models (LLMs), also known as character generalization, is gaining increasing attention for its versatility and cost-efficiency in developing and deploying role-playing dialogue agents. This study explores a large-scale data synthesis approach to equip LLMs with character generalization capabilities. We begin by synthesizing large-scale character profiles using personas from Persona Hub and then explore two strategies: response rewriting and response generation, to create character-aligned instructional responses. To validate the effectiveness of our synthetic instruction tuning data for character generalization, we perform supervised fine-tuning (SFT) using the LLaMA-3 8B model. Our best-performing model strengthens the original LLaMA-3 8B Instruct model and achieves performance comparable to GPT-4o models on role-playing dialogue. We release our synthetic characters and instruction-tuning dialogues to support public research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 527,550 |
2312.17650 | Grasping, Part Identification, and Pose Refinement in One Shot with a Tactile Gripper | The rise in additive manufacturing comes with unique opportunities and challenges. Rapid changes to part design and massive part customization distinctive to 3D-Print (3DP) can be easily achieved. Customized parts that are unique, yet exhibit similar features such as dental moulds, shoe insoles, or engine vanes could be industrially manufactured with 3DP. However, the opportunity for massive part customization comes with unique challenges for the existing production paradigm of robotics applications, as the current robotics paradigm for part identification and pose refinement is repetitive, where data-driven and object-dependent approaches are often used. Thus, a bottleneck exists in robotics applications for 3DP parts where massive customization is involved, as it is difficult for feature-based deep learning approaches to distinguish between similar parts such as shoe insoles belonging to different people. As such, we propose a method that augments patterns on 3DP parts so that grasping, part identification, and pose refinement can be executed in one shot with a tactile gripper. We also experimentally evaluate our approach from three perspectives, including real insertion tasks that mimic robotic sorting and packing, achieving excellent classification results, a high insertion success rate of 95%, and sub-millimeter pose refinement accuracy. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 418,819 |
cs/9505102 | Adaptive Load Balancing: A Study in Multi-Agent Learning | We study the process of multi-agent reinforcement learning in the context of load balancing in a distributed system, without use of either central coordination or explicit communication. We first define a precise framework in which to study adaptive load balancing, important features of which are its stochastic nature and the purely local information available to individual agents. Given this framework, we show illuminating results on the interplay between basic adaptive behavior parameters and their effect on system efficiency. We then investigate the properties of adaptive load balancing in heterogeneous populations, and address the issue of exploration vs. exploitation in that context. Finally, we show that naive use of communication may not improve, and might even harm system efficiency. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,307 |
2308.01521 | PPI-NET: End-to-End Parametric Primitive Inference | In engineering applications, line, circle, arc, and point are collectively referred to as primitives, and they play a crucial role in path planning, simulation analysis, and manufacturing. When designing CAD models, engineers typically start by sketching the model's orthographic view on paper or a whiteboard and then translate the design intent into a CAD program. Although this design method is powerful, it often involves challenging and repetitive tasks, requiring engineers to perform numerous similar operations in each design. To address this conversion process, we propose an efficient and accurate end-to-end method that avoids the inefficiency and error accumulation issues associated with using auto-regressive models to infer parametric primitives from hand-drawn sketch images. Since our model samples match the representation format of standard CAD software, they can be imported into CAD software for solving, editing, and applied to downstream design tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 383,275 |
2303.07616 | The Life Cycle of Knowledge in Big Language Models: A Survey | Knowledge plays a critical role in artificial intelligence. Recently, the extensive success of pre-trained language models (PLMs) has attracted significant attention to how knowledge can be acquired, maintained, updated and used by language models. Despite the enormous number of related studies, there is still no unified view of how knowledge circulates within language models throughout the learning, tuning, and application processes, which may prevent us from further understanding the connections between current progress or realizing existing limitations. In this survey, we revisit PLMs as knowledge-based systems by dividing the life cycle of knowledge in PLMs into five critical periods, and investigating how knowledge circulates when it is built, maintained and used. To this end, we systematically review existing studies of each period of the knowledge life cycle, summarize the main challenges and current limitations, and discuss future directions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 351,320 |
2502.12851 | MeMo: Towards Language Models with Associative Memory Mechanisms | Memorization is a fundamental ability of Transformer-based Large Language Models, achieved through learning. In this paper, we propose a paradigm shift by designing an architecture to memorize text directly, bearing in mind the principle that memorization precedes learning. We introduce MeMo, a novel architecture for language modeling that explicitly memorizes sequences of tokens in layered associative memories. By design, MeMo offers transparency and the possibility of model editing, including forgetting texts. We experimented with the MeMo architecture, showing the memorization power of the one-layer and the multi-layer configurations. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 535,077 |
2405.16923 | SA-GS: Semantic-Aware Gaussian Splatting for Large Scene Reconstruction with Geometry Constrain | With the emergence of Gaussian Splats, recent efforts have focused on large-scale scene geometric reconstruction. However, most of these efforts either concentrate on memory reduction or spatial space division, neglecting information in the semantic space. In this paper, we propose a novel method, named SA-GS, for fine-grained 3D geometry reconstruction using semantic-aware 3D Gaussian Splats. Specifically, we leverage prior information stored in large vision models such as SAM and DINO to generate semantic masks. We then introduce a geometric complexity measurement function to serve as soft regularization, guiding the shape of each Gaussian Splat within specific semantic areas. Additionally, we present a method that estimates the expected number of Gaussian Splats in different semantic areas, effectively providing a lower bound for Gaussian Splats in these areas. Subsequently, we extract the point cloud using a novel probability density-based extraction method, transforming Gaussian Splats into a point cloud crucial for downstream tasks. Our method also offers the potential for detailed semantic inquiries while maintaining high image-based reconstruction results. We provide extensive experiments on publicly available large-scale scene reconstruction datasets with highly accurate point clouds as ground truth and our novel dataset. Our results demonstrate the superiority of our method over current state-of-the-art Gaussian Splats reconstruction methods by a significant margin in terms of geometric-based measurement metrics. Code and additional results will soon be available on our project page. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 457,680 |
2203.10821 | Sem2NeRF: Converting Single-View Semantic Masks to Neural Radiance Fields | Image translation and manipulation have gained increasing attention along with the rapid development of deep generative models. Although existing approaches have brought impressive results, they mainly operate in 2D space. In light of recent advances in NeRF-based 3D-aware generative models, we introduce a new task, Semantic-to-NeRF translation, that aims to reconstruct a 3D scene modelled by NeRF, conditioned on one single-view semantic mask as input. To kick off this novel task, we propose the Sem2NeRF framework. In particular, Sem2NeRF addresses the highly challenging task by encoding the semantic mask into the latent code that controls the 3D scene representation of a pre-trained decoder. To further improve the accuracy of the mapping, we integrate a new region-aware learning strategy into the design of both the encoder and the decoder. We verify the efficacy of the proposed Sem2NeRF and demonstrate that it outperforms several strong baselines on two benchmark datasets. Code and video are available at https://donydchen.github.io/sem2nerf/ | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 286,700 |
1801.07386 | Learning Networks from Random Walk-Based Node Similarities | Digital presence in the world of online social media entails significant privacy risks. In this work we consider a privacy threat to a social network in which an attacker has access to a subset of random walk-based node similarities, such as effective resistances (i.e., commute times) or personalized PageRank scores. Using these similarities, the attacker's goal is to infer as much information as possible about the underlying network, including any remaining unknown pairwise node similarities and edges. For the effective resistance metric, we show that with just a small subset of measurements, the attacker can learn a large fraction of edges in a social network, even when the measurements are noisy. We also show that it is possible to learn a graph which accurately matches the underlying network on all other effective resistances. This second observation is interesting from a data mining perspective, since it can be expensive to accurately compute all effective resistances. As an alternative, our graphs learned from just a subset of approximate effective resistances can be used as surrogates in a wide range of applications that use effective resistances to probe graph structure, including for graph clustering, node centrality evaluation, and anomaly detection. We obtain our results by formalizing the graph learning objective mathematically, using two optimization problems. One formulation is convex and can be solved provably in polynomial time. The other is not, but we solve it efficiently with projected gradient and coordinate descent. We demonstrate the effectiveness of these methods on a number of social networks obtained from Facebook. We also discuss how our methods can be generalized to other random walk-based similarities, such as personalized PageRank. Our code is available at https://github.com/cnmusco/graph-similarity-learning. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 88,778 |
2301.06366 | Evaluating clinical diversity and plausibility of synthetic capsule endoscopic images | Wireless Capsule Endoscopy (WCE) is being increasingly used as an alternative imaging modality for complete and non-invasive screening of the gastrointestinal tract. Although this is advantageous in reducing unnecessary hospital admissions, it also demands that a WCE diagnostic protocol be in place so larger populations can be effectively screened. This calls for training and education protocols attuned specifically to this modality. Like training in other modalities such as traditional endoscopy, CT, MRI, etc., a WCE training protocol would require an atlas comprising a large corpus of images that provide vivid depictions of pathologies and abnormalities, ideally observed over a period of time. Since such comprehensive atlases are presently lacking in WCE, in this work, we propose a deep learning method for utilizing already available studies across different institutions for the creation of a realistic WCE atlas using StyleGAN. We identify clinically relevant attributes in WCE such that synthetic images can be generated with selected attributes on cue. Beyond this, we also simulate several disease progression scenarios. The generated images are evaluated for realism and plausibility through three subjective online experiments with the participation of eight gastroenterology experts from three geographical locations and a variety of years of experience. The results from the experiments indicate that the images are highly realistic and the disease scenarios plausible. The images comprising the atlas are available publicly for use in training applications as well as supplementing real datasets for deep learning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,628 |
1911.11759 | Password-conditioned Anonymization and Deanonymization with Face Identity Transformers | Cameras are prevalent in our daily lives, and enable many useful systems built upon computer vision technologies such as smart cameras and home robots for service applications. However, there is also an increasing societal concern as the captured images/videos may contain privacy-sensitive information (e.g., face identity). We propose a novel face identity transformer which enables automated photo-realistic password-based anonymization as well as deanonymization of human faces appearing in visual data. Our face identity transformer is trained to (1) remove face identity information after anonymization, (2) make the recovery of the original face possible when given the correct password, and (3) return a wrong--but photo-realistic--face given a wrong password. Extensive experiments show that our approach enables multimodal password-conditioned face anonymizations and deanonymizations, without sacrificing privacy compared to existing anonymization approaches. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 155,217 |
2410.11853 | GeoLife+: Large-Scale Simulated Trajectory Datasets Calibrated to the GeoLife Dataset | Analyzing individual human trajectory data helps our understanding of human mobility and finds many commercial and academic applications. There are two main approaches to accessing trajectory data for research: one involves using real-world datasets like GeoLife, while the other employs simulations to synthesize data. Real-world data provides insights from real human activities, but such data is generally sparse due to voluntary participation. Conversely, simulated data can be more comprehensive but may capture unrealistic human behavior. In this Data and Resource paper, we combine the benefit of both by leveraging the statistical features of real-world data and the comprehensiveness of simulated data. Specifically, we extract features from the real-world GeoLife dataset such as the average number of individual daily trips, average radius of gyration, and maximum and minimum trip distances. We calibrate the Pattern of Life Simulation, a realistic simulation of human mobility, to reproduce these features. Therefore, we use a genetic algorithm to calibrate the parameters of the simulation to mimic the GeoLife features. For this calibration, we simulated numerous random simulation settings, measured the similarity of generated trajectories to GeoLife, and iteratively (over many generations) combined parameter settings of trajectory datasets most similar to GeoLife. Using the calibrated simulation, we simulate large trajectory datasets that we call GeoLife+, where + denotes the Kleene Plus, indicating unlimited replication with at least one occurrence. We provide simulated GeoLife+ data with 182, 1k, and 5k over 5 years, 10k, and 50k over a year and 100k users over 6 months of simulation lifetime. | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | true | false | 498,746 |
2309.08813 | Control Barrier Function for Linearizable Systems with High Relative
Degrees from Signal Temporal Logics: A Reference Governor Approach | This paper considers the safety-critical navigation problem with Signal Temporal Logic (STL) tasks. We developed an explicit reference governor-guided control barrier function (ERG-guided CBF) method that enables the application of first-order CBFs to high-order linearizable systems. This method significantly reduces the conservativeness of the existing CBF approaches for high-order systems. Furthermore, our framework provides safety-critical guarantees in the sense of obstacle avoidance by constructing the margin of safety and updating direction of safe evolution in the agent's state space. To improve control performance and enhance STL satisfaction, we employ efficient gradient-based methods for iteratively learning optimal parameters of ERG-guided CBF. We validate the algorithm through both high-order linear and nonlinear systems. A video demonstration can be found on: \url{https://youtu.be/ZRmsA2FeFR4} | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 392,331 |
2410.05357 | Model-GLUE: Democratized LLM Scaling for A Large Model Zoo in the Wild | As Large Language Models (LLMs) excel across tasks and specialized domains, scaling LLMs based on existing models has garnered significant attention, which faces the challenge of decreasing performance when combining disparate models. Various techniques have been proposed for the aggregation of pre-trained LLMs, including model merging, Mixture-of-Experts, and stacking. Despite their merits, a comprehensive comparison and synergistic application of them to a diverse model zoo is yet to be adequately addressed. In light of this research gap, this paper introduces Model-GLUE, a holistic LLM scaling guideline. First, our work starts with a benchmarking of existing LLM scaling techniques, especially selective merging, and variants of mixture. Utilizing the insights from the benchmark results, we formulate an optimal strategy for the selection and aggregation of a heterogeneous model zoo characterizing different architectures and initialization. Our methodology involves the clustering of mergeable models and optimal merging strategy selection, and the integration of clusters through a model mixture. Finally, evidenced by our experiments on a diverse Llama-2-based model zoo, Model-GLUE shows an average performance enhancement of 5.61%, achieved without additional training. Codes are available at: https://github.com/Model-GLUE/Model-GLUE. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 495,703
2004.02557 | Distinguish Confusing Law Articles for Legal Judgment Prediction | Legal Judgment Prediction (LJP) is the task of automatically predicting a law case's judgment results given a text describing its facts, which has excellent prospects in judicial assistance systems and convenient services for the public. In practice, confusing charges are frequent, because law cases applicable to similar law articles are easily misjudged. To address this issue, the existing method relies heavily on domain experts, which hinders its application in different law systems. In this paper, we present an end-to-end model, LADAN, to solve the task of LJP. To distinguish confusing charges, we propose a novel graph neural network to automatically learn subtle differences between confusing law articles and design a novel attention mechanism that fully exploits the learned differences to extract compelling discriminative features from fact descriptions attentively. Experiments conducted on real-world datasets demonstrate the superiority of our LADAN. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 171,260
2105.05728 | Early prediction of respiratory failure in the intensive care unit | The development of respiratory failure is common among patients in intensive care units (ICU). Large data quantities from ICU patient monitoring systems make timely and comprehensive analysis by clinicians difficult but are ideal for automatic processing by machine learning algorithms. Early prediction of respiratory system failure could alert clinicians to patients at risk of respiratory failure and allow for early patient reassessment and treatment adjustment. We propose an early warning system that predicts moderate/severe respiratory failure up to 8 hours in advance. Our system was trained on HiRID-II, a data-set containing more than 60,000 admissions to a tertiary care ICU. An alarm is typically triggered several hours before the beginning of respiratory failure. Our system outperforms a clinical baseline mimicking traditional clinical decision-making based on pulse-oximetric oxygen saturation and the fraction of inspired oxygen. To provide model introspection and diagnostics, we developed an easy-to-use web browser-based system to explore model input data and predictions visually. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 234,904 |
2406.16606 | Cherry on the Cake: Fairness is NOT an Optimization Problem | In Fair AI literature, the practice of maliciously creating unfair models that nevertheless satisfy fairness constraints is known as "cherry-picking". A cherry-picking model is a model that makes mistakes on purpose, selecting bad individuals from a minority class instead of better candidates from the same minority. The model literally cherry-picks whom to select to superficially meet the fairness constraints while making minimal changes to the unfair model. This practice has been described as "blatantly unfair" and has a negative impact on already marginalized communities, undermining the intended purpose of fairness measures specifically designed to protect these communities. A common assumption is that cherry-picking arises solely from malicious intent and that models designed only to optimize fairness metrics would avoid this behavior. We show that this is not the case: models optimized to minimize fairness metrics while maximizing performance are often forced to cherry-pick to some degree. In other words, cherry-picking might be an inevitable outcome of the optimization process itself. To demonstrate this, we use tools from fair cake-cutting, a mathematical subfield that studies the problem of fairly dividing a resource, referred to as the "cake," among a number of participants. This concept is connected to supervised multi-label classification: any dataset can be thought of as a cake that needs to be distributed among different labels, and the model is the function that divides the cake. We adapt these classical results for machine learning and demonstrate how this connection can be prolifically used for fairness and classification in general. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | 467,196 |
2310.09769 | Cell-Free Massive MIMO Surveillance Systems | Wireless surveillance, in which untrusted communications links are proactively monitored by legitimate agencies, has started to garner a lot of interest for enhancing national security. In this paper, we propose a new cell-free massive multiple-input multiple-output (CF-mMIMO) wireless surveillance system, where a large number of distributed multi-antenna aided legitimate monitoring nodes (MNs) embark on either observing or jamming untrusted communication links. To facilitate concurrent observing and jamming, a subset of the MNs is selected for monitoring the untrusted transmitters (UTs), while the remaining MNs are selected for jamming the untrusted receivers (URs). We analyze the performance of CF-mMIMO wireless surveillance and derive a closed-form expression for the monitoring success probability of MNs. We then propose a greedy algorithm for the observing vs. jamming mode assignment of MNs, followed by the conception of a jamming transmit power allocation algorithm for maximizing the minimum monitoring success probability concerning all the UT and UR pairs based on the associated long-term channel state information knowledge. In conclusion, our proposed CF-mMIMO system is capable of significantly improving the performance of the MNs compared to that of the state-of-the-art baseline. In scenarios with a moderate number of MNs, our proposed scheme provides an 11-fold improvement in the minimum monitoring success probability compared to its co-located mMIMO benchmarker. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 399,940
2205.15062 | A Transistor Operations Model for Deep Learning Energy Consumption
Scaling Law | Deep Learning (DL) has transformed the automation of a wide range of industries and finds increasing ubiquity in society. The high complexity of DL models and their widespread adoption have led to global energy consumption doubling every 3-4 months. Currently, the relationship between the DL model configuration and energy consumption is not well established. At a general computational energy model level, there is a strong dependency on both the hardware architecture (e.g. generic processors with different configurations of inner components - CPU and GPU, programmable integrated circuits - FPGA) and on different interacting energy consumption aspects (e.g., data movement, calculation, control). At the DL model level, we need to translate non-linear activation functions and their interaction with data into calculation tasks. Current methods mainly linearize nonlinear DL models to approximate their theoretical FLOPs and MACs as a proxy for energy consumption. Yet, this is inaccurate (est. 93\% accuracy) due to the highly nonlinear nature of many convolutional neural networks (CNNs), for example. In this paper, we develop a bottom-level Transistor Operations (TOs) method to expose the role of non-linear activation functions and neural network structure in energy consumption. We translate a range of feedforward and CNN models into ALU calculation tasks and then TO steps. This is then statistically linked to real energy consumption values via a regression model for different hardware configurations and data sets. We show that our proposed TOs method can achieve a 93.61% - 99.51% precision in predicting its energy consumption. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 299,589
2404.08483 | Semantic Communication for Cooperative Multi-Task Processing over
Wireless Networks | In this paper, we investigated semantic communication for multi-task processing using an information-theoretic approach. We introduced the concept of a "semantic source", allowing multiple semantic interpretations from a single observation. We formulated an end-to-end optimization problem taking into account the communication channel, maximizing mutual information (infomax) to design the semantic encoding and decoding process exploiting the statistical relations between semantic variables. To solve the problem, we perform data-driven deep learning employing variational approximation techniques. Our semantic encoder is divided into a common unit and multiple specific units to facilitate cooperative multi-task processing. Simulation results demonstrate the effectiveness of our proposed semantic source and system design when statistical relationships exist, comparing cooperative task processing with independent task processing. However, our findings highlight that cooperative multi-tasking is not always beneficial, emphasizing the importance of statistical relationships between tasks and indicating the need for further investigation into the semantic processing of multiple tasks. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 446,260
2105.04194 | The Modulo Radon Transform: Theory, Algorithms and Applications | Recently, experiments have been reported where researchers were able to perform high dynamic range (HDR) tomography in a heuristic fashion, by fusing multiple tomographic projections. This approach to HDR tomography has been inspired by HDR photography and inherits the same disadvantages. Taking a computational imaging approach to the HDR tomography problem, we here suggest a new model based on the Modulo Radon Transform (MRT), which we rigorously introduce and analyze. By harnessing a joint design between hardware and algorithms, we present a single-shot HDR tomography approach, which, to our knowledge, is the only approach that is backed by mathematical guarantees. On the hardware front, instead of recording the Radon Transform projections that may potentially saturate, we propose to measure modulo values of the same. This ensures that the HDR measurements are folded into a lower dynamic range. On the algorithmic front, our recovery algorithms reconstruct the HDR images from folded measurements. Beyond mathematical aspects such as injectivity and inversion of the MRT for different scenarios including band-limited and approximately compactly supported images, we also provide a first proof-of-concept demonstration. To do so, we implement MRT by experimentally folding tomographic measurements available as an open source data set using our custom designed modulo hardware. Our reconstruction clearly shows the advantages of our approach for experimental data. In this way, our MRT based solution paves a path for HDR acquisition in a number of related imaging problems. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 234,421
2406.09990 | Task segmentation based on transition state clustering for surgical
robot assistance | Understanding surgical tasks represents an important challenge for autonomy in surgical robotic systems. To achieve this, we propose an online task segmentation framework that uses hierarchical transition state clustering to activate predefined robot assistance. Our approach involves performing a first clustering on visual features and a subsequent clustering on robot kinematic features for each visual cluster. This makes it possible to capture relevant task transition information on each modality independently. The approach is implemented for a pick-and-place task commonly found in surgical training. The validation of the transition segmentation showed high accuracy and fast computation time. We have integrated the transition recognition module with predefined robot-assisted tool positioning. The complete framework has shown benefits in reducing task completion time and cognitive workload. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 464,181
2310.14811 | Multi-criteria Optimization of Workflow-Based Assembly Tasks in
Manufacturing | Industrial manufacturing is currently amidst its fourth great revolution, pushing towards the digital transformation of production processes. One key element of this transformation is the formalization and digitization of processes, creating an increased potential to monitor, understand and optimize existing processes. However, one major obstacle in this process is the increased diversification and specialisation, resulting in the dependency on multiple experts, who are rarely amalgamated in small to medium sized companies. To mitigate this issue, this paper presents a novel approach for multi-criteria optimization of workflow-based assembly tasks in manufacturing by combining a workflow modeling framework and the HeuristicLab optimization framework. For this endeavour, a new generic problem definition is implemented in HeuristicLab, enabling the optimization of arbitrary workflows represented with the modeling framework. The resulting Pareto front of the multi-criteria optimization provides the decision makers with a set of optimal workflows from which they can choose to optimally fit the current demands. The advantages of the herein presented approach are highlighted with a real world use case from an ongoing research project. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 402,033
2410.20147 | GFlowNet Fine-tuning for Diverse Correct Solutions in Mathematical
Reasoning Tasks | Mathematical reasoning problems are among the most challenging, as they typically require an understanding of fundamental laws to solve. The laws are universal, but the derivation of the final answer changes depending on how a problem is approached. When training large language models (LLMs), learning the capability of generating such multiple solutions is essential to accelerate their use in mathematical education. To this end, we train LLMs using generative flow network (GFlowNet). Different from reward-maximizing reinforcement learning (RL), GFlowNet fine-tuning seeks to find diverse solutions by training the LLM whose distribution is proportional to a reward function. In numerical experiments, we evaluate GFlowNet fine-tuning and reward-maximizing RL in terms of accuracy and diversity. The results show that GFlowNet fine-tuning derives correct final answers from diverse intermediate reasoning steps, indicating the improvement of the capability of alternative solution generation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 502,670 |
2308.15361 | Thermodynamics-inspired Macroscopic States of Bounded Swarms | The collective behavior of swarms is extremely difficult to estimate or predict, even when the local agent rules are known and simple. The presented work seeks to leverage the similarities between fluids and swarm systems to generate a thermodynamics-inspired characterization of the collective behavior of robotic swarms. While prior works have borrowed tools from fluid dynamics to design swarming behaviors, they have usually avoided the task of generating a fluids-inspired macroscopic state (or macrostate) description of the swarm. This work will bridge the gap by seeking to answer the following question: is it possible to generate a small set of thermodynamics-inspired macroscopic properties that may later be used to quantify all possible collective behaviors of swarm systems? In this paper, we present three macroscopic properties analogous to pressure, temperature, and density of a gas, to describe the behavior of a swarm that is governed by only attractive and repulsive agent interactions. These properties are made to satisfy an equation similar to the ideal gas law, and also generalized to satisfy the virial equation of state for real gases. Finally, we investigate how swarm specifications such as density and average agent velocity affect the system macrostate. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 388,656 |
0911.4507 | On Feasibility of Interference Alignment in MIMO Interference Networks | We explore the feasibility of interference alignment in signal vector space -- based only on beamforming -- for K-user MIMO interference channels. Our main contribution is to relate the feasibility issue to the problem of determining the solvability of a multivariate polynomial system, considered extensively in algebraic geometry. It is well known, e.g. from Bezout's theorem, that generic polynomial systems are solvable if and only if the number of equations does not exceed the number of variables. Following this intuition, we classify signal space interference alignment problems as either proper or improper based on the number of equations and variables. Rigorous connections between feasible and proper systems are made through Bernshtein's theorem for the case where each transmitter uses only one beamforming vector. The multi-beam case introduces dependencies among the coefficients of a polynomial system so that the system is no longer generic in the sense required by both theorems. In this case, we show that the connection between feasible and proper systems can be further strengthened (since the equivalency between feasible and proper systems does not always hold) by including standard information theoretic outer bounds in the feasibility analysis. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,003 |
1308.0273 | Learning Robust Subspace Clustering | We propose a low-rank transformation-learning framework to robustify subspace clustering. Many high-dimensional data, such as face images and motion sequences, lie in a union of low-dimensional subspaces. The subspace clustering problem has been extensively studied in the literature to partition such high-dimensional data into clusters corresponding to their underlying low-dimensional subspaces. However, low-dimensional intrinsic structures are often violated for real-world observations, as they can be corrupted by errors or deviate from ideal models. We propose to address this by learning a linear transformation on subspaces using matrix rank, via its convex surrogate nuclear norm, as the optimization criteria. The learned linear transformation restores a low-rank structure for data from the same subspace, and, at the same time, forces a high-rank structure for data from different subspaces. In this way, we reduce variations within the subspaces, and increase separations between the subspaces for more accurate subspace clustering. This proposed learned robust subspace clustering framework significantly enhances the performance of existing subspace clustering methods. To exploit the low-rank structures of the transformed subspaces, we further introduce a subspace clustering technique, called Robust Sparse Subspace Clustering, which efficiently combines robust PCA with sparse modeling. We also discuss the online learning of the transformation, and learning of the transformation while simultaneously reducing the data dimensionality. Extensive experiments using public datasets are presented, showing that the proposed approach significantly outperforms state-of-the-art subspace clustering methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 26,211 |
1711.09604 | Randomized Physics-based Motion Planning for Grasping in Cluttered and
Uncertain Environments | Planning motions to grasp an object in cluttered and uncertain environments is a challenging task, particularly when a collision-free trajectory does not exist and objects obstructing the way are required to be carefully grasped and moved out. This paper takes a different approach and proposes to address this problem by using a randomized physics-based motion planner that permits robot-object and object-object interactions. The main idea is to avoid an explicit high-level reasoning of the task by providing the motion planner with a physics engine to evaluate possible complex multi-body dynamical interactions. The approach is able to solve the problem in complex scenarios, also considering uncertainty in the objects' pose and in the contact dynamics. The work enhances the state validity checker, the control sampler and the tree exploration strategy of a kinodynamic motion planner called KPIECE. The enhanced algorithm, called p-KPIECE, has been validated in simulation and with real experiments. The results have been compared with an ontological physics-based motion planner and with task and motion planning approaches, resulting in a significant improvement in terms of planning time, success rate and quality of the solution path. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 85,448
1909.06335 | Measuring the Effects of Non-Identical Data Distribution for Federated
Visual Classification | Federated Learning enables visual models to be trained in a privacy-preserving way using real-world data from mobile devices. Given their distributed nature, the statistics of the data across these devices are likely to differ significantly. In this work, we look at the effect such non-identical data distributions have on visual classification via Federated Learning. We propose a way to synthesize datasets with a continuous range of identicalness and provide performance measures for the Federated Averaging algorithm. We show that performance degrades as distributions differ more, and propose a mitigation strategy via server momentum. Experiments on CIFAR-10 demonstrate improved classification performance over a range of non-identicalness, with classification accuracy improved from 30.1% to 76.9% in the most skewed settings. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 145,348
2310.02473 | Prompting-based Temporal Domain Generalization | Machine learning traditionally assumes that the training and testing data are distributed independently and identically. However, in many real-world settings, the data distribution can shift over time, leading to poor generalization of trained models in future time periods. This paper presents a novel prompting-based approach to temporal domain generalization that is parameter-efficient, time-efficient, and does not require access to future data during training. Our method adapts a trained model to temporal drift by learning global prompts, domain-specific prompts, and drift-aware prompts that capture underlying temporal dynamics. Experiments on classification, regression, and time series forecasting tasks demonstrate the generality of the proposed approach. The code repository will be publicly shared. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 396,861 |
1604.03044 | Wisdom of the Crowd or Wisdom of a Few? An Analysis of Users' Content
Generation | In this paper we analyze how user generated content (UGC) is created, challenging the well known {\it wisdom of crowds} concept. Although it is known that user activity in most settings follows a power law, that is, few people do a lot, while most do nothing, there are few studies that characterize this activity well. In our analysis of datasets from two different social networks, Facebook and Twitter, we find that a small percentage of active users, and a much smaller percentage of all users, represent 50\% of the UGC. We also analyze the dynamic behavior of the generation of this content to find that the set of most active users is quite stable in time. Moreover, we study the social graph, finding that those active users are highly connected among them. This implies that most of the wisdom comes from a few users, challenging the independence assumption needed to have a wisdom of crowds. We also address the content that is never seen by any people, which we call the digital desert, which challenges the assumption that the content of every person should be taken into account in a collective decision. We also compare our results with Wikipedia data and we address the quality of UGC content using an Amazon dataset. In the end, our results are not surprising, as the Web is a reflection of our own society, where economical or political power is also in the hands of minorities. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 54,431
1806.10692 | ActiveRemediation: The Search for Lead Pipes in Flint, Michigan | We detail our ongoing work in Flint, Michigan to detect pipes made of lead and other hazardous metals. After elevated levels of lead were detected in residents' drinking water, followed by an increase in blood lead levels in area children, the state and federal governments directed over $125 million to replace water service lines, the pipes connecting each home to the water system. In the absence of accurate records, and with the high cost of determining buried pipe materials, we put forth a number of predictive and procedural tools to aid in the search and removal of lead infrastructure. Alongside these statistical and machine learning approaches, we describe our interactions with government officials in recommending homes for both inspection and replacement, with a focus on the statistical model that adapts to incoming information. Finally, in light of discussions about increased spending on infrastructure development by the federal government, we explore how our approach generalizes beyond Flint to other municipalities nationwide. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 101,575 |
2009.01382 | Application of Transformer Impedance Correction Tables in Power Flow
Studies | Phase Shifting Transformers (PST) are used to control or block certain flows of real power through phase angle regulation across the device. Its functionality is crucial in special situations such as eliminating loop flow through an area and balancing real power flow between parallel paths. Impedance correction tables are used to model the fact that the impedance of phase shifting transformers often varies as a function of their phase angle shift. The focus of this paper is to consider the modeling errors that arise if the impact of this changing impedance is ignored. The simulations are tested through different scenarios using a 37-bus test case and a 10,000-bus synthetic power grid. The results verify the important role of the impedance correction factor in obtaining more accurate and optimal power flow solutions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 194,289
1909.04203 | Novel diffusion-derived distance measures for graphs | We define a new family of similarity and distance measures on graphs, and explore their theoretical properties in comparison to conventional distance metrics. These measures are defined by the solution(s) to an optimization problem which attempts to find a map minimizing the discrepancy between two graph Laplacian exponential matrices, under norm-preserving and sparsity constraints. Variants of the distance metric are introduced to consider such optimized maps under sparsity constraints as well as fixed time-scaling between the two Laplacians. The objective function of this optimization is multimodal and has discontinuous slope, and is hence difficult for univariate optimizers to solve. We demonstrate a novel procedure for efficiently calculating these optima for two of our distance measure variants. We present numerical experiments demonstrating that (a) upper bounds of our distance metrics can be used to distinguish between lineages of related graphs; (b) our procedure is faster at finding the required optima, by as much as a factor of 10^3; and (c) the upper bounds satisfy the triangle inequality exactly under some assumptions and approximately under others. We also derive an upper bound for the distance between two graph products, in terms of the distance between the two pairs of factors. Additionally, we present several possible applications, including the construction of infinite "graph limits" by means of Cauchy sequences of graphs related to one another by our distance measure. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 144,732
2201.00762 | Execute Order 66: Targeted Data Poisoning for Reinforcement Learning | Data poisoning for reinforcement learning has historically focused on general performance degradation, and targeted attacks have been successful via perturbations that involve control of the victim's policy and rewards. We introduce an insidious poisoning attack for reinforcement learning which causes agent misbehavior only at specific target states - all while minimally modifying a small fraction of training observations without assuming any control over policy or reward. We accomplish this by adapting a recent technique, gradient alignment, to reinforcement learning. We test our method and demonstrate success in two Atari games of varying difficulty. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 274,061 |
1907.01968 | Depth Growing for Neural Machine Translation | While very deep neural networks have shown effectiveness for computer vision and text classification applications, how to increase the network depth of neural machine translation (NMT) models for better translation quality remains a challenging problem. Directly stacking more blocks to the NMT model results in no improvement and even reduces performance. In this work, we propose an effective two-stage approach with three specially designed components to construct deeper NMT models, which result in significant improvements over the strong Transformer baselines on WMT$14$ English$\to$German and English$\to$French translation tasks\footnote{Our code is available at \url{https://github.com/apeterswu/Depth_Growing_NMT}}. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 137,486 |
2404.16866 | Annotation-guided Protein Design with Multi-Level Domain Alignment | The core challenge of de novo protein design lies in creating proteins with specific functions or properties, guided by certain conditions. Current models attempt to generate proteins using structural and evolutionary guidance, which only provide indirect conditions concerning functions and properties. However, textual annotations of proteins, especially the annotations for protein domains, which directly describe the protein's high-level functionalities, properties, and their correlation with target amino acid sequences, remain unexplored in the context of protein design tasks. In this paper, we propose Protein-Annotation Alignment Generation, PAAG, a multi-modality protein design framework that integrates the textual annotations extracted from protein database for controllable generation in sequence space. Specifically, within a multi-level alignment module, PAAG can explicitly generate proteins containing specific domains conditioned on the corresponding domain annotations, and can even design novel proteins with flexible combinations of different kinds of annotations. Our experimental results underscore the superiority of the aligned protein representations from PAAG over 7 prediction tasks. Furthermore, PAAG demonstrates a significant increase in generation success rate (24.7% vs 4.7% in zinc finger, and 54.3% vs 22.0% in the immunoglobulin domain) in comparison to the existing model. We anticipate that PAAG will broaden the horizons of protein design by leveraging the knowledge bridging textual annotations and proteins. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 449,646
1909.10719 | Analysis of a Model for Generating Weakly Scale-free Networks | It is commonly believed that real networks are scale-free and the fraction of nodes $P(k)$ with degree $k$ satisfies the power law $P(k) \propto k^{-\gamma} \text{ for } k > k_{min} > 0$. Preferential attachment is the mechanism that has been considered responsible for such organization of these networks. In many real networks, the degree distribution below $k_{min}$ varies very slowly, to the extent of being uniform, as compared with the degree distribution for $k > k_{min}$. In this paper, we propose a model that describes this particular degree distribution for the whole range of $k>0$. We adopt a two-step approach. In the first step, at every time stamp we add a new node to the network and attach it to an existing node using the preferential attachment method. In the second step, we add edges between existing pairs of nodes with node selection based on the uniform probability distribution. Our approach generates weakly scale-free networks that closely follow the degree distribution of real-world networks. We perform comprehensive mathematical analysis of the model in the discrete domain and compare the degree distribution generated by these models with that of real-world networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 146,617
1702.03555 | Agreeing to Cross: How Drivers and Pedestrians Communicate | The contribution of this paper is twofold. The first is a novel dataset for studying behaviors of traffic participants while crossing. Our dataset contains more than 650 samples of pedestrian behaviors in various street configurations and weather conditions. These examples were selected from approx. 240 hours of driving in the city, suburban and urban roads. The second contribution is an analysis of our data from the point of view of joint attention. We identify what types of non-verbal communication cues road users use at the point of crossing, their responses, and under what circumstances the crossing event takes place. It was found that in more than 90% of the cases pedestrians gaze at the approaching cars prior to crossing in non-signalized crosswalks. The crossing action, however, depends on additional factors such as time to collision (TTC), explicit driver's reaction or structure of the crosswalk. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 68,154 |
2412.18063 | LMRPA: Large Language Model-Driven Efficient Robotic Process Automation
for OCR | This paper introduces LMRPA, a novel Large Model-Driven Robotic Process Automation (RPA) model designed to greatly improve the efficiency and speed of Optical Character Recognition (OCR) tasks. Traditional RPA platforms often suffer from performance bottlenecks when handling high-volume repetitive processes like OCR, leading to a less efficient and more time-consuming process. LMRPA allows the integration of Large Language Models (LLMs) to improve the accuracy and readability of extracted text, overcoming the challenges posed by ambiguous characters and complex text structures. Extensive benchmarks were conducted comparing LMRPA to leading RPA platforms, including UiPath and Automation Anywhere, using OCR engines like Tesseract and DocTR. The results show that LMRPA achieves superior performance, cutting processing times by up to 52\%. For instance, in Batch 2 of the Tesseract OCR task, LMRPA completed the process in 9.8 seconds, where UiPath finished in 18.1 seconds and Automation Anywhere finished in 18.7 seconds. Similar improvements were observed with DocTR, where LMRPA outperformed other automation tools conducting the same process by completing tasks in 12.7 seconds, while competitors took over 20 seconds to do the same. These findings highlight the potential of LMRPA to revolutionize OCR-driven automation processes, offering a more efficient and effective alternative solution to the existing state-of-the-art RPA models. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 520,236
cs/0605051 | A General Method for Finding Low Error Rates of LDPC Codes | This paper outlines a three-step procedure for determining the low bit error rate performance curve of a wide class of LDPC codes of moderate length. The traditional method to estimate code performance in the higher SNR region is to use a sum of the contributions of the most dominant error events to the probability of error. These dominant error events will be both code and decoder dependent, consisting of low-weight codewords as well as non-codeword events if ML decoding is not used. For even moderate length codes, it is not feasible to find all of these dominant error events with a brute force search. The proposed method provides a convenient way to evaluate very low bit error rate performance of an LDPC code without requiring knowledge of the complete error event weight spectrum or resorting to a Monte Carlo simulation. This new method can be applied to various types of decoding such as the full belief propagation version of the message passing algorithm or the commonly used min-sum approximation to belief propagation. The proposed method allows one to efficiently see error performance at bit error rates that were previously out of reach of Monte Carlo methods. This result will provide a solid foundation for the analysis and design of LDPC codes and decoders that are required to provide a guaranteed very low bit error rate performance at certain SNRs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,446 |
2401.14131 | Equivariant Manifold Neural ODEs and Differential Invariants | In this paper, we develop a manifestly geometric framework for equivariant manifold neural ordinary differential equations (NODEs) and use it to analyse their modelling capabilities for symmetric data. First, we consider the action of a Lie group $G$ on a smooth manifold $M$ and establish the equivalence between equivariance of vector fields, symmetries of the corresponding Cauchy problems, and equivariance of the associated NODEs. We also propose a novel formulation, based on Lie theory for symmetries of differential equations, of the equivariant manifold NODEs in terms of the differential invariants of the action of $G$ on $M$, which provides an efficient parameterisation of the space of equivariant vector fields in a way that is agnostic to both the manifold $M$ and the symmetry group $G$. Second, we construct augmented manifold NODEs, through embeddings into flows on the tangent bundle $TM$, and show that they are universal approximators of diffeomorphisms on any connected $M$. Furthermore, we show that universality persists in the equivariant case and that the augmented equivariant manifold NODEs can be incorporated into the geometric framework using higher-order differential invariants. Finally, we consider the induced action of $G$ on different fields on $M$ and show how it can be used to generalise previous work, on, e.g., continuous normalizing flows, to equivariant models in any geometry. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 423,982 |
1508.07503 | Two-state Markov-chain Poisson nature of individual cellphone call
statistics | Humans are heterogeneous and the behaviors of individuals could be different from those at the population level. We conduct an in-depth study of the temporal patterns of cellphone conversation activities of 73'339 anonymous cellphone users with the same truncated Weibull distribution of inter-call durations. We find that the individual call events exhibit a pattern of bursts, in which high activity periods are alternated with low activity periods. Surprisingly, the number of events in high activity periods is found to conform to a power-law distribution at the population level, but follow an exponential distribution at the individual level, which is a hallmark of absence of memory in individual call activities. Such an exponential distribution is also observed for the number of events in low activity periods. Together with the exponential distributions of inter-call durations within bursts and of the intervals between consecutive bursts, we demonstrate that the individual call activities are driven by two independent Poisson processes, which can be combined within a minimal model in terms of a two-state first-order Markov chain giving very good agreement with the empirical distributions using the parameters estimated from real data for about half of the individuals in our sample. By measuring directly the distributions of call rates across the population, which exhibit power-law tails, we explain the difference with previous population level studies, purporting the existence of power-law distributions, via the "Superposition of Distributions" mechanism: The superposition of many exponential distributions of activities with a power-law distribution of their characteristic scales leads to a power-law distribution of the activities at the population level. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 46,409
2302.05496 | MaskSketch: Unpaired Structure-guided Masked Image Generation | Recent conditional image generation methods produce images of remarkable diversity, fidelity and realism. However, the majority of these methods allow conditioning only on labels or text prompts, which limits their level of control over the generation result. In this paper, we introduce MaskSketch, an image generation method that allows spatial conditioning of the generation result using a guiding sketch as an extra conditioning signal during sampling. MaskSketch utilizes a pre-trained masked generative transformer, requiring no model training or paired supervision, and works with input sketches of different levels of abstraction. We show that intermediate self-attention maps of a masked generative transformer encode important structural information of the input image, such as scene layout and object shape, and we propose a novel sampling method based on this observation to enable structure-guided generation. Our results show that MaskSketch achieves high image realism and fidelity to the guiding structure. Evaluated on standard benchmark datasets, MaskSketch outperforms state-of-the-art methods for sketch-to-image translation, as well as unpaired image-to-image translation approaches. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 345,062 |
1507.07075 | A Study of Morphological Filtering Using Graph and Hypergraphs | Mathematical morphology (MM) helps to describe and analyze shapes using set theory. MM can be effectively applied to binary images, which are treated as sets. The basic morphological operators defined here can be used as an effective tool in image processing. Morphological operators have also been developed based on graphs and hypergraphs. These operators have found better performance and applications in image processing. Bino et al. [8], [9] developed the theory of morphological operators on hypergraphs. A hypergraph structure is considered and the basic morphological operation erosion/dilation is defined. Several new operators, opening/closing and filtering, are also defined on the hypergraphs. Hypergraph-based filtering has shown comparatively better performance than graph-based morphological filters. In this paper we evaluate the effectiveness of hypergraph-based ASF on binary images. Experimental results show that hypergraph-based ASF filters have outperformed graph-based ASF. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 45,441
2412.08835 | Grothendieck Graph Neural Networks Framework: An Algebraic Platform for
Crafting Topology-Aware GNNs | Due to the structural limitations of Graph Neural Networks (GNNs), in particular with respect to conventional neighborhoods, alternative aggregation strategies have recently been investigated. This paper investigates graph structure in message passing, aimed to incorporate topological characteristics. While the simplicity of neighborhoods remains alluring, we propose a novel perspective by introducing the concept of 'cover' as a generalization of neighborhoods. We design the Grothendieck Graph Neural Networks (GGNN) framework, offering an algebraic platform for creating and refining diverse covers for graphs. This framework translates covers into matrix forms, such as the adjacency matrix, expanding the scope of designing GNN models based on desired message-passing strategies. Leveraging algebraic tools, GGNN facilitates the creation of models that outperform traditional approaches. Based on the GGNN framework, we propose Sieve Neural Networks (SNN), a new GNN model that leverages the notion of sieves from category theory. SNN demonstrates outstanding performance in experiments, particularly on benchmarks designed to test the expressivity of GNNs, and exemplifies the versatility of GGNN in generating novel architectures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 516,240 |
1401.3995 | $Y$-$\Delta$ Product in 3-Way $\Delta$ and Y-Channels for Cyclic
Interference and Signal Alignment | In a full-duplex 3-way $\Delta$ channel, three transceivers communicate to each other, so that a number of six messages is exchanged. In a $Y$-channel, however, these three transceivers are connected to an intermediate full-duplex relay. Loop-back self-interference is suppressed perfectly. The relay forwards network-coded messages to their dedicated users by means of interference alignment (IA) and signal alignment. A conceptual channel model with cyclic shifts described by a polynomial ring is considered for these two related channels. The maximally achievable rates in terms of the degrees of freedom measure are derived. We observe that the Y-channel and the 3-way $\Delta$ channel provide a $Y$-$\Delta$ product relationship. Moreover, we briefly discuss how this analysis relates to spatial IA and MIMO IA. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,028 |
1803.11347 | Learning to Adapt in Dynamic, Real-World Environments Through
Meta-Reinforcement Learning | Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time. Given that it is impractical to train separate policies to accommodate all situations the agent may see in the real world, this work proposes to learn how to quickly and effectively adapt online to new tasks. To enable sample-efficient learning, we consider learning online adaptation in the context of model-based reinforcement learning. Our approach uses meta-learning to train a dynamics model prior such that, when combined with recent data, this prior can be rapidly adapted to the local context. Our experiments demonstrate online adaptation for continuous control tasks on both simulated and real-world agents. We first show simulated agents adapting their behavior online to novel terrains, crippled body parts, and highly-dynamic environments. We also illustrate the importance of incorporating online adaptation into autonomous agents that operate in the real world by applying our method to a real dynamic legged millirobot. We demonstrate the agent's learned ability to quickly adapt online to a missing leg, adjust to novel terrains and slopes, account for miscalibration or errors in pose estimation, and compensate for pulling payloads. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 93,875 |
1111.6116 | AstroDAbis: Annotations and Cross-Matches for Remote Catalogues | Astronomers are good at sharing data, but poorer at sharing knowledge. Almost all astronomical data ends up in open archives, and access to these is being simplified by the development of the global Virtual Observatory (VO). This is a great advance, but the fundamental problem remains that these archives contain only basic observational data, whereas all the astrophysical interpretation of that data -- which source is a quasar, which a low-mass star, and which an image artefact -- is contained in journal papers, with very little linkage back from the literature to the original data archives. It is therefore currently impossible for an astronomer to pose a query like "give me all sources in this data archive that have been identified as quasars" and this limits the effective exploitation of these archives, as the user of an archive has no direct means of taking advantage of the knowledge derived by its previous users. The AstroDAbis service aims to address this, in a prototype service enabling astronomers to record annotations and cross-identifications in the AstroDAbis service, annotating objects in other catalogues. We have deployed two interfaces to the annotations, namely one astronomy-specific one using the TAP protocol, and a second exploiting generic Linked Open Data (LOD) and RDF techniques. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 13,183
1706.09767 | Speaker Identification in each of the Neutral and Shouted Talking
Environments based on Gender-Dependent Approach Using SPHMMs | It is well known that speaker identification performs extremely well in neutral talking environments; however, the identification performance declines sharply in shouted talking environments. This work aims at proposing, implementing and testing a new approach to enhance the declined performance in the shouted talking environments. The new proposed approach is based on gender-dependent speaker identification using Suprasegmental Hidden Markov Models (SPHMMs) as classifiers. This proposed approach has been tested on two different and separate speech databases: our collected database and the Speech Under Simulated and Actual Stress (SUSAS) database. The results of this work show that gender-dependent speaker identification based on SPHMMs outperforms gender-independent speaker identification based on the same models and gender-dependent speaker identification based on Hidden Markov Models (HMMs) by about 6% and 8%, respectively. The results obtained based on the proposed approach are close to those obtained in subjective evaluation by human judges. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 76,194
2002.09907 | Performance Analysis of Intelligent Reflecting Surface Assisted NOMA
Networks | Intelligent reflecting surface (IRS) is a promising technology to enhance the coverage and performance of wireless networks. We consider the application of IRS to non-orthogonal multiple access (NOMA), where a base station transmits superposed signals to multiple users by the virtue of an IRS. The performance of an IRS-assisted NOMA network with imperfect successive interference cancellation (ipSIC) and perfect successive interference cancellation (pSIC) is investigated by invoking a 1-bit coding scheme. In particular, we derive new exact and asymptotic expressions for both outage probability and ergodic rate of the m-th user with ipSIC/pSIC. Based on analytical results, the diversity order of the m-th user with pSIC is in connection with the number of reflecting elements and channel ordering. The high signal-to-noise ratio (SNR) slope of ergodic rate for the $m$-th user is obtained. The throughput and energy efficiency of non-orthogonal users for IRS-NOMA are discussed both in delay-limited and delay-tolerant transmission modes. Additionally, we derive new exact expressions of outage probability and ergodic rate for IRS-assisted orthogonal multiple access (IRS-OMA). Numerical results are presented to substantiate our analyses and demonstrate that: i) The outage behaviors of IRS-NOMA are superior to those of IRS-OMA and relaying schemes; ii) As the number of reflecting elements increases, IRS-NOMA is capable of achieving enhanced outage performance; and iii) The M-th user has a larger ergodic rate compared to IRS-OMA and benchmarks. However, the ergodic performance of the $m$-th user exceeds relaying schemes in the low SNR regime. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 165,226
1907.00835 | UltraSuite: A Repository of Ultrasound and Acoustic Data from Child
Speech Therapy Sessions | We introduce UltraSuite, a curated repository of ultrasound and acoustic data, collected from recordings of child speech therapy sessions. This release includes three data collections, one from typically developing children and two from children with speech sound disorders. In addition, it includes a set of annotations, some manual and some automatically produced, and software tools to process, transform and visualise the data. | false | false | true | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 137,156 |
2109.02394 | Less is More: Lighter and Faster Deep Neural Architecture for Tomato
Leaf Disease Classification | To ensure global food security and the overall profit of stakeholders, the importance of correctly detecting and classifying plant diseases is paramount. In this connection, the emergence of deep learning-based image classification has introduced a substantial number of solutions. However, the applicability of these solutions in low-end devices requires fast, accurate, and computationally inexpensive systems. This work proposes a lightweight transfer learning-based approach for detecting diseases from tomato leaves. It utilizes an effective preprocessing method to enhance the leaf images with illumination correction for improved classification. Our system extracts features using a combined model consisting of a pretrained MobileNetV2 architecture and a classifier network for effective prediction. Traditional augmentation approaches are replaced by runtime augmentation to avoid data leakage and address the class imbalance issue. Evaluation on tomato leaf images from the PlantVillage dataset shows that the proposed architecture achieves 99.30% accuracy with a model size of 9.60MB and 4.87M floating-point operations, making it a suitable choice for real-life applications in low-end devices. Our codes and models are available at https://github.com/redwankarimsony/project-tomato. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 253,733 |
2412.15166 | Human-Humanoid Robots Cross-Embodiment Behavior-Skill Transfer Using
Decomposed Adversarial Learning from Demonstration | Humanoid robots are envisioned as embodied intelligent agents capable of performing a wide range of human-level loco-manipulation tasks, particularly in scenarios requiring strenuous and repetitive labor. However, learning these skills is challenging due to the high degrees of freedom of humanoid robots, and collecting sufficient training data for humanoid is a laborious process. Given the rapid introduction of new humanoid platforms, a cross-embodiment framework that allows generalizable skill transfer is becoming increasingly critical. To address this, we propose a transferable framework that reduces the data bottleneck by using a unified digital human model as a common prototype and bypassing the need for re-training on every new robot platform. The model learns behavior primitives from human demonstrations through adversarial imitation, and the complex robot structures are decomposed into functional components, each trained independently and dynamically coordinated. Task generalization is achieved through a human-object interaction graph, and skills are transferred to different robots via embodiment-specific kinematic motion retargeting and dynamic fine-tuning. Our framework is validated on five humanoid robots with diverse configurations, demonstrating stable loco-manipulation and highlighting its effectiveness in reducing data requirements and increasing the efficiency of skill transfer across platforms. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 518,963 |
1610.01741 | Combining Generative and Discriminative Neural Networks for Sleep Stages
Classification | Sleep stages pattern provides important clues in diagnosing the presence of sleep disorder. By analyzing sleep stages pattern and extracting its features from EEG, EOG, and EMG signals, we can classify sleep stages. This study presents a novel classification model for predicting sleep stages with a high accuracy. The main idea is to combine the generative capability of Deep Belief Network (DBN) with a discriminative ability and sequence pattern recognizing capability of Long Short-term Memory (LSTM). We use DBN that is treated as an automatic higher level features generator. The input to DBN is 28 "handcrafted" features as used in previous sleep stages studies. We compared our method with other techniques which combined DBN with Hidden Markov Model (HMM). In this study, we exploit the sequence or time series characteristics of sleep dataset. To the best of our knowledge, most of the present sleep analysis from polysomnogram relies only on single instanced label (nonsequence) for classification. In this study, we used two datasets: an open data set that is treated as a benchmark; the other dataset is our sleep stages dataset (available for download) to verify the results further. Our experiments showed that the combination of DBN with LSTM gives better overall accuracy 98.75\% (Fscore=0.9875) for benchmark dataset and 98.94\% (Fscore=0.9894) for MKG dataset. This result is better than the state of the art of sleep stages classification that was 91.31\%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 62,010
2309.06382 | Ensemble Mask Networks | Can an $\mathbb{R}^n\rightarrow \mathbb{R}^n$ feedforward network learn matrix-vector multiplication? This study introduces two mechanisms - flexible masking to take matrix inputs, and a unique network pruning to respect the mask's dependency structure. Networks can approximate fixed operations such as matrix-vector multiplication $\phi(A,x) \rightarrow Ax$, motivating the mechanisms introduced with applications towards litmus-testing dependencies or interaction order in graph-based models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 391,404 |
2112.01890 | Fast Direct Stereo Visual SLAM | We propose a novel approach for fast and accurate stereo visual Simultaneous Localization and Mapping (SLAM) independent of feature detection and matching. We extend monocular Direct Sparse Odometry (DSO) to a stereo system by optimizing the scale of the 3D points to minimize photometric error for the stereo configuration, which yields a computationally efficient and robust method compared to conventional stereo matching. We further extend it to a full SLAM system with loop closure to reduce accumulated errors. With the assumption of forward camera motion, we imitate a LiDAR scan using the 3D points obtained from the visual odometry and adapt a LiDAR descriptor for place recognition to facilitate more efficient detection of loop closures. Afterward, we estimate the relative pose using direct alignment by minimizing the photometric error for potential loop closures. Optionally, further improvement over direct alignment is achieved by using the Iterative Closest Point (ICP) algorithm. Lastly, we optimize a pose graph to improve SLAM accuracy globally. By avoiding feature detection or matching in our SLAM system, we ensure high computational efficiency and robustness. Thorough experimental validations on public datasets demonstrate its effectiveness compared to the state-of-the-art approaches. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 269,657 |
1202.2187 | Museum: Multidimensional web page segment evaluation model | The evaluation of a web page with respect to a query is a vital task in the web information retrieval domain. This paper proposes the evaluation of a web page as a bottom-up process from the segment level to the page level. A model for evaluating the relevancy is proposed incorporating six different dimensions. An algorithm for evaluating the segments of a web page, using the above mentioned six dimensions, is proposed. The benefits of fine-graining the evaluation process to the segment level instead of the page level are explored. The proposed model can be incorporated for various tasks like web page personalization, result re-ranking, mobile device page rendering, etc. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 14,257
2105.03869 | Trajectory Prediction for Autonomous Driving with Topometric Map | State-of-the-art autonomous driving systems rely on high definition (HD) maps for localization and navigation. However, building and maintaining HD maps is time-consuming and expensive. Furthermore, HD maps assume a structured environment, such as the existence of major roads and lanes, which is not present in rural areas. In this work, we propose an end-to-end transformer network based approach for map-less autonomous driving. The proposed model takes raw LiDAR data and a noisy topometric map as input and produces a precise local trajectory for navigation. We demonstrate the effectiveness of our method on real-world driving data, including both urban and rural areas. The experimental results show that the proposed method outperforms state-of-the-art multimodal methods and is robust to perturbations of the topometric map. The code of the proposed method is publicly available at \url{https://github.com/Jiaolong/trajectory-prediction}. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 234,303
1909.05189 | ORES: Lowering Barriers with Participatory Machine Learning in Wikipedia | Algorithmic systems---from rule-based bots to machine learning classifiers---have a long history of supporting the essential work of content moderation and other curation work in peer production projects. From counter-vandalism to task routing, basic machine prediction has allowed open knowledge projects like Wikipedia to scale to the largest encyclopedia in the world, while maintaining quality and consistency. However, conversations about how quality control should work and what role algorithms should play have generally been led by the expert engineers who have the skills and resources to develop and modify these complex algorithmic systems. In this paper, we describe ORES: an algorithmic scoring service that supports real-time scoring of wiki edits using multiple independent classifiers trained on different datasets. ORES decouples several activities that have typically all been performed by engineers: choosing or curating training data, building models to serve predictions, auditing predictions, and developing interfaces or automated agents that act on those predictions. This meta-algorithmic system was designed to open up socio-technical conversations about algorithms in Wikipedia to a broader set of participants. In this paper, we discuss the theoretical mechanisms of social change ORES enables and detail case studies in participatory machine learning around ORES from the 5 years since its deployment. | true | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 145,025 |
1711.08136 | Signal-Aligned Network Coding in K-User MIMO Interference Channels with Limited Receiver Cooperation | In this paper, we propose a signal-aligned network coding (SNC) scheme for K-user time-varying multiple-input multiple-output (MIMO) interference channels with limited receiver cooperation. We assume that the receivers are connected to a central processor via wired cooperation links with individual limited capacities. Our SNC scheme determines the precoding matrices of the transmitters so that the transmitted signals are aligned at each receiver. The aligned signals are then decoded into noiseless integer combinations of messages, also known as network-coded messages, by physical-layer network coding. The key idea of our scheme is to ensure that independent integer combinations of messages can be decoded at the receivers. Hence the central processor can recover the original messages of the transmitters by solving the linearly independent equations. We prove that our SNC scheme achieves full degrees of freedom (DoF) by utilizing signal alignment and physical-layer network coding. Simulation results show that our SNC scheme outperforms the compute-and-forward scheme in the finite SNR regime of the two-user and the three-user cases. The performance improvement of our SNC scheme mainly comes from efficient utilization of the signal subspaces for conveying independent linear equations of messages to the central processor. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 85,140
1405.2420 | Optimal Learners for Multiclass Problems | The fundamental theorem of statistical learning states that for binary classification problems, any Empirical Risk Minimization (ERM) learning rule has close to optimal sample complexity. In this paper we seek a generic optimal learner for multiclass prediction. We start by proving a surprising result: a generic optimal multiclass learner must be improper, namely, it must have the ability to output hypotheses which do not belong to the hypothesis class, even though it knows that all the labels are generated by some hypothesis from the class. In particular, no ERM learner is optimal. This brings back the fundamental question of "how to learn"? We give a complete answer to this question by giving a new analysis of the one-inclusion multiclass learner of Rubinstein et al (2006), showing that its sample complexity is essentially optimal. Then, we turn to study the popular hypothesis class of generalized linear classifiers. We derive optimal learners that, unlike the one-inclusion algorithm, are computationally efficient. Furthermore, we show that the sample complexity of these learners is better than the sample complexity of the ERM rule, thus settling in the negative an open question due to Collins (2005). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 32,978
2007.10055 | Morphological Skip-Gram: Using morphological knowledge to improve word representation | Natural language processing models have attracted much interest in the deep learning community. This branch of study is composed of some applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others. Word embeddings are continuous word representations; they are an essential module for those applications and are generally used as the input word representation to the deep learning models. Word2Vec and GloVe are two popular methods to learn word embeddings. They achieve good word representations; however, they learn representations with limited information because they ignore the morphological information of the words and consider only one representation vector for each word. This approach implies that Word2Vec and GloVe are unaware of the word inner structure. To mitigate this problem, the FastText model represents each word as a bag of character n-grams. Hence, each n-gram has a continuous vector representation, and the final word representation is the sum of its character n-gram vectors. Nevertheless, the use of all character n-grams of a word is a poor approach since some n-grams have no semantic relation with their words and increase the amount of potentially useless information. This approach also increases the training phase time. In this work, we propose a new method for training word embeddings, and its goal is to replace the FastText bag of character n-grams with a bag of word morphemes through the morphological analysis of the word. Thus, words with similar contexts and morphemes are represented by vectors close to each other. To evaluate our new approach, we performed intrinsic evaluations considering 15 different tasks, and the results show a competitive performance compared to FastText. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 188,157
2307.05083 | Vacaspati: A Diverse Corpus of Bangla Literature | Bangla (or Bengali) is the fifth most spoken language globally; yet, the state-of-the-art NLP in Bangla is lagging for even simple tasks such as lemmatization, POS tagging, etc. This is partly due to lack of a varied quality corpus. To alleviate this need, we build Vacaspati, a diverse corpus of Bangla literature. The literary works are collected from various websites; only those works that are publicly available without copyright violations or restrictions are collected. We believe that published literature captures the features of a language much better than newspapers, blogs or social media posts which tend to follow only a certain literary pattern and, therefore, miss out on language variety. Our corpus Vacaspati is varied from multiple aspects, including type of composition, topic, author, time, space, etc. It contains more than 11 million sentences and 115 million words. We also built a word embedding model, Vac-FT, using FastText from Vacaspati as well as trained an Electra model, Vac-BERT, using the corpus. Vac-BERT has far fewer parameters and requires only a fraction of resources compared to other state-of-the-art transformer models and yet performs either better or similar on various downstream tasks. On multiple downstream tasks, Vac-FT outperforms other FastText-based models. We also demonstrate the efficacy of Vacaspati as a corpus by showing that similar models built from other corpora are not as effective. The models are available at https://bangla.iitk.ac.in/. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 378,618 |
2308.01916 | Semi Supervised Meta Learning for Spatiotemporal Learning | We approached the goal of applying meta-learning to self-supervised masked autoencoders for spatiotemporal learning in three steps. Broadly, we seek to understand the impact of applying meta-learning to existing state-of-the-art representation learning architectures. Thus, we test spatiotemporal learning through: a meta-learning architecture only, a representation learning architecture only, and an architecture applying representation learning alongside a meta learning architecture. We utilize the Memory Augmented Neural Network (MANN) architecture to apply meta-learning to our framework. Specifically, we first experiment with applying a pre-trained MAE and fine-tuning on our small-scale spatiotemporal dataset for video reconstruction tasks. Next, we experiment with training an MAE encoder and applying a classification head for action classification tasks. Finally, we experiment with applying a pre-trained MAE and fine-tune with MANN backbone for action classification tasks. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 383,411 |
2003.00188 | Robust 6D Object Pose Estimation by Learning RGB-D Features | Accurate 6D object pose estimation is fundamental to robotic manipulation and grasping. Previous methods follow a local optimization approach which minimizes the distance between closest point pairs to handle the rotation ambiguity of symmetric objects. In this work, we propose a novel discrete-continuous formulation for rotation regression to resolve this local-optimum problem. We uniformly sample rotation anchors in SO(3), and predict a constrained deviation from each anchor to the target, as well as uncertainty scores for selecting the best prediction. Additionally, the object location is detected by aggregating point-wise vectors pointing to the 3D center. Experiments on two benchmarks: LINEMOD and YCB-Video, show that the proposed method outperforms state-of-the-art approaches. Our code is available at https://github.com/mentian/object-posenet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 166,218 |
2102.02734 | Deep learning-based synthetic-CT generation in radiotherapy and PET: a review | Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: I) To replace CT in magnetic resonance (MR)-based treatment planning. II) Facilitate cone-beam computed tomography (CBCT)-based image-guided adaptive radiotherapy. III) Derive attenuation maps for the correction of positron emission tomography (PET). Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarising the achievements. Lastly, the statistics of all the cited works from various aspects were analysed, revealing the popularity and future trends, and the potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 218,505
2005.04063 | TSDM: Tracking by SiamRPN++ with a Depth-refiner and a Mask-generator | In generic object tracking, depth (D) information provides informative cues for foreground-background separation and target bounding box regression. However, so far, few trackers have used depth information to play the aforementioned important role, due to the lack of a suitable model. In this paper, an RGB-D tracker named TSDM is proposed, which is composed of a Mask-generator (M-g), SiamRPN++ and a Depth-refiner (D-r). The M-g generates the background masks, and updates them as the target 3D position changes. The D-r optimizes the target bounding box estimated by SiamRPN++, based on the spatial depth distribution difference between the target and the surrounding background. Extensive evaluation on the Princeton Tracking Benchmark and the Visual Object Tracking challenge shows that our tracker outperforms the state-of-the-art by a large margin while achieving 23 FPS. In addition, a light-weight variant can run at 31 FPS and thus it is practical for real world applications. Code and models of TSDM are available at https://github.com/lql-team/TSDM. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 176,341
2303.12963 | Forecast-Aware Model Driven LSTM | Poor air quality can have a significant impact on human health. The National Oceanic and Atmospheric Administration (NOAA) air quality forecasting guidance is challenged by the increasing presence of extreme air quality events due to extreme weather events such as wildfires and heatwaves. These extreme air quality events further affect human health. Traditional methods used to correct model bias make assumptions about linearity and the underlying distribution. Extreme air quality events tend to occur without a strong signal leading up to the event and this behavior tends to cause existing methods to either under or over compensate for the bias. Deep learning holds promise for air quality forecasting in the presence of extreme air quality events due to its ability to generalize and learn nonlinear problems. However, in the presence of these anomalous air quality events, standard deep network approaches that use a single network for generalizing to future forecasts, may not always provide the best performance even with a full feature-set including geography and meteorology. In this work we describe a method that combines unsupervised learning and a forecast-aware bi-directional LSTM network to perform bias correction for operational air quality forecasting using AirNow station data for ozone and PM2.5 in the continental US. Using an unsupervised clustering method trained on station geographical features such as latitude and longitude, urbanization, and elevation, the learned clusters direct training by partitioning the training data for the LSTM networks. LSTMs are forecast-aware and implemented using a unique way to perform learning forward and backwards in time across forecasting days. When comparing the RMSE of the forecast model to the RMSE of the bias corrected model, the bias corrected model shows significant improvement (27\% lower RMSE for ozone) over the base forecast. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 353,476
2401.15068 | Pairing Orthographically Variant Literary Words to Standard Equivalents Using Neural Edit Distance Models | We present a novel corpus consisting of orthographically variant words found in works of 19th century U.S. literature annotated with their corresponding "standard" word pair. We train a set of neural edit distance models to pair these variants with their standard forms, and compare the performance of these models to the performance of a set of neural edit distance models trained on a corpus of orthographic errors made by L2 English learners. Finally, we analyze the relative performance of these models in the light of different negative training sample generation strategies, and offer concluding remarks on the unique challenge literary orthographic variation poses to string pairing methodologies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 424,309
1909.01812 | Learning Distributions Generated by One-Layer ReLU Networks | We consider the problem of estimating the parameters of a $d$-dimensional rectified Gaussian distribution from i.i.d. samples. A rectified Gaussian distribution is defined by passing a standard Gaussian distribution through a one-layer ReLU neural network. We give a simple algorithm to estimate the parameters (i.e., the weight matrix and bias vector of the ReLU neural network) up to an error $\epsilon||W||_F$ using $\tilde{O}(1/\epsilon^2)$ samples and $\tilde{O}(d^2/\epsilon^2)$ time (log factors are ignored for simplicity). This implies that we can estimate the distribution up to $\epsilon$ in total variation distance using $\tilde{O}(\kappa^2d^2/\epsilon^2)$ samples, where $\kappa$ is the condition number of the covariance matrix. Our only assumption is that the bias vector is non-negative. Without this non-negativity assumption, we show that estimating the bias vector within any error requires the number of samples at least exponential in the infinity norm of the bias vector. Our algorithm is based on the key observation that vector norms and pairwise angles can be estimated separately. We use a recent result on learning from truncated samples. We also prove two sample complexity lower bounds: $\Omega(1/\epsilon^2)$ samples are required to estimate the parameters up to error $\epsilon$, while $\Omega(d/\epsilon^2)$ samples are necessary to estimate the distribution up to $\epsilon$ in total variation distance. The first lower bound implies that our algorithm is optimal for parameter estimation. Finally, we show an interesting connection between learning a two-layer generative model and non-negative matrix factorization. Experimental results are provided to support our analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 144,017 |
2102.07076 | Tight lower bounds for Dynamic Time Warping | Dynamic Time Warping (DTW) is a popular similarity measure for aligning and comparing time series. Due to DTW's high computation time, lower bounds are often employed to screen poor matches. Many alternative lower bounds have been proposed, providing a range of different trade-offs between tightness and computational efficiency. LB Keogh provides a useful trade-off in many applications. Two recent lower bounds, LB Improved and LB Enhanced, are substantially tighter than LB Keogh. All three have the same worst case computational complexity - linear with respect to series length and constant with respect to window size. We present four new DTW lower bounds in the same complexity class. LB Petitjean is substantially tighter than LB Improved, with only modest additional computational overhead. LB Webb is more efficient than LB Improved, while often providing a tighter bound. LB Webb is always tighter than LB Keogh. The parameter free LB Webb is usually tighter than LB Enhanced. A parameterized variant, LB Webb Enhanced, is always tighter than LB Enhanced. A further variant, LB Webb*, is useful for some constrained distance functions. In extensive experiments, LB Webb proves to be very effective for nearest neighbor search. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 219,981 |
1502.05943 | Refining Adverse Drug Reactions using Association Rule Mining for Electronic Healthcare Data | Side effects of prescribed medications are a common occurrence. Electronic healthcare databases present the opportunity to identify new side effects efficiently but currently the methods are limited due to confounding (i.e. when an association between two variables is identified due to them both being associated to a third variable). In this paper we propose a proof of concept method that learns common associations and uses this knowledge to automatically refine side effect signals (i.e. exposure-outcome associations) by removing instances of the exposure-outcome associations that are caused by confounding. This leaves the signal instances that are most likely to correspond to true side effect occurrences. We then calculate a novel measure termed the confounding-adjusted risk value, a more accurate absolute risk value of a patient experiencing the outcome within 60 days of the exposure. Tentative results suggest that the method works. For the four signals (i.e. exposure-outcome associations) investigated we are able to correctly filter the majority of exposure-outcome instances that were unlikely to correspond to true side effects. The method is likely to improve when tuning the association rule mining parameters for specific health outcomes. This paper shows that it may be possible to filter signals at a patient level based on association rules learned from considering patients' medical histories. However, additional work is required to develop a way to automate the tuning of the method's parameters. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 40,432
1708.03105 | Location Name Extraction from Targeted Text Streams using Gazetteer-based Statistical Language Models | Extracting location names from informal and unstructured social media data requires the identification of referent boundaries and partitioning compound names. Variability, particularly systematic variability in location names (Carroll, 1983), challenges the identification task. Some of this variability can be anticipated as operations within a statistical language model, in this case drawn from gazetteers such as OpenStreetMap (OSM), Geonames, and DBpedia. This permits evaluation of an observed n-gram in Twitter targeted text as a legitimate location name variant from the same location-context. Using n-gram statistics and location-related dictionaries, our Location Name Extraction tool (LNEx) handles abbreviations and automatically filters and augments the location names in gazetteers (handling name contractions and auxiliary contents) to help detect the boundaries of multi-word location names and thereby delimit them in texts. We evaluated our approach on 4,500 event-specific tweets from three targeted streams to compare the performance of LNEx against that of ten state-of-the-art taggers that rely on standard semantic, syntactic and/or orthographic features. LNEx improved the average F-Score by 33-179%, outperforming all taggers. Further, LNEx is capable of stream processing. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 78,718
2202.13347 | A Robust Multimodal Remote Sensing Image Registration Method and System Using Steerable Filters with First- and Second-order Gradients | Co-registration of multimodal remote sensing images is still an ongoing challenge because of nonlinear radiometric differences (NRD) and significant geometric distortions (e.g., scale and rotation changes) between these images. In this paper, a robust matching method based on the Steerable filters is proposed consisting of two critical steps. First, to address severe NRD, a novel structural descriptor named the Steerable Filters of first- and second-Order Channels (SFOC) is constructed, which combines the first- and second-order gradient information by using the steerable filters with a multi-scale strategy to depict more discriminative structure features of images. Then, a fast similarity measure called Fast Normalized Cross-Correlation (Fast-NCCSFOC) is established, which employs the Fast Fourier Transform technique and the integral image to improve the matching efficiency. Furthermore, to achieve reliable registration performance, a coarse-to-fine multimodal registration system is designed consisting of two pivotal modules. The local coarse registration is first conducted by involving both detection of interest points (IPs) and local geometric correction, which effectively utilizes the prior georeferencing information of RS images to address global geometric distortions. In the fine registration stage, the proposed SFOC is used to resist significant NRD, and to detect control points between multimodal images by a template matching scheme. The performance of the proposed matching method has been evaluated with many different kinds of multimodal RS images. The results show its superior matching performance compared with the state-of-the-art methods. Moreover, the designed registration system also outperforms the popular commercial software in both registration accuracy and computational efficiency. Our system is available at https://github.com/yeyuanxin110. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 282,571
cs/0603013 | On the MacWilliams Identity for Convolutional Codes | The adjacency matrix associated with a convolutional code collects in a detailed manner information about the weight distribution of the code. A MacWilliams Identity Conjecture, stating that the adjacency matrix of a code fully determines the adjacency matrix of the dual code, will be formulated, and an explicit formula for the transformation will be stated. The formula involves the MacWilliams matrix known from complete weight enumerators of block codes. The conjecture will be proven for the class of convolutional codes where either the code itself or its dual does not have Forney indices bigger than one. For the general case the conjecture is backed up by many examples, and a weaker version will be established. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,307 |
1807.10587 | Finding any Waldo: zero-shot invariant and efficient visual search | Searching for a target object in a cluttered scene constitutes a fundamental challenge in daily vision. Visual search must be selective enough to discriminate the target from distractors, invariant to changes in the appearance of the target, efficient to avoid exhaustive exploration of the image, and must generalize to locate novel target objects with zero-shot training. Previous work has focused on searching for perfect matches of a target after extensive category-specific training. Here we show for the first time that humans can efficiently and invariantly search for natural objects in complex scenes. To gain insight into the mechanisms that guide visual search, we propose a biologically inspired computational model that can locate targets without exhaustive sampling and generalize to novel objects. The model provides an approximation to the mechanisms integrating bottom-up and top-down signals during search in natural scenes. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 103,979 |
2306.05440 | Generalized four person hat game | This paper studies Ebert's hat problem with four players and two colors, where the probabilities of the colors may be different for each player. Our goal is to maximize the probability of winning the game and to describe winning strategies. We use the new concept of an adequate set. The construction of adequate sets is independent of the underlying probabilities, and we can use this fact in the analysis of our general case. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 372,204
2409.18254 | Evaluation of Cluster Id Assignment Schemes with ABCDE | A cluster id assignment scheme labels each cluster of a clustering with a distinct id. The goal of id assignment is semantic id stability, which means that, whenever possible, a cluster for the same underlying concept as that of a historical cluster should ideally receive the same id as the historical cluster. Semantic id stability allows the users of a clustering to refer to a concept's cluster with an id that is stable across clusterings/time. This paper treats the problem of evaluating the relative merits of id assignment schemes. In particular, it considers a historical clustering with id assignments, and a new clustering with ids assigned by a baseline and an experiment. It produces metrics that characterize both the magnitude and the quality of the id assignment diffs between the baseline and the experiment. That happens by transforming the problem of cluster id assignment into a problem of cluster membership, and evaluating it with ABCDE. ABCDE is a sophisticated and scalable technique for evaluating differences in cluster membership in real-world applications, where billions of items are grouped into millions of clusters, and some items are more important than others. The paper also describes several generalizations to the basic evaluation setup for id assignment schemes. For example, it is fairly straightforward to evaluate changes that simultaneously mutate cluster memberships and cluster ids. The ideas are generously illustrated with examples. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 492,159 |
2310.14796 | A Novel Transfer Learning Method Utilizing Acoustic and Vibration Signals for Rotating Machinery Fault Diagnosis | Fault diagnosis of rotating machinery plays an important role in the safety and stability of modern industrial systems. However, there is a distribution discrepancy between training data and data from real-world operation scenarios, which causes a decrease in the performance of existing systems. This paper proposes a transfer learning based method utilizing acoustic and vibration signals to address this distribution discrepancy. We designed the acoustic and vibration feature fusion MAVgram to offer richer and more reliable information about faults, coordinating with a DNN-based classifier to obtain a more effective diagnosis representation. The backbone was pre-trained and then fine-tuned to obtain excellent performance on the target task. Experimental results demonstrate the effectiveness of the proposed method, which achieved improved performance compared to STgram-MFN. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 402,025
2208.13796 | Inferring subhalo effective density slopes from strong lensing observations with neural likelihood-ratio estimation | Strong gravitational lensing has emerged as a promising approach for probing dark matter models on sub-galactic scales. Recent work has proposed the subhalo effective density slope as a more reliable observable than the commonly used subhalo mass function. The subhalo effective density slope is a measurement independent of assumptions about the underlying density profile and can be inferred for individual subhalos through traditional sampling methods. To go beyond individual subhalo measurements, we leverage recent advances in machine learning and introduce a neural likelihood-ratio estimator to infer an effective density slope for populations of subhalos. We demonstrate that our method is capable of harnessing the statistical power of multiple subhalos (within and across multiple images) to distinguish between characteristics of different subhalo populations. The computational efficiency warranted by the neural likelihood-ratio estimator over traditional sampling enables statistical studies of dark matter perturbers and is particularly useful as we expect an influx of strong lensing systems from upcoming surveys. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 315,142
2405.12309 | Accurate Learning of Equivariant Quantum Systems from a Single Ground State | Predicting properties across system parameters is an important task in quantum physics, with applications ranging from molecular dynamics to variational quantum algorithms. Recently, provably efficient algorithms to solve this task for ground states within a gapped phase were developed. Here we dramatically improve the efficiency of these algorithms by showing how to learn properties of all ground states for systems with periodic boundary conditions from a single ground state sample. We prove that the prediction error tends to zero in the thermodynamic limit and numerically verify the results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 455,472
1401.3854 | A Constraint Satisfaction Framework for Executing Perceptions and Actions in Diagrammatic Reasoning | Diagrammatic reasoning (DR) is pervasive in human problem solving as a powerful adjunct to symbolic reasoning based on language-like representations. The research reported in this paper is a contribution to building a general purpose DR system as an extension to a SOAR-like problem solving architecture. The work is in a framework in which DR is modeled as a process where subtasks are solved, as appropriate, either by inference from symbolic representations or by interaction with a diagram, i.e., perceiving specified information from a diagram or modifying/creating objects in a diagram in specified ways according to problem solving needs. The perceptions and actions in most DR systems built so far are hand-coded for the specific application, even when the rest of the system is built using the general architecture. The absence of a general framework for executing perceptions/actions poses a major hindrance to using them opportunistically -- the essence of open-ended search in problem solving. Our goal is to develop a framework for executing a wide variety of specified perceptions and actions across tasks/domains without human intervention. We observe that the domain/task-specific visual perceptions/actions can be transformed into domain/task-independent spatial problems. We specify a spatial problem as a quantified constraint satisfaction problem in the real domain using an open-ended vocabulary of properties, relations and actions involving three kinds of diagrammatic objects -- points, curves, regions. Solving a spatial problem from this specification requires computing the equivalent simplified quantifier-free expression, the complexity of which is inherently doubly exponential. We represent objects as configuration of simple elements to facilitate decomposition of complex problems into simpler and similar subproblems. We show that, if the symbolic solution to a subproblem can be expressed concisely, quantifiers can be eliminated from spatial problems in low-order polynomial time using similar previously solved subproblems. This requires determining the similarity of two problems, the existence of a mapping between them computable in polynomial time, and designing a memory for storing previously solved problems so as to facilitate search. The efficacy of the idea is shown by time complexity analysis. We demonstrate the proposed approach by executing perceptions and actions involved in DR tasks in two army applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,969
2103.14474 | Composable Learning with Sparse Kernel Representations | We present a reinforcement learning algorithm for learning sparse non-parametric controllers in a Reproducing Kernel Hilbert Space. We improve the sample complexity of this approach by imposing a structure of the state-action function through a normalized advantage function (NAF). This representation of the policy enables efficiently composing multiple learned models without additional training samples or interaction with the environment. We demonstrate the performance of this algorithm on learning obstacle-avoidance policies in multiple simulations of a robot equipped with a laser scanner while navigating in a 2D environment. We apply the composition operation to various policy combinations and test them to show that the composed policies retain the performance of their components. We also transfer the composed policy directly to a physical platform operating in an arena with obstacles in order to demonstrate a degree of generalization. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 226,873 |
2204.07780 | Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks | Despite the exciting performance, Transformer is criticized for its excessive parameters and computation cost. However, compressing Transformer remains an open problem due to the internal complexity of its layer designs, i.e., Multi-Head Attention (MHA) and Feed-Forward Network (FFN). To address this issue, we introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed LW-Transformer. LW-Transformer applies Group-wise Transformation to reduce both the parameters and computations of Transformer, while also preserving its two main properties, i.e., the efficient attention modeling on diverse subspaces of MHA, and the expanding-scaling feature transformation of FFN. We apply LW-Transformer to a set of Transformer-based networks, and quantitatively measure them on three vision-and-language tasks and six benchmark datasets. Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks. To examine the generalization ability, we also apply our optimization strategy to a recently proposed image Transformer called Swin-Transformer for image classification, where the effectiveness can also be confirmed. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 291,847
2103.13107 | W2WNet: a two-module probabilistic Convolutional Neural Network with embedded data cleansing functionality | Convolutional Neural Networks (CNNs) are supposed to be fed with only high-quality annotated datasets. Nonetheless, in many real-world scenarios, such high quality is very hard to obtain, and datasets may be affected by any sort of image degradation and mislabelling issues. This negatively impacts the performance of standard CNNs, both during the training and the inference phase. To address this issue we propose Wise2WipedNet (W2WNet), a new two-module Convolutional Neural Network, where a Wise module exploits Bayesian inference to identify and discard spurious images during the training, and a Wiped module takes care of the final classification while broadcasting information on the prediction confidence at inference time. The goodness of our solution is demonstrated on a number of public benchmarks addressing different image classification tasks, as well as on a real-world case study on histological image analysis. Overall, our experiments demonstrate that W2WNet is able to identify image degradation and mislabelling issues both at training and at inference time, with a positive impact on the final classification accuracy. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 226,391
2408.10333 | An LMI-based Robust Fuzzy Controller for Blood Glucose Regulation in Type 1 Diabetes | This paper presents a control algorithm for creating an artificial pancreas for type 1 diabetes, factoring in input saturation for a practical application. By utilizing the parallel distributed compensation and Takagi-Sugeno Fuzzy model, we design an optimal robust fuzzy controller. Stability conditions derived from the Lyapunov method are expressed as linear matrix inequalities, allowing for optimal controller gain selection that minimizes disturbance effects. We employ the minimal Bergman and Tolic models to represent type 1 diabetes glucose-insulin dynamics, converting them into corresponding Takagi-Sugeno fuzzy models using the sector nonlinearity approach. Simulation results demonstrate the proposed controller's effectiveness. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 481,810
2406.13365 | PPT-GNN: A Practical Pre-Trained Spatio-Temporal Graph Neural Network for Network Security | Recent works have demonstrated the potential of Graph Neural Networks (GNN) for network intrusion detection. Despite their advantages, a significant gap persists between real-world scenarios, where detection speed is critical, and existing proposals, which operate on large graphs representing several hours of traffic. This gap results in unrealistic operational conditions and impractical detection delays. Moreover, existing models do not generalize well across different networks, hampering their deployment in production environments. To address these issues, we introduce PPTGNN, a practical spatio-temporal GNN for intrusion detection. PPTGNN enables near real-time predictions, while better capturing the spatio-temporal dynamics of network attacks. PPTGNN employs self-supervised pre-training for improved performance and reduced dependency on labeled data. We evaluate PPTGNN on three public datasets and show that it significantly outperforms state-of-the-art models, such as E-ResGAT and E-GraphSAGE, with an average accuracy improvement of 10.38%. Finally, we show that a pre-trained PPTGNN can easily be fine-tuned to unseen networks with minimal labeled examples. This highlights the potential of PPTGNN as a general, large-scale pre-trained model that can effectively operate in diverse network environments. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 465,821
2312.01148 | Has Anything Changed? 3D Change Detection by 2D Segmentation Masks | As capturing devices become common, 3D scans of interior spaces are acquired on a daily basis. Through scene comparison over time, information about objects in the scene and their changes is inferred. This information is important for robots and AR and VR devices, in order to operate in an immersive virtual experience. We thus propose an unsupervised object discovery method that identifies added, moved, or removed objects without any prior knowledge of what objects exist in the scene. We model this problem as a combination of a 3D change detection and a 2D segmentation task. Our algorithm leverages generic 2D segmentation masks to refine an initial but incomplete set of 3D change detections. The initial changes, acquired through render-and-compare likely correspond to movable objects. The incomplete detections are refined through graph optimization, distilling the information of the 2D segmentation masks in the 3D space. Experiments on the 3Rscan dataset prove that our method outperforms competitive baselines, with SoTA results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 412,333 |
2106.13804 | SITTA: Single Image Texture Translation for Data Augmentation | Recent advances in data augmentation enable one to translate images by learning the mapping between a source domain and a target domain. Existing methods tend to learn the distributions by training a model on a variety of datasets, with results evaluated largely in a subjective manner. Relatively few works in this area, however, study the potential use of image synthesis methods for recognition tasks. In this paper, we propose and explore the problem of image translation for data augmentation. We first propose a lightweight yet efficient model for translating texture to augment images based on a single input of source texture, allowing for fast training and testing, referred to as Single Image Texture Translation for data Augmentation (SITTA). Then we explore the use of augmented data in long-tailed and few-shot image classification tasks. We find the proposed augmentation method and workflow is capable of translating the texture of input data into a target domain, leading to consistently improved image recognition performance. Finally, we examine how SITTA and related image translation methods can provide a basis for a data-efficient, "augmentation engineering" approach to model training. Codes are available at https://github.com/Boyiliee/SITTA. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 243,190 |