| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2303.10951 | Tracker Meets Night: A Transformer Enhancer for UAV Tracking | Most previous progress in object tracking is realized in daytime scenes with favorable illumination. State-of-the-arts can hardly carry on their superiority at night so far, thereby considerably blocking the broadening of visual tracking-related unmanned aerial vehicle (UAV) applications. To realize reliable UAV tracking at night, a spatial-channel Transformer-based low-light enhancer (namely SCT), which is trained in a novel task-inspired manner, is proposed and plugged prior to tracking approaches. To achieve semantic-level low-light enhancement targeting the high-level task, the novel spatial-channel attention module is proposed to model global information while preserving local context. In the enhancement process, SCT denoises and illuminates nighttime images simultaneously through a robust non-linear curve projection. Moreover, to provide a comprehensive evaluation, we construct a challenging nighttime tracking benchmark, namely DarkTrack2021, which contains 110 challenging sequences with over 100 K frames in total. Evaluations on both the public UAVDark135 benchmark and the newly constructed DarkTrack2021 benchmark show that the task-inspired design enables SCT with significant performance gains for nighttime UAV tracking compared with other top-ranked low-light enhancers. Real-world tests on a typical UAV platform further verify the practicability of the proposed approach. The DarkTrack2021 benchmark and the code of the proposed approach are publicly available at https://github.com/vision4robotics/SCT. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 352,646 |
1909.03723 | Machine learning for automatic construction of pseudo-realistic pediatric abdominal phantoms | Machine Learning (ML) is proving extremely beneficial in many healthcare applications. In pediatric oncology, retrospective studies that investigate the relationship between treatment and late adverse effects still rely on simple heuristics. To assess the effects of radiation therapy, treatment plans are typically simulated on phantoms, i.e., virtual surrogates of patient anatomy. Currently, phantoms are built according to reasonable, yet simple, human-designed criteria. This often results in a lack of individualization. We present a novel approach that combines imaging and ML to build individualized phantoms automatically. Given the features of a patient treated historically (only 2D radiographs available), and a database of 3D Computed Tomography (CT) imaging with organ segmentations and relative patient features, our approach uses ML to predict how to assemble a patient-specific phantom automatically. Experiments on 60 abdominal CTs of pediatric patients show that our approach constructs significantly more representative phantoms than using current phantom building criteria, in terms of location and shape of the abdomen and of two considered organs, the liver and the spleen. Among several ML algorithms considered, the Gene-pool Optimal Mixing Evolutionary Algorithm for Genetic Programming (GP-GOMEA) is found to deliver the best performing models, which are, moreover, transparent and interpretable mathematical expressions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 144,582 |
1601.04621 | Probabilistic Inference of Twitter Users' Age based on What They Follow | Twitter provides an open and rich source of data for studying human behaviour at scale and is widely used in social and network sciences. However, a major criticism of Twitter data is that demographic information is largely absent. Enhancing Twitter data with user ages would advance our ability to study social network structures, information flows and the spread of contagions. Approaches toward age detection of Twitter users typically focus on specific properties of tweets, e.g., linguistic features, which are language dependent. In this paper, we devise a language-independent methodology for determining the age of Twitter users from data that is native to the Twitter ecosystem. The key idea is to use a Bayesian framework to generalise ground-truth age information from a few Twitter users to the entire network based on what/whom they follow. Our approach scales to inferring the age of 700 million Twitter accounts with high accuracy. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 51,044 |
1710.10727 | Exact Topology and Parameter Estimation in Distribution Grids with Minimal Observability | Limited presence of nodal and line meters in distribution grids hinders their optimal operation and participation in real-time markets. In particular lack of real-time information on the grid topology and infrequently calibrated line parameters (impedances) adversely affect the accuracy of any operational power flow control. This paper suggests a novel algorithm for learning the topology of distribution grid and estimating impedances of the operational lines with minimal observational requirements - it provably reconstructs topology and impedances using voltage and injection measured only at the terminal (end-user) nodes of the distribution grid. All other (intermediate) nodes in the network may be unobserved/hidden. Furthermore no additional input (e.g., number of grid nodes, historical information on injections at hidden nodes) is needed for the learning to succeed. Performance of the algorithm is illustrated in numerical experiments on the IEEE and custom power distribution models. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 83,452 |
1410.3080 | Tree-Structure Bayesian Compressive Sensing for Video | A Bayesian compressive sensing framework is developed for video reconstruction based on the color coded aperture compressive temporal imaging (CACTI) system. By exploiting the three dimension (3D) tree structure of the wavelet and Discrete Cosine Transformation (DCT) coefficients, a Bayesian compressive sensing inversion algorithm is derived to reconstruct (up to 22) color video frames from a single monochromatic compressive measurement. Both simulated and real datasets are adopted to verify the performance of the proposed algorithm. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 36,680 |
1810.11518 | An Acceleration Scheme to The Local Directional Pattern | This study seeks to improve the running time of the Local Directional Pattern (LDP) during feature extraction using a newly proposed acceleration scheme to LDP. LDP is considered to be computationally expensive. To confirm this, the running time of LDP was compared to that of the gray level co-occurrence matrix (GLCM), where it was established that the running time for LDP was two orders of magnitude higher than that of the GLCM. In this study, the performance of the newly proposed acceleration scheme was evaluated against LDP and Local Binary Pattern (LBP) using images from the publicly available extended Cohn-Kanade (CK+) dataset. Based on our findings, the proposed acceleration scheme significantly improves the running time of the LDP by almost 3 times during feature extraction. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 111,513 |
2009.02126 | Evaluating the Impact of COVID-19 on Cyberbullying through Bayesian Trend Analysis | COVID-19's impact has extended beyond personal and global health to our social life. In terms of digital presence, it is speculated that during the pandemic, there has been a significant rise in cyberbullying. In this paper, we have examined the hypothesis of whether cyberbullying and reporting of such incidents have increased in recent times. To evaluate the speculations, we collected cyberbullying related public tweets (N=454,046) posted between January 1st, 2020 -- June 7th, 2020. A simple visual frequentist analysis ignores serial correlation and does not depict changepoints as such. To address correlation and a relatively small number of time points, Bayesian estimation of the trends is proposed for the collected data via an autoregressive Poisson model. We show that this new Bayesian method detailed in this paper can clearly show the upward trend on cyberbullying-related tweets since mid-March 2020. However, this evidence itself does not signify a rise in cyberbullying but shows a correlation of the crisis with the discussion of such incidents by individuals. Our work emphasizes a critical issue of cyberbullying and how a global crisis impacts social media abuse and provides a trend analysis model that can be utilized for social media data analysis in general. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 194,473 |
1903.05174 | Richness of Deep Echo State Network Dynamics | Reservoir Computing (RC) is a popular methodology for the efficient design of Recurrent Neural Networks (RNNs). Recently, the advantages of the RC approach have been extended to the context of multi-layered RNNs, with the introduction of the Deep Echo State Network (DeepESN) model. In this paper, we study the quality of state dynamics in progressively higher layers of DeepESNs, using tools from the areas of information theory and numerical analysis. Our experimental results on RC benchmark datasets reveal the fundamental role played by the strength of inter-reservoir connections to increasingly enrich the representations developed in higher layers. Our analysis also gives interesting insights into the possibility of effective exploitation of training algorithms based on stochastic gradient descent in the RC field. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 124,109 |
1707.09030 | A Locally Adapting Technique for Boundary Detection using Image Segmentation | Rapid growth in the field of quantitative digital image analysis is paving the way for researchers to make precise measurements about objects in an image. To compute quantities from the image such as the density of compressed materials or the velocity of a shockwave, we must determine object boundaries. Images containing regions that each have a spatial trend in intensity are of particular interest. We present a supervised image segmentation method that incorporates spatial information to locate boundaries between regions with overlapping intensity histograms. The segmentation of a pixel is determined by comparing its intensity to distributions from local, nearby pixel intensities. Because of the statistical nature of the algorithm, we use maximum likelihood estimation theory to quantify uncertainty about each boundary. We demonstrate the success of this algorithm on a radiograph of a multicomponent cylinder and on an optical image of a laser-induced shockwave, and we provide final boundary locations with associated bands of uncertainty. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 77,932 |
2206.04742 | Accelerating Asynchronous Federated Learning Convergence via Opportunistic Mobile Relaying | This paper presents a study on asynchronous Federated Learning (FL) in a mobile network setting. The majority of FL algorithms assume that communication between clients and the server is always available; however, this is not the case in many real-world systems. To address this issue, the paper explores the impact of mobility on the convergence performance of asynchronous FL. By exploiting mobility, the study shows that clients can indirectly communicate with the server through another client serving as a relay, creating additional communication opportunities. This enables clients to upload local model updates sooner or receive fresher global models. We propose a new FL algorithm, called FedMobile, that incorporates opportunistic relaying and addresses key questions such as when and how to relay. We prove that FedMobile achieves a convergence rate $O(\frac{1}{\sqrt{NT}})$, where $N$ is the number of clients and $T$ is the number of communication slots, and show that the optimal design involves an interesting trade-off on the best timing of relaying. The paper also presents an extension that considers data manipulation before relaying to reduce the cost and enhance privacy. Experiment results on a synthetic dataset and two real-world datasets verify our theoretical findings. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 301,749 |
2402.03368 | Empirical and Experimental Perspectives on Big Data in Recommendation Systems: A Comprehensive Survey | This survey paper provides a comprehensive analysis of big data algorithms in recommendation systems, addressing the lack of depth and precision in existing literature. It proposes a two-pronged approach: a thorough analysis of current algorithms and a novel, hierarchical taxonomy for precise categorization. The taxonomy is based on a tri-level hierarchy, starting with the methodology category and narrowing down to specific techniques. Such a framework allows for a structured and comprehensive classification of algorithms, assisting researchers in understanding the interrelationships among diverse algorithms and techniques. Covering a wide range of algorithms, this taxonomy first categorizes algorithms into four main analysis types: User and Item Similarity-Based Methods, Hybrid and Combined Approaches, Deep Learning and Algorithmic Methods, and Mathematical Modeling Methods, with further subdivisions into sub-categories and techniques. The paper incorporates both empirical and experimental evaluations to differentiate between the techniques. The empirical evaluation ranks the techniques based on four criteria. The experimental assessments rank the algorithms that belong to the same category, sub-category, technique, and sub-technique. Also, the paper illuminates the future prospects of big data techniques in recommendation systems, underscoring potential advancements and opportunities for further research in this field. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 426,986 |
2102.09003 | Domain Impression: A Source Data Free Domain Adaptation Method | Unsupervised Domain adaptation methods solve the adaptation problem for an unlabeled target set, assuming that the source dataset is available with all labels. However, the availability of actual source samples is not always possible in practical cases. It could be due to memory constraints, privacy concerns, and challenges in sharing data. This practical scenario creates a bottleneck in the domain adaptation problem. This paper addresses this challenging scenario by proposing a domain adaptation technique that does not need any source data. Instead of the source data, we are only provided with a classifier that is trained on the source data. Our proposed approach is based on a generative framework, where the trained classifier is used for generating samples from the source classes. We learn the joint distribution of data by using the energy-based modeling of the trained classifier. At the same time, a new classifier is also adapted for the target domain. We perform various ablation analyses under different experimental setups and demonstrate that the proposed approach achieves better results than the baseline models in this extremely novel scenario. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 220,638 |
1408.0549 | Downlink Cellular Network Analysis with Multi-slope Path Loss Models | Existing cellular network analyses, and even simulations, typically use the standard path loss model where received power decays like $\|x\|^{-\alpha}$ over a distance $\|x\|$. This standard path loss model is quite idealized, and in most scenarios the path loss exponent $\alpha$ is itself a function of $\|x\|$, typically an increasing one. Enforcing a single path loss exponent can lead to orders of magnitude differences in average received and interference powers versus the true values. In this paper we study \emph{multi-slope} path loss models, where different distance ranges are subject to different path loss exponents. We focus on the dual-slope path loss function, which is a piece-wise power law and continuous and accurately approximates many practical scenarios. We derive the distributions of SIR, SNR, and finally SINR before finding the potential throughput scaling, which provides insight on the observed cell-splitting rate gain. The exact mathematical results show that the SIR monotonically decreases with network density, while the converse is true for SNR, and thus the network coverage probability in terms of SINR is maximized at some finite density. With ultra-densification (network density goes to infinity), there exists a \emph{phase transition} in the near-field path loss exponent $\alpha_0$: if $\alpha_0 >1$ unbounded potential throughput can be achieved asymptotically; if $\alpha_0 <1$, ultra-densification leads in the extreme case to zero throughput. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 35,094 |
2312.06445 | Towards a Unified Naming Scheme for Thermo-Active Soft Actuators: A Review of Materials, Working Principles, and Applications | Soft robotics is a rapidly growing field that spans the fields of chemistry, materials science, and engineering. Due to the diverse background of the field, there have been contrasting naming schemes such as 'intelligent', 'smart' and 'adaptive' materials which add vagueness to the broad innovation among literature. Therefore, a clear, functional and descriptive naming scheme is proposed in which a previously vague name -- Soft Material for Soft Actuators -- can remain clear and concise -- Phase-Change Elastomers for Artificial Muscles. By synthesizing the working principle, material, and application into a naming scheme, the searchability of soft robotics can be enhanced and applied to other fields. The field of thermo-active soft actuators spans multiple domains and requires added clarity. Thermo-active actuators have potential for a variety of applications spanning virtual reality haptics to assistive devices. This review offers a comprehensive guide to selecting the type of thermo-active actuator when one has an application in mind. Additionally, it discusses future directions and improvements that are necessary for implementation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 414,521 |
2308.12968 | Scenimefy: Learning to Craft Anime Scene via Semi-Supervised Image-to-Image Translation | Automatic high-quality rendering of anime scenes from complex real-world images is of significant practical value. The challenges of this task lie in the complexity of the scenes, the unique features of anime style, and the lack of high-quality datasets to bridge the domain gap. Despite promising attempts, previous efforts are still incompetent in achieving satisfactory results with consistent semantic preservation, evident stylization, and fine details. In this study, we propose Scenimefy, a novel semi-supervised image-to-image translation framework that addresses these challenges. Our approach guides the learning with structure-consistent pseudo paired data, simplifying the pure unsupervised setting. The pseudo data are derived uniquely from a semantic-constrained StyleGAN leveraging rich model priors like CLIP. We further apply segmentation-guided data selection to obtain high-quality pseudo supervision. A patch-wise contrastive style loss is introduced to improve stylization and fine details. Besides, we contribute a high-resolution anime scene dataset to facilitate future research. Our extensive experiments demonstrate the superiority of our method over state-of-the-art baselines in terms of both perceptual quality and quantitative performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 387,740 |
2112.13939 | SPIDER: Searching Personalized Neural Architecture for Federated Learning | Federated learning (FL) is an efficient learning framework that assists distributed machine learning when data cannot be shared with a centralized server due to privacy and regulatory restrictions. Recent advancements in FL use predefined architecture-based learning for all the clients. However, given that clients' data are invisible to the server and data distributions are non-identical across clients, a predefined architecture discovered in a centralized setting may not be an optimal solution for all the clients in FL. Motivated by this challenge, in this work, we introduce SPIDER, an algorithmic framework that aims to Search Personalized neural architecture for federated learning. SPIDER is designed based on two unique features: (1) alternately optimizing one architecture-homogeneous global model (Supernet) in a generic FL manner and one architecture-heterogeneous local model that is connected to the global model by weight sharing-based regularization (2) achieving architecture-heterogeneous local model by a novel neural architecture search (NAS) method that can select optimal subnet progressively using operation-level perturbation on the accuracy value as the criterion. Experimental results demonstrate that SPIDER outperforms other state-of-the-art personalization methods, and the searched personalized architectures are more inference efficient. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 273,399 |
2405.12750 | Generative AI in Cybersecurity: A Comprehensive Review of LLM Applications and Vulnerabilities | This paper provides a comprehensive review of the future of cybersecurity through Generative AI and Large Language Models (LLMs). We explore LLM applications across various domains, including hardware design security, intrusion detection, software engineering, design verification, cyber threat intelligence, malware detection, and phishing detection. We present an overview of LLM evolution and its current state, focusing on advancements in models such as GPT-4, GPT-3.5, Mixtral-8x7B, BERT, Falcon2, and LLaMA. Our analysis extends to LLM vulnerabilities, such as prompt injection, insecure output handling, data poisoning, DDoS attacks, and adversarial instructions. We delve into mitigation strategies to protect these models, providing a comprehensive look at potential attack scenarios and prevention techniques. Furthermore, we evaluate the performance of 42 LLM models in cybersecurity knowledge and hardware security, highlighting their strengths and weaknesses. We thoroughly evaluate cybersecurity datasets for LLM training and testing, covering the lifecycle from data creation to usage and identifying gaps for future research. In addition, we review new strategies for leveraging LLMs, including techniques like Half-Quadratic Quantization (HQQ), Reinforcement Learning with Human Feedback (RLHF), Direct Preference Optimization (DPO), Quantized Low-Rank Adapters (QLoRA), and Retrieval-Augmented Generation (RAG). These insights aim to enhance real-time cybersecurity defenses and improve the sophistication of LLM applications in threat detection and response. Our paper provides a foundational understanding and strategic direction for integrating LLMs into future cybersecurity frameworks, emphasizing innovation and robust model deployment to safeguard against evolving cyber threats. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 455,639 |
2301.02499 | Evaluating counterfactual explanations using Pearl's counterfactual method | Counterfactual explanations (CEs) are methods for generating an alternative scenario that produces a different desirable outcome. For example, if a student is predicted to fail a course, then counterfactual explanations can provide the student with alternate ways so that they would be predicted to pass. The applications are many. However, CEs are currently generated from machine learning models that do not necessarily take into account the true causal structure in the data. By doing this, bias can be introduced into the CE quantities. I propose in this study to test the CEs using Judea Pearl's method of computing counterfactuals which has thus far, surprisingly, not been seen in the counterfactual explanation (CE) literature. I furthermore evaluate these CEs on three different causal structures to show how the true underlying causal structure affects the CEs that are generated. This study presented a method of evaluating CEs using Pearl's method and it showed, (although using a limited sample size), that thirty percent of the CEs conflicted with those computed by Pearl's method. This shows that we cannot simply trust CEs and it is vital for us to know the true causal structure before we blindly compute counterfactuals using the original machine learning model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 339,528 |
2201.06164 | Synthesis and Reconstruction of Fingerprints using Generative Adversarial Networks | Deep learning-based models have been shown to improve the accuracy of fingerprint recognition. While these algorithms show exceptional performance, they require large-scale fingerprint datasets for training and evaluation. In this work, we propose a novel fingerprint synthesis and reconstruction framework based on the StyleGan2 architecture, to address the privacy issues related to the acquisition of such large-scale datasets. We also derive a computational approach to modify the attributes of the generated fingerprint while preserving their identity. This allows synthesizing multiple different fingerprint images per finger. In particular, we introduce the SynFing synthetic fingerprints dataset consisting of 100K image pairs, each pair corresponding to the same identity. The proposed framework was experimentally shown to outperform contemporary state-of-the-art approaches for both fingerprint synthesis and reconstruction. It significantly improved the realism of the generated fingerprints, both visually and in terms of their ability to spoof fingerprint-based verification systems. The code and fingerprints dataset are publicly available: https://github.com/rafaelbou/fingerprint_generator. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 275,631 |
2412.09816 | Distributed Inverse Dynamics Control for Quadruped Robots using Geometric Optimization | This paper presents a distributed inverse dynamics controller (DIDC) for quadruped robots that addresses the limitations of existing reactive controllers: simplified dynamical models, the inability to handle exact friction cone constraints, and the high computational requirements of whole-body controllers. Current methods either ignore friction constraints entirely or use linear approximations, leading to potential slip and instability, while comprehensive whole-body controllers demand significant computational resources. Our approach uses full rigid-body dynamics and enforces exact friction cone constraints through a novel geometric optimization-based solver. DIDC combines the required generalized forces corresponding to the actuated and unactuated spaces by projecting them onto the actuated space while satisfying the physical constraints and maintaining orthogonality between the base and joint tracking objectives. Experimental validation shows that our approach reduces foot slippage, improves orientation tracking, and converges at least two times faster than existing reactive controllers with generic QP-based implementations. The controller enables stable omnidirectional trotting at various speeds and consumes less power than comparable methods while running efficiently on embedded processors. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 516,655 |
1710.00969 | Event Identification as a Decision Process with Non-linear Representation of Text | We propose the scale-free Identifier Network (sfIN), a novel model for event identification in documents. In general, sfIN first encodes a document into multi-scale memory stacks, then extracts special events via conducting multi-scale actions, which can be considered as a special type of sequence labelling. The design of large-scale actions makes processing a long document more efficient. The whole model is trained with both supervised learning and reinforcement learning. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 81,947 |
2412.09817 | Enhancing Multimodal Large Language Models Complex Reason via Similarity Computation | Multimodal large language models have experienced rapid growth, and numerous different models have emerged. The interpretability of LVLMs remains an under-explored area. Especially when faced with more complex tasks such as chain-of-thought reasoning, its internal mechanisms still resemble a black box that is difficult to decipher. By studying the interaction and information flow between images and text, we noticed that in models such as LLaVA1.5, image tokens that are semantically related to text are more likely to have information flow convergence in the LLM decoding layer, and these image tokens receive higher attention scores. However, those image tokens that are less relevant to the text do not have information flow convergence, and they only get very small attention scores. To efficiently utilize the image information, we propose a new image token reduction method, Simignore, which aims to improve the complex reasoning ability of LVLMs by computing the similarity between image and text embeddings and ignoring image tokens that are irrelevant and unimportant to the text. Through extensive experiments, we demonstrate the effectiveness of our method for complex reasoning tasks. The paper's source code can be accessed from \url{https://github.com/FanshuoZeng/Simignore}. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 516,656 |
2309.12102 | SemEval-2022 Task 7: Identifying Plausible Clarifications of Implicit
and Underspecified Phrases in Instructional Texts | We describe SemEval-2022 Task 7, a shared task on rating the plausibility of clarifications in instructional texts. The dataset for this task consists of manually clarified how-to guides for which we generated alternative clarifications and collected human plausibility judgements. The task of participating systems was to automatically determine the plausibility of a clarification in the respective context. In total, 21 participants took part in this task, with the best system achieving an accuracy of 68.9%. This report summarizes the results and findings from 8 teams and their system descriptions. Finally, we show in an additional evaluation that predictions by the top participating team make it possible to identify contexts with multiple plausible clarifications with an accuracy of 75.2%. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 393,661 |
2111.04356 | A comprehensive assessment of accuracy of adaptive integration of cut
cells for laminar fluid-structure interaction problems | Finite element methods based on cut-cells are becoming increasingly popular because of their advantages over formulations based on body-fitted meshes for problems with moving interfaces. In such methods, the cells (or elements) which are cut by the interface between two different domains need to be integrated using special techniques in order to obtain optimal convergence rates and accurate fluxes across the interface. The adaptive integration technique in which the cells are recursively subdivided is one of the popular techniques for the numerical integration of cut-cells due to its advantages over tessellation, particularly for problems involving complex geometries in three dimensions. Although adaptive integration does not impose any limitations on the representation of the geometry of immersed solids as it requires only point location algorithms, it becomes computationally expensive for recovering optimal convergence rates. This paper presents a comprehensive assessment of the adaptive integration of cut-cells for applications in computational fluid dynamics and fluid-structure interaction. We assess the effect of the accuracy of integration of cut-cells on convergence rates in velocity and pressure fields, and then on forces and displacements for fluid-structure interaction problems by studying several examples in two and three dimensions. By taking the computational cost and the accuracy of forces and displacements into account, we demonstrate that numerical results of acceptable accuracy for FSI problems involving laminar flows can be obtained with only a few levels of refinement. In particular, we show that three levels of adaptive refinement are sufficient for obtaining force and displacement values of acceptable accuracy for laminar fluid-structure interaction problems.
| false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 265,467 |
2403.05290 | Foundational propositions of hesitant fuzzy soft $\beta$-covering
approximation spaces | Soft set theory serves as a mathematical framework for handling uncertain information, and hesitant fuzzy sets find extensive application in scenarios involving uncertainty and hesitation. Hesitant fuzzy sets exhibit diverse membership degrees, giving rise to various forms of inclusion relationships among them. This article introduces the notions of hesitant fuzzy soft $\beta$-coverings and hesitant fuzzy soft $\beta$-neighborhoods, which are formulated based on distinct forms of inclusion relationships among hesitant fuzzy sets. Subsequently, several associated properties are investigated. Additionally, specific variations of hesitant fuzzy soft $\beta$-coverings are introduced by incorporating hesitant fuzzy rough sets, followed by an exploration of properties pertaining to hesitant fuzzy soft $\beta$-covering approximation spaces. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 435,946 |
2305.06436 | Multi-Robot Coordination and Layout Design for Automated Warehousing | With the rapid progress in Multi-Agent Path Finding (MAPF), researchers have studied how MAPF algorithms can be deployed to coordinate hundreds of robots in large automated warehouses. While most works try to improve the throughput of such warehouses by developing better MAPF algorithms, we focus on improving the throughput by optimizing the warehouse layout. We show that, even with state-of-the-art MAPF algorithms, commonly used human-designed layouts can lead to congestion for warehouses with large numbers of robots and thus have limited scalability. We extend existing automatic scenario generation methods to optimize warehouse layouts. Results show that our optimized warehouse layouts (1) reduce traffic congestion and thus improve throughput, (2) improve the scalability of the automated warehouses by doubling the number of robots in some cases, and (3) are capable of generating layouts with user-specified diversity measures. We include the source code at: https://github.com/lunjohnzhang/warehouse_env_gen_public | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | false | false | 363,534 |
2212.01992 | Fast and accurate factorized neural transducer for text adaption of
end-to-end speech recognition models | The neural transducer is now the most popular end-to-end model for speech recognition, due to its naturally streaming ability. However, it is challenging to adapt it with text-only data. The factorized neural transducer (FNT) model was proposed to mitigate this problem. The improved adaptation ability of FNT on text-only adaptation data came at the cost of lowered accuracy compared to the standard neural transducer model. We propose several methods to improve the performance of the FNT model. They are: adding a CTC criterion during training, adding a KL divergence loss during adaptation, using a pre-trained language model to seed the vocabulary predictor, and an efficient adaptation approach by interpolating the vocabulary predictor with the n-gram language model. A combination of these approaches results in a relative word-error-rate reduction of 9.48\% from the standard FNT model. Furthermore, n-gram interpolation with the vocabulary predictor greatly improves adaptation speed while maintaining satisfactory adaptation performance. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 334,639 |
2202.03875 | MixCycle: Unsupervised Speech Separation via Cyclic Mixture Permutation
Invariant Training | We introduce two unsupervised source separation methods, which involve self-supervised training from single-channel two-source speech mixtures. Our first method, mixture permutation invariant training (MixPIT), enables learning a neural network model which separates the underlying sources via a challenging proxy task without supervision from the reference sources. Our second method, cyclic mixture permutation invariant training (MixCycle), uses MixPIT as a building block in a cyclic fashion for continuous learning. MixCycle gradually converts the problem from separating mixtures of mixtures into separating single mixtures. We compare our methods to common supervised and unsupervised baselines: permutation invariant training with dynamic mixing (PIT-DM) and mixture invariant training (MixIT). We show that MixCycle outperforms MixIT and reaches a performance level very close to the supervised baseline (PIT-DM) while circumventing the over-separation issue of MixIT. Also, we propose a self-evaluation technique inspired by MixCycle that estimates model performance without utilizing any reference sources. We show that it yields results consistent with an evaluation on reference sources (LibriMix) and also with an informal listening test conducted on a real-life mixtures dataset (REAL-M). | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,374 |
2005.05672 | Learning and Evaluating Emotion Lexicons for 91 Languages | Emotion lexicons describe the affective meaning of words and thus constitute a centerpiece for advanced sentiment and emotion analysis. Yet, manually curated lexicons are only available for a handful of languages, leaving most languages of the world without such a precious resource for downstream applications. Even worse, their coverage is often limited both in terms of the lexical units they contain and the emotional variables they feature. In order to break this bottleneck, we here introduce a methodology for creating almost arbitrarily large emotion lexicons for any target language. Our approach requires nothing but a source language emotion lexicon, a bilingual word translation model, and a target language embedding model. Fulfilling these requirements for 91 languages, we are able to generate representationally rich high-coverage lexicons comprising eight emotional variables with more than 100k lexical entries each. We evaluated the automatically generated lexicons against human judgment from 26 datasets, spanning 12 typologically diverse languages, and found that our approach produces results in line with state-of-the-art monolingual approaches to lexicon creation and even surpasses human reliability for some languages and variables. Code and data are available at https://github.com/JULIELab/MEmoLon archived under DOI https://doi.org/10.5281/zenodo.3779901. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 176,796 |
2103.06716 | Learning Word-Level Confidence For Subword End-to-End ASR | We study the problem of word-level confidence estimation in subword-based end-to-end (E2E) models for automatic speech recognition (ASR). Although prior works have proposed training auxiliary confidence models for ASR systems, they do not extend naturally to systems that operate on word-pieces (WP) as their vocabulary. In particular, ground truth WP correctness labels are needed for training confidence models, but the non-unique tokenization from word to WP causes inaccurate labels to be generated. This paper proposes and studies two confidence models of increasing complexity to solve this problem. The final model uses self-attention to directly learn word-level confidence without needing subword tokenization, and exploits full context features from multiple hypotheses to improve confidence accuracy. Experiments on Voice Search and long-tail test sets show standard metrics (e.g., NCE, AUC, RMSE) improving substantially. The proposed confidence module also enables a model selection approach to combine an on-device E2E model with a hybrid model on the server to address the rare word recognition problem for the E2E model. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 224,388 |
2111.12521 | Probabilistic Behavioral Distance and Tuning - Reducing and aggregating
complex systems | Given a complex system with a given interface to the rest of the world, what does it mean for the system to behave close to a simpler specification describing the behavior at the interface? We give several definitions for useful notions of distances between a complex system and a specification by combining a behavioral and probabilistic perspective. These distances can be used to tune a complex system to a specification. We show that our approach can successfully tune non-linear networked systems to behave like much smaller networks, allowing us to aggregate large sub-networks into one or two effective nodes. Finally, we discuss similarities and differences between our approach and $H_\infty$ model reduction. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 267,993 |
2010.03790 | Text-based RL Agents with Commonsense Knowledge: New Challenges,
Environments and Baselines | Text-based games have emerged as an important test-bed for Reinforcement Learning (RL) research, requiring RL agents to combine grounded language understanding with sequential decision making. In this paper, we examine the problem of infusing RL agents with commonsense knowledge. Such knowledge would allow agents to efficiently act in the world by pruning out implausible actions, and to perform look-ahead planning to determine how current actions might affect future world states. We design a new text-based gaming environment called TextWorld Commonsense (TWC) for training and evaluating RL agents with a specific kind of commonsense knowledge about objects, their attributes, and affordances. We also introduce several baseline RL agents which track the sequential context and dynamically retrieve the relevant commonsense knowledge from ConceptNet. We show that agents which incorporate commonsense knowledge in TWC perform better, while acting more efficiently. We conduct user-studies to estimate human performance on TWC and show that there is ample room for future improvement. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 199,530 |
2410.23933 | Language Models can Self-Lengthen to Generate Long Texts | Recent advancements in Large Language Models (LLMs) have significantly enhanced their ability to process long contexts, yet a notable gap remains in generating long, aligned outputs. This limitation stems from a training gap where pre-training lacks effective instructions for long-text generation, and post-training data primarily consists of short query-response pairs. Current approaches, such as instruction backtranslation and behavior imitation, face challenges including data quality, copyright issues, and constraints on proprietary model usage. In this paper, we introduce an innovative iterative training framework called Self-Lengthen that leverages only the intrinsic knowledge and skills of LLMs without the need for auxiliary data or proprietary models. The framework consists of two roles: the Generator and the Extender. The Generator produces the initial response, which is then split and expanded by the Extender. This process results in a new, longer response, which is used to train both the Generator and the Extender iteratively. Through this process, the models are progressively trained to handle increasingly longer responses. Experiments on benchmarks and human evaluations show that Self-Lengthen outperforms existing methods in long-text generation, when applied to top open-source LLMs such as Qwen2 and LLaMA3. Our code is publicly available at https://github.com/QwenLM/Self-Lengthen. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 504,254 |
1807.01622 | Neural Processes | A neural network (NN) is a parameterised function that can be tuned via gradient descent to approximate a labelled collection of data with high precision. A Gaussian process (GP), on the other hand, is a probabilistic model that defines a distribution over possible functions, and is updated in light of data via the rules of probabilistic inference. GPs are probabilistic, data-efficient and flexible, however they are also computationally intensive and thus limited in their applicability. We introduce a class of neural latent variable models which we call Neural Processes (NPs), combining the best of both worlds. Like GPs, NPs define distributions over functions, are capable of rapid adaptation to new observations, and can estimate the uncertainty in their predictions. Like NNs, NPs are computationally efficient during training and evaluation but also learn to adapt their priors to data. We demonstrate the performance of NPs on a range of learning tasks, including regression and optimisation, and compare and contrast with related models in the literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 102,099 |
2202.05049 | Fair When Trained, Unfair When Deployed: Observable Fairness Measures
are Unstable in Performative Prediction Settings | Many popular algorithmic fairness measures depend on the joint distribution of predictions, outcomes, and a sensitive feature like race or gender. These measures are sensitive to distribution shift: a predictor which is trained to satisfy one of these fairness definitions may become unfair if the distribution changes. In performative prediction settings, however, predictors are precisely intended to induce distribution shift. For example, in many applications in criminal justice, healthcare, and consumer finance, the purpose of building a predictor is to reduce the rate of adverse outcomes such as recidivism, hospitalization, or default on a loan. We formalize the effect of such predictors as a type of concept shift (a particular variety of distribution shift) and show both theoretically and via simulated examples how this causes predictors which are fair when they are trained to become unfair when they are deployed. We further show how many of these issues can be avoided by using fairness definitions that depend on counterfactual rather than observable outcomes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,747 |
2201.01895 | Event-based EV Charging Scheduling in A Microgrid of Buildings | With the popularization of the electric vehicles (EVs), EV charging demand is becoming an important load in the building. Considering the mobility of EVs from building to building and their uncertain charging demand, it is of great practical interest to control the EV charging process in a microgrid of buildings to optimize the total operation cost while ensuring the transmission safety between the microgrid and the main grid. We consider this important problem in this paper and make the following contributions. First, we formulate this problem as a Markov decision process to capture the uncertain supply and EV charging demand in the microgrid of buildings. Besides reducing the total operation cost of buildings, the model also considers the power exchange limitation to ensure transmission safety. Second, this model is reformulated under event-based optimization framework to alleviate the impact of large state and action space. By appropriately defining the event and event-based action, the EV charging process can be optimized by searching a randomized parametric event-based control policy in the microgrid controller and implementing a selecting-to-charging rule in each building controller. Third, a constrained gradient-based policy optimzation method with adjusting mechanism is proposed to iteratively find the optimal event-based control policy for EV charging demand in each building. Numerical experiments considering a microgrid of three buildings are conducted to analyze the structure and the performance of the event-based control policy for EV charging. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 274,391 |
2002.08267 | Multilogue-Net: A Context Aware RNN for Multi-modal Emotion Detection
and Sentiment Analysis in Conversation | Sentiment Analysis and Emotion Detection in conversation is key in several real-world applications, with an increase in modalities available aiding a better understanding of the underlying emotions. Multi-modal Emotion Detection and Sentiment Analysis can be particularly useful, as applications will be able to use specific subsets of available modalities, as per the available data. Current systems dealing with multi-modal functionality fail to leverage and capture the context of the conversation through all modalities, the dependency between the listener(s) and speaker emotional states, and the relevance and relationship between the available modalities. In this paper, we propose an end-to-end RNN architecture that attempts to take into account all the mentioned drawbacks. Our proposed model, at the time of writing, outperforms the state of the art on a benchmark dataset on a variety of accuracy and regression metrics. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 164,702 |
1805.07431 | Can machine learning identify interesting mathematics? An exploration
using empirically observed laws | We explore the possibility of using machine learning to identify interesting mathematical structures by using certain quantities that serve as fingerprints. In particular, we extract features from integer sequences using two empirical laws, Benford's law and Taylor's law, and experiment with various classifiers to identify whether a sequence is, for example, nice, important, multiplicative, easy to compute, or related to primes or palindromes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 97,812 |
1803.06916 | Simulating the future urban growth in Xiongan New Area: an upcoming big
city in China | China announced the creation of the Xiongan New Area in Hebei on April 1, 2017. Thus, a new megacity about 110 km southwest of Beijing will emerge. Xiongan New Area is of great practical and historical significance for transferring Beijing's non-capital functions. Simulating the urban dynamics in Xiongan New Area can help planners decide where to build the new urban area and further manage future urban growth. However, only a little research has focused on future urban development in Xiongan New Area. In addition, previous models are unable to simulate the urban dynamics in Xiongan New Area, because there are no original high-density urban areas for these models to learn transition rules from. In this study, we propose a C-FLUS model to solve such problems. This framework is implemented by coupling a modified cellular automata (CA) model. An elaborately designed random planted-seeds mechanism based on local maximums is introduced into the CA model to better simulate the emergence of new urban areas. Through an analysis of the current driving forces, C-FLUS can detect the potential start zone and simulate urban development under different scenarios in Xiongan New Area. Our study shows that new urban growth is most likely to occur in the northwest of Xiongxian, and it will rapidly extend to Rongcheng and Anxin until it almost covers the northern part of Xiongan New Area. Moreover, the method can help planners to evaluate the impact of urban expansion in Xiongan New Area. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 92,930 |
2309.00399 | Fine-grained Recognition with Learnable Semantic Data Augmentation | Fine-grained image recognition is a longstanding computer vision challenge that focuses on differentiating objects belonging to multiple subordinate categories within the same meta-category. Since images belonging to the same meta-category usually share similar visual appearances, mining discriminative visual cues is the key to distinguishing fine-grained categories. Although commonly used image-level data augmentation techniques have achieved great success in generic image classification problems, they are rarely applied in fine-grained scenarios, because their random editing-region behavior is prone to destroy the discriminative visual cues residing in the subtle regions. In this paper, we propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem. Specifically, we produce diversified augmented samples by translating image features along semantically meaningful directions. The semantic directions are estimated with a covariance prediction network, which predicts a sample-wise covariance matrix to adapt to the large intra-class variation inherent in fine-grained images. Furthermore, the covariance prediction network is jointly optimized with the classification network in a meta-learning manner to alleviate the degenerate solution problem. Experiments on four competitive fine-grained recognition benchmarks (CUB-200-2011, Stanford Cars, FGVC Aircrafts, NABirds) demonstrate that our method significantly improves the generalization performance on several popular classification networks (e.g., ResNets, DenseNets, EfficientNets, RegNets and ViT). Combined with a recently proposed method, our semantic data augmentation approach achieves state-of-the-art performance on the CUB-200-2011 dataset. The source code will be released. 
| false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 389,301 |
2411.00151 | NIMBA: Towards Robust and Principled Processing of Point Clouds With
SSMs | Transformers have become dominant in large-scale deep learning tasks across various domains, including text, 2D and 3D vision. However, the quadratic complexity of their attention mechanism limits their efficiency as the sequence length increases, particularly in high-resolution 3D data such as point clouds. Recently, state space models (SSMs) like Mamba have emerged as promising alternatives, offering linear complexity, scalability, and high performance in long-sequence tasks. The key challenge in the application of SSMs in this domain lies in reconciling the non-sequential structure of point clouds with the inherently directional (or bi-directional) order-dependent processing of recurrent models like Mamba. To achieve this, previous research proposed reorganizing point clouds along multiple directions or predetermined paths in 3D space, concatenating the results to produce a single 1D sequence capturing different views. In our work, we introduce a method to convert point clouds into 1D sequences that maintain 3D spatial structure with no need for data replication, allowing Mamba sequential processing to be applied effectively in an almost permutation-invariant manner. In contrast to other works, we found that our method does not require positional embeddings and allows for shorter sequence lengths while still achieving state-of-the-art results in ModelNet40 and ScanObjectNN datasets and surpassing Transformer-based models in both accuracy and efficiency. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 504,455 |
1602.04485 | Benefits of depth in neural networks | For any positive integer $k$, there exist neural networks with $\Theta(k^3)$ layers, $\Theta(1)$ nodes per layer, and $\Theta(1)$ distinct parameters which cannot be approximated by networks with $\mathcal{O}(k)$ layers unless they are exponentially large: they must possess $\Omega(2^k)$ nodes. This result is proved here for a class of nodes termed "semi-algebraic gates" which includes the common choices of ReLU, maximum, indicator, and piecewise polynomial functions, therefore establishing benefits of depth against not just standard networks with ReLU gates, but also convolutional networks with ReLU and maximization gates, sum-product networks, and boosted decision trees (in this last case with a stronger separation: $\Omega(2^{k^3})$ total tree nodes are required). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 52,142 |
2210.09100 | Estimating the Cost of Executing Link Traversal based SPARQL Queries | An increasing number of organisations in almost all fields have started adopting semantic web technologies for publishing their data as open, linked and interoperable (RDF) datasets, queryable through the SPARQL language and protocol. Link traversal has emerged as a SPARQL query processing method that exploits the Linked Data principles and the dynamic nature of the Web to dynamically discover data relevant for answering a query by resolving online resources (URIs) during query evaluation. However, the execution time of link traversal queries can become prohibitively high for certain query types due to the high number of resources that need to be accessed during query execution. In this paper we propose and evaluate baseline methods for estimating the evaluation cost of link traversal queries. Such methods can be very useful for deciding on-the-fly the query execution strategy to follow for a given query, thereby reducing the load of a SPARQL endpoint and increasing the overall reliability of the query service. To evaluate the performance of the proposed methods, we have created (and make publicly available) a ground truth dataset consisting of 2,425 queries. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 324,422 |
2404.04397 | Generating Synthetic Ground Truth Distributions for Multi-step
Trajectory Prediction using Probabilistic Composite B\'ezier Curves | An appropriate data basis grants one of the most important aspects for training and evaluating probabilistic trajectory prediction models based on neural networks. In this regard, a common shortcoming of current benchmark datasets is their limitation to sets of sample trajectories and a lack of actual ground truth distributions, which prevents the use of more expressive error metrics, such as the Wasserstein distance for model evaluation. Towards this end, this paper proposes a novel approach to synthetic dataset generation based on composite probabilistic B\'ezier curves, which is capable of generating ground truth data in terms of probability distributions over full trajectories. This allows the calculation of arbitrary posterior distributions. The paper showcases an exemplary trajectory prediction model evaluation using generated ground truth distribution data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 444,629 |
cs/0603078 | Consensus Propagation | We propose consensus propagation, an asynchronous distributed protocol for averaging numbers across a network. We establish convergence, characterize the convergence rate for regular graphs, and demonstrate that the protocol exhibits better scaling properties than pairwise averaging, an alternative that has received much recent attention. Consensus propagation can be viewed as a special case of belief propagation, and our results contribute to the belief propagation literature. In particular, beyond singly-connected graphs, there are very few classes of relevant problems for which belief propagation is known to converge. | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | false | true | 539,340 |
2004.14607 | Can Your Context-Aware MT System Pass the DiP Benchmark Tests? :
Evaluation Benchmarks for Discourse Phenomena in Machine Translation | Despite increasing instances of machine translation (MT) systems including contextual information, the evidence for translation quality improvement is sparse, especially for discourse phenomena. Popular metrics like BLEU are not expressive or sensitive enough to capture quality improvements or drops that are minor in size but significant in perception. We introduce the first of their kind MT benchmark datasets that aim to track and hail improvements across four main discourse phenomena: anaphora, lexical consistency, coherence and readability, and discourse connective translation. We also introduce evaluation methods for these tasks, and evaluate several baseline MT systems on the curated datasets. Surprisingly, we find that existing context-aware models do not improve discourse-related translations consistently across languages and phenomena. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 174,953 |
2403.06202 | Pursuit Winning Strategies for Reach-Avoid Games with Polygonal
Obstacles | This paper studies a multiplayer reach-avoid differential game in the presence of general polygonal obstacles that block the players' motions. The pursuers cooperate to protect a convex region from the evaders who try to reach the region. We propose a multiplayer onsite and close-to-goal (MOCG) pursuit strategy that can certify and achieve an increasing lower bound on the number of guaranteed defeated evaders. This pursuit strategy fuses the subgame outcomes for multiple pursuers against one evader with hierarchical optimal task allocation in a receding-horizon manner. To determine the qualitative subgame outcomes, i.e., who wins the game, we construct three pursuit winning regions and strategies under which the pursuers guarantee to win against the evader, regardless of the unknown evader strategy. First, we utilize the expanded Apollonius circles and propose the onsite pursuit winning that achieves the capture in finite time. Second, we introduce convex goal-covering polygons (GCPs) and propose the close-to-goal pursuit winning for the pursuers whose visibility region contains the whole protected region, and the goal-visible property will be preserved afterwards. Third, we employ Euclidean shortest paths (ESPs) and construct a pursuit winning region and strategy for the non-goal-visible pursuers, where the pursuers are first steered to positions with goal visibility along ESPs. In each horizon, the hierarchical optimal task allocation maximizes the number of defeated evaders and consists of four sequential matchings: capture, enhanced, non-dominated and closest matchings. Numerical examples are presented to illustrate the results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 436,346 |
1905.06455 | On Norm-Agnostic Robustness of Adversarial Training | Adversarial examples are carefully perturbed inputs for fooling machine learning models. A well-acknowledged defense method against such examples is adversarial training, where adversarial examples are injected into training data to increase robustness. In this paper, we propose a new attack to unveil an undesired property of the state-of-the-art adversarial training, namely that it fails to obtain robustness against perturbations in $\ell_2$ and $\ell_\infty$ norms simultaneously. We discuss a possible solution to this issue and its limitations as well. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 130,992 |
2007.01941 | Multigrid for Bundle Adjustment | Bundle adjustment is an important global optimization step in many structure from motion pipelines. Performance is dependent on the speed of the linear solver used to compute steps towards the optimum. For large problems, the current state of the art scales superlinearly with the number of cameras in the problem. We investigate the conditioning of global bundle adjustment problems as the number of images increases in different regimes and fundamental consequences in terms of superlinear scaling of the current state of the art methods. We present an unsmoothed aggregation multigrid preconditioner that accurately represents the global modes that underlie poor scaling of existing methods and demonstrate solves of up to 13 times faster than the state of the art on large, challenging problem sets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 185,576 |
2408.07703 | Knowledge Distillation with Refined Logits | Recent research on knowledge distillation has increasingly focused on logit distillation because of its simplicity, effectiveness, and versatility in model compression. In this paper, we introduce Refined Logit Distillation (RLD) to address the limitations of current logit distillation methods. Our approach is motivated by the observation that even high-performing teacher models can make incorrect predictions, creating a conflict between the standard distillation loss and the cross-entropy loss. This conflict can undermine the consistency of the student model's learning objectives. Previous attempts to use labels to empirically correct teacher predictions may undermine the class correlation. In contrast, our RLD employs labeling information to dynamically refine teacher logits. In this way, our method can effectively eliminate misleading information from the teacher while preserving crucial class correlations, thus enhancing the value and efficiency of distilled knowledge. Experimental results on CIFAR-100 and ImageNet demonstrate its superiority over existing methods. The code is provided at https://github.com/zju-SWJ/RLD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 480,693 |
1701.02344 | Database Engines: Evolution of Greenness | Context: Information Technology consumes up to 10% of the world's electricity generation, contributing to CO2 emissions and high energy costs. Data centers, particularly databases, use up to 23% of this energy. Therefore, building an energy-efficient (green) database engine could reduce energy consumption and CO2 emissions. Goal: To understand the factors driving databases' energy consumption and execution time throughout their evolution. Method: We conducted an empirical case study of energy consumption by two MySQL database engines, InnoDB and MyISAM, across 40 releases. We examined the relationships of four software metrics to energy consumption and execution time to determine which metrics reflect the greenness and performance of a database. Results: Our analysis shows that database engines' energy consumption and execution time increase as databases evolve. Moreover, the Lines of Code metric is correlated moderately to strongly with energy consumption and execution time in 88% of cases. Conclusions: Our findings provide insights to both practitioners and researchers. Database administrators may use them to select a fast, green release of the MySQL database engine. MySQL database-engine developers may use the software metric to assess products' greenness and performance. Researchers may use our findings to further develop new hypotheses or build models to predict greenness and performance of databases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 66,535 |
1907.09997 | An Improved Convolutional Neural Network System for Automatically
Detecting Rebar in GPR Data | As a mature technology, Ground Penetrating Radar (GPR) is now widely employed in detecting rebar and other embedded elements in concrete structures. Manually recognizing rebar from GPR data is a time-consuming and error-prone procedure. Although there are several approaches to automatically detect rebar, it is still challenging to find a high-resolution and efficient method for different rebar arrangements, especially for closely spaced rebar meshes. As an improved Convolutional Neural Network (CNN), AlexNet shows superiority over traditional methods in the image recognition domain. Thus, this paper introduces AlexNet as an alternative solution for automatically detecting rebar within GPR data. In order to show the efficiency of the proposed approach, a traditional CNN is built as the comparative option. Moreover, this research evaluates the impacts of different rebar arrangements and different window sizes on the accuracy of results. The results revealed that: (1) AlexNet outperforms the traditional CNN approach, and its superiority is more notable when the rebar meshes are densely distributed; (2) the detection accuracy significantly varies with the size of the splitting window, and a proper window should contain enough information about rebar; (3) uniformly and sparsely distributed rebar meshes are more recognizable than densely or unevenly distributed items, due to lower chances of signal interferences. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 139,516 |
1610.01052 | A novel and effective scoring scheme for structure classification and
pairwise similarity measurement | Protein tertiary structure defines its functions, classification and binding sites. Similar structural characteristics between two proteins often lead to the similar characteristics thereof. Determining structural similarity accurately in real time is a crucial research issue. In this paper, we present a novel and effective scoring scheme that is dependent on novel features extracted from protein alpha carbon distance matrices. Our scoring scheme is inspired from pattern recognition and computer vision. Our method is significantly better than the current state of the art methods in terms of family match of pairs of protein structures and other statistical measurements. The effectiveness of our method is tested on standard benchmark structures. A web service is available at http://research.buet.ac.bd:8080/Comograd/score.html where you can get the similarity measurement score between two protein structures based on our method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 61,915 |
2110.08419 | Robustness Challenges in Model Distillation and Pruning for Natural
Language Understanding | Recent work has focused on compressing pre-trained language models (PLMs) like BERT where the major focus has been to improve the in-distribution performance for downstream tasks. However, very few of these studies have analyzed the impact of compression on the generalizability and robustness of compressed models for out-of-distribution (OOD) data. Towards this end, we study two popular model compression techniques including knowledge distillation and pruning and show that the compressed models are significantly less robust than their PLM counterparts on OOD test sets although they obtain similar performance on in-distribution development sets for a task. Further analysis indicates that the compressed models overfit on the shortcut samples and generalize poorly on the hard ones. We further leverage this observation to develop a regularization strategy for robust model compression based on sample uncertainty. Experimental results on several natural language understanding tasks demonstrate that our bias mitigation framework improves the OOD generalization of the compressed models, while not sacrificing the in-distribution task performance. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 261,387 |
1606.04695 | Strategic Attentive Writer for Learning Macro-Actions | We present a novel deep recurrent neural network architecture that learns to build implicit plans in an end-to-end manner by purely interacting with an environment in a reinforcement learning setting. The network builds an internal plan, which is continuously updated upon observation of the next input from the environment. It can also partition this internal representation into contiguous subsequences by learning for how long the plan can be committed to - i.e. followed without re-planning. Combining these properties, the proposed model, dubbed STRategic Attentive Writer (STRAW), can learn high-level, temporally abstracted macro-actions of varying lengths that are solely learnt from data without any prior information. These macro-actions enable both structured exploration and economic computation. We experimentally demonstrate that STRAW delivers strong improvements on several ATARI games by employing temporally extended planning strategies (e.g. Ms. Pacman and Frostbite). It is at the same time a general algorithm that can be applied on any sequence data. To that end, we also show that when trained on a text prediction task, STRAW naturally predicts frequent n-grams (instead of macro-actions), demonstrating the generality of the approach. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 57,300 |
2312.08851 | Achelous++: Power-Oriented Water-Surface Panoptic Perception Framework
on Edge Devices based on Vision-Radar Fusion and Pruning of Heterogeneous
Modalities | Urban water-surface robust perception serves as the foundation for intelligent monitoring of aquatic environments and the autonomous navigation and operation of unmanned vessels, especially in the context of waterway safety. It is worth noting that current multi-sensor fusion and multi-task learning models consume substantial power and heavily rely on high-power GPUs for inference. This contributes to increased carbon emissions, a concern that runs counter to the prevailing emphasis on environmental preservation and the pursuit of sustainable, low-carbon urban environments. In light of these concerns, this paper concentrates on low-power, lightweight, multi-task panoptic perception through the fusion of visual and 4D radar data, which is seen as a promising low-cost perception method. We propose a framework named Achelous++ that facilitates the development and comprehensive evaluation of multi-task water-surface panoptic perception models. Achelous++ can simultaneously execute five perception tasks with high speed and low power consumption, including object detection, object semantic segmentation, drivable-area segmentation, waterline segmentation, and radar point cloud semantic segmentation. Furthermore, to meet the demand for developers to customize models for real-time inference on low-performance devices, a novel multi-modal pruning strategy known as Heterogeneous-Aware SynFlow (HA-SynFlow) is proposed. Besides, Achelous++ also supports random pruning at initialization with different layer-wise sparsity, such as Uniform and Erdos-Renyi-Kernel (ERK). Overall, our Achelous++ framework achieves state-of-the-art performance on the WaterScenes benchmark, excelling in both accuracy and power efficiency compared to other single-task and multi-task models. We release and maintain the code at https://github.com/GuanRunwei/Achelous. | false | true | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 415,474 |
1303.3965 | Bit Level Soft Decision Decoding of Triple Parity Reed Solomon Codes
through Automorphism Groups | This paper discusses bit-level soft decoding of triple-parity Reed-Solomon (RS) codes through automorphism permutation. A new method for identifying the automorphism groups of RS binary images is first developed. The new algorithm runs effectively, and can handle more RS codes and capture more automorphism groups than the existing ones. Utilizing the automorphism results, a new bit-level soft-decision decoding algorithm is subsequently developed for general $(n,n-3,4)$ RS codes. Simulation on $(31,28,4)$ RS codes demonstrates an impressive gain of more than 1 dB at the bit error rate of $10^{-5}$ over the existing algorithms. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 22,970 |
cs/0511042 | Dimensions of Neural-symbolic Integration - A Structured Survey | Research on integrated neural-symbolic systems has made significant progress in the recent past. In particular, the understanding of ways to deal with symbolic knowledge within connectionist systems (also called artificial neural networks) has reached a critical mass which enables the community to strive for applicable implementations and use cases. Recent work has covered a great variety of logics used in artificial intelligence and provides a multitude of techniques for dealing with them within the context of artificial neural networks. We present a comprehensive survey of the field of neural-symbolic integration, including a new classification of systems according to their architectures and abilities. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | true | 539,074 |
2203.02846 | Region Proposal Rectification Towards Robust Instance Segmentation of
Biological Images | The top-down instance segmentation framework has shown its superiority in object detection compared to the bottom-up framework. While it is efficient in addressing over-segmentation, top-down instance segmentation suffers from the over-crop problem. However, a complete segmentation mask is crucial for biological image analysis as it delivers important morphological properties such as shapes and volumes. In this paper, we propose a region proposal rectification (RPR) module to address this challenging incomplete segmentation problem. In particular, we offer a progressive ROIAlign module to introduce neighbor information into a series of ROIs gradually. The ROI features are fed into an attentive feed-forward network (FFN) for proposal box regression. With additional neighbor information, the proposed RPR module shows significant improvement in correction of region proposal locations and thereby exhibits favorable instance segmentation performances on three biological image datasets compared to state-of-the-art baseline methods. Experimental results demonstrate that the proposed RPR module is effective in both anchor-based and anchor-free top-down instance segmentation approaches, suggesting the proposed method can be applied to general top-down instance segmentation of biological images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,884 |
1304.2388 | Joint Iterative Power Adjustment and Interference Suppression Algorithms
for Cooperative DS-CDMA Networks | This work presents joint iterative power allocation and interference suppression algorithms for DS-CDMA networks which employ multiple relays and the amplify and forward cooperation strategy. We propose a joint constrained optimization framework that considers the allocation of power levels across the relays subject to individual and global power constraints and the design of linear receivers for interference suppression. We derive constrained minimum mean-squared error (MMSE) expressions for the parameter vectors that determine the optimal power levels across the relays and the parameters of the linear receivers. In order to solve the proposed optimization problems efficiently, we develop recursive least squares (RLS) algorithms for adaptive joint iterative power allocation, and receiver and channel parameter estimation. Simulation results show that the proposed algorithms obtain significant gains in performance and capacity over existing schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 23,694 |
2402.18807 | On the Decision-Making Abilities in Role-Playing using Large Language
Models | Large language models (LLMs) are now increasingly utilized for role-playing tasks, especially in impersonating domain-specific experts, primarily through role-playing prompts. When interacting in real-world scenarios, the decision-making abilities of a role significantly shape its behavioral patterns. In this paper, we concentrate on evaluating the decision-making abilities of LLMs post role-playing, thereby validating the efficacy of role-playing. Our goal is to provide metrics and guidance for enhancing the decision-making abilities of LLMs in role-playing tasks. Specifically, we first use LLMs to generate virtual role descriptions corresponding to the 16 personality types of the Myers-Briggs Type Indicator (abbreviated as MBTI) representing a segmentation of the population. Then we design specific quantitative operations to evaluate the decision-making abilities of LLMs post role-playing from four aspects: adaptability, exploration & exploitation trade-off ability, reasoning ability, and safety. Finally, we analyze the association between the performance of decision-making and the corresponding MBTI types through GPT-4. Extensive experiments demonstrate stable differences in the four aspects of decision-making abilities across distinct roles, signifying a robust correlation between decision-making abilities and the roles emulated by LLMs. These results underscore that LLMs can effectively impersonate varied roles while embodying their genuine sociological characteristics. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 433,568 |
2011.09113 | Effectiveness of Arbitrary Transfer Sets for Data-free Knowledge
Distillation | Knowledge Distillation is an effective method to transfer the learning across deep neural networks. Typically, the dataset originally used for training the Teacher model is chosen as the "Transfer Set" to conduct the knowledge transfer to the Student. However, this original training data may not always be freely available due to privacy or sensitivity concerns. In such scenarios, existing approaches either iteratively compose a synthetic set representative of the original training dataset, one sample at a time or learn a generative model to compose such a transfer set. However, both these approaches involve complex optimization (GAN training or several backpropagation steps to synthesize one sample) and are often computationally expensive. In this paper, as a simple alternative, we investigate the effectiveness of "arbitrary transfer sets" such as random noise, publicly available synthetic, and natural datasets, all of which are completely unrelated to the original training dataset in terms of their visual or semantic contents. Through extensive experiments on multiple benchmark datasets such as MNIST, FMNIST, CIFAR-10 and CIFAR-100, we discover and validate surprising effectiveness of using arbitrary data to conduct knowledge distillation when this dataset is "target-class balanced". We believe that this important observation can potentially lead to designing baselines for the data-free knowledge distillation task. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 207,096 |
2407.11484 | The Oscars of AI Theater: A Survey on Role-Playing with Language Models | This survey explores the burgeoning field of role-playing with language models, focusing on their development from early persona-based models to advanced character-driven simulations facilitated by Large Language Models (LLMs). Initially confined to simple persona consistency due to limited model capabilities, role-playing tasks have now expanded to embrace complex character portrayals involving character consistency, behavioral alignment, and overall attractiveness. We provide a comprehensive taxonomy of the critical components in designing these systems, including data, models and alignment, agent architecture and evaluation. This survey not only outlines the current methodologies and challenges, such as managing dynamic personal profiles and achieving high-level persona consistency but also suggests avenues for future research in improving the depth and realism of role-playing applications. The goal is to guide future research by offering a structured overview of current methodologies and identifying potential areas for improvement. Related resources and papers are available at https://github.com/nuochenpku/Awesome-Role-Play-Papers. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 473,479 |
2103.05131 | Few-Shot Learning of an Interleaved Text Summarization Model by
Pretraining with Synthetic Data | Interleaved texts, where posts belonging to different threads occur in a sequence, commonly occur in online chat posts, making it time-consuming to quickly obtain an overview of the discussions. Existing systems first disentangle the posts by threads and then extract summaries from those threads. A major issue with such systems is error propagation from the disentanglement component. While an end-to-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an end-to-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that by fine-tuning on a real-world meeting dataset (AMI), such a system outperforms a traditional two-step system by 22%. We also compare against transformer models and observe that pretraining both the encoder and decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 223,872 |
1607.01274 | Temporal Topic Analysis with Endogenous and Exogenous Processes | We consider the problem of modeling temporal textual data taking endogenous and exogenous processes into account. Such text documents arise in real world applications, including job advertisements and economic news articles, which are influenced by the fluctuations of the general economy. We propose a hierarchical Bayesian topic model which imposes a "group-correlated" hierarchical structure on the evolution of topics over time incorporating both processes, and show that this model can be estimated from Markov chain Monte Carlo sampling methods. We further demonstrate that this model captures the intrinsic relationships between the topic distribution and the time-dependent factors, and compare its performance with latent Dirichlet allocation (LDA) and two other related models. The model is applied to two collections of documents to illustrate its empirical performance: online job advertisements from DirectEmployers Association and journalists' postings on BusinessInsider.com. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 58,204 |
1612.01392 | An Extended Treatment of Uncertainty Constrained robotic Exploration: An
Integrated Exploration Planner | Efficient robotic exploration of unknown, sensor-limited, global-information-deficient environments poses unique challenges to path planning algorithms. In these difficult environments, no deterministic guarantees on path completion and mission success can be made in general. Integrated Exploration (IE), which strives to combine localization and exploration, must be solved in order to create an autonomous robotic system capable of long-term operation in new and challenging environments. This paper formulates a probabilistic framework which allows the creation of exploration algorithms providing probabilistic guarantees of success. A novel connection is made between the Hamiltonian Path Problem and exploration. The Guaranteed Probabilistic Information Explorer (G-PIE) is developed for the IE problem, providing a probabilistic guarantee on path completion, and asymptotic optimality of exploration. A receding horizon formulation, dubbed RH-PIE, is presented which addresses the exponential complexity present in G-PIE. Finally, the RH-PIE planner is verified via autonomous, hardware-in-the-loop experiments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 65,074 |
2101.09401 | Adaptively Sparse Regularization for Blind Image Restoration | Image quality is the basis of image communication and understanding tasks. Due to the blur and noise effects caused by imaging, transmission and other processes, image quality is degraded. Blind image restoration is widely used to improve image quality, where the main goal is to faithfully estimate the blur kernel and the latent sharp image. In this study, based on experimental observation and research, an adaptively sparse regularized minimization method is proposed. High-order gradients are combined with low-order ones to form a hybrid regularization term, and an adaptive operator derived from the image entropy is introduced to maintain good convergence. Extensive experiments were conducted on different blur kernels and images. Compared with existing state-of-the-art blind deblurring methods, our method demonstrates superior recovery accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 216,585 |
2009.09929 | CVPR 2020 Continual Learning in Computer Vision Competition: Approaches,
Results, Current Challenges and Future Directions | In the last few years, we have witnessed a renewed and fast-growing interest in continual learning with deep neural networks, with the shared objective of making current AI systems more adaptive, efficient and autonomous. However, despite the significant and undoubted progress of the field in addressing the issue of catastrophic forgetting, benchmarking different continual learning approaches is a difficult task by itself. In fact, given the proliferation of different settings, training and evaluation protocols, metrics and nomenclature, it is often tricky to properly characterize a continual learning algorithm, relate it to other solutions and gauge its real-world applicability. The first Continual Learning in Computer Vision challenge held at CVPR in 2020 has been one of the first opportunities to evaluate different continual learning algorithms on common hardware with a large set of shared evaluation metrics and 3 different settings based on the realistic CORe50 video benchmark. In this paper, we report the main results of the competition, which counted more than 79 registered teams, 11 finalists and $2300 in prizes. We also summarize the winning approaches, current challenges and future research directions. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 196,739 |
2103.17182 | Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to
Improve Generalization | It is well-known that stochastic gradient noise (SGN) acts as implicit regularization for deep learning and is essentially important for both optimization and generalization of deep networks. Some works attempted to artificially simulate SGN by injecting random noise to improve deep learning. However, it turned out that the injected simple random noise cannot work as well as SGN, which is anisotropic and parameter-dependent. For simulating SGN at low computational costs and without changing the learning rate or batch size, we propose the Positive-Negative Momentum (PNM) approach that is a powerful alternative to conventional Momentum in classic optimizers. The introduced PNM method maintains two approximate independent momentum terms. Then, we can control the magnitude of SGN explicitly by adjusting the momentum difference. We theoretically prove the convergence guarantee and the generalization advantage of PNM over Stochastic Gradient Descent (SGD). By incorporating PNM into the two conventional optimizers, SGD with Momentum and Adam, our extensive experiments empirically verified the significant advantage of the PNM-based variants over the corresponding conventional Momentum-based optimizers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 227,810 |
2211.13844 | Ladder Siamese Network: a Method and Insights for Multi-level
Self-Supervised Learning | Siamese-network-based self-supervised learning (SSL) suffers from slow convergence and instability in training. To alleviate this, we propose a framework to exploit intermediate self-supervision in each stage of deep nets, called the Ladder Siamese Network. Our self-supervised losses encourage the intermediate layers to be consistent with different data augmentations applied to single samples, which facilitates training progress and enhances the discriminative ability of the intermediate layers themselves. While some existing work has already utilized multi-level self-supervision in SSL, ours is different in that 1) we reveal its usefulness with non-contrastive Siamese frameworks from both theoretical and empirical viewpoints, and 2) ours improves image-level classification, instance-level detection, and pixel-level segmentation simultaneously. Experiments show that the proposed framework can improve BYOL baselines by 1.0% points in ImageNet linear classification, 1.2% points in COCO detection, and 3.1% points in PASCAL VOC segmentation. In comparison with the state-of-the-art methods, our Ladder-based model achieves competitive and balanced performance across all tested benchmarks without large degradation in any of them. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 332,618
2203.08182 | DSOL: A Fast Direct Sparse Odometry Scheme | In this paper, we describe Direct Sparse Odometry Lite (DSOL), an improved version of Direct Sparse Odometry (DSO). We propose several algorithmic and implementation enhancements which speed up computation by a significant factor (on average 5x) even on resource constrained platforms. The increase in speed allows us to process images at higher frame rates, which in turn provides better results on rapid motions. Our open-source implementation is available at https://github.com/versatran01/dsol. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 285,701 |
1510.07932 | Downlink Power Control in Two-Tier Cellular Networks with
Energy-Harvesting Small Cells as Stochastic Games | Energy harvesting in cellular networks is an emerging technique to enhance the sustainability of power-constrained wireless devices. This paper considers the co-channel deployment of a macrocell overlaid with small cells. The small cell base stations (SBSs) harvest energy from environmental sources whereas the macrocell base station (MBS) uses conventional power supply. Given a stochastic energy arrival process for the SBSs, we derive a power control policy for the downlink transmission of both MBS and SBSs such that they can achieve their objectives (e.g., maintain the signal-to-interference-plus-noise ratio (SINR) at an acceptable level) on a given transmission channel. We consider a centralized energy harvesting mechanism for SBSs, i.e., there is a central energy storage (CES) where energy is harvested and then distributed to the SBSs. When the number of SBSs is small, the game between the CES and the MBS is modeled as a single-controller stochastic game and the equilibrium policies are obtained as a solution of a quadratic programming problem. However, when the number of SBSs tends to infinity (i.e., a highly dense network), the centralized scheme becomes infeasible, and therefore, we use a mean field stochastic game to obtain a distributed power control policy for each SBS. By solving a system of partial differential equations, we derive the power control policy of SBSs given the knowledge of mean field distribution and the available harvested energy levels in the batteries of the SBSs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 48,245 |
2307.02623 | FLuID: Mitigating Stragglers in Federated Learning using Invariant
Dropout | Federated Learning (FL) allows machine learning models to train locally on individual mobile devices, synchronizing model updates via a shared server. This approach safeguards user privacy; however, it also generates a heterogeneous training environment due to the varying performance capabilities across devices. As a result, straggler devices with lower performance often dictate the overall training time in FL. In this work, we aim to alleviate this performance bottleneck due to stragglers by dynamically balancing the training load across the system. We introduce Invariant Dropout, a method that extracts a sub-model based on the weight update threshold, thereby minimizing potential impacts on accuracy. Building on this dropout technique, we develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID). FLuID offers a lightweight sub-model extraction to regulate computational intensity, thereby reducing the load on straggler devices without affecting model quality. Our method leverages neuron updates from non-straggler devices to construct a tailored sub-model for each straggler based on client performance profiling. Furthermore, FLuID can dynamically adapt to changes in stragglers as runtime conditions shift. We evaluate FLuID using five real-world mobile clients. The evaluations show that Invariant Dropout maintains baseline model efficiency while alleviating the performance bottleneck of stragglers through a dynamic, runtime approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 377,747 |
2406.03999 | Unveiling the Dynamics of Information Interplay in Supervised Learning | In this paper, we use matrix information theory as an analytical tool to study the dynamics of the information interplay between data representations and classification head vectors in the supervised learning process. Specifically, inspired by the theory of Neural Collapse, we introduce the matrix mutual information ratio (MIR) and matrix entropy difference ratio (HDR) to assess the interactions between data representations and classification heads in supervised learning, and we determine the theoretical optimal values for MIR and HDR when Neural Collapse happens. Our experiments show that MIR and HDR can effectively explain many phenomena occurring in neural networks, for example, the standard supervised training dynamics, linear mode connectivity, and the performance of label smoothing and pruning. Additionally, we use MIR and HDR to gain insights into the dynamics of grokking, an intriguing phenomenon observed in supervised training where the model demonstrates generalization capabilities long after it has learned to fit the training data. Furthermore, we introduce MIR and HDR as loss terms in supervised and semi-supervised learning to optimize the information interactions among samples and classification heads. The empirical results provide evidence of the method's effectiveness, demonstrating that the utilization of MIR and HDR not only aids in comprehending the dynamics throughout the training process but can also enhance the training procedure itself. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 461,480
1910.02673 | Interpretable Disentanglement of Neural Networks by Extracting
Class-Specific Subnetwork | We propose a novel perspective to understand deep neural networks in an interpretable disentanglement form. For each semantic class, we extract a class-specific functional subnetwork from the original full model, with compressed structure while maintaining comparable prediction performance. The structure representations of extracted subnetworks display a resemblance to their corresponding class semantic similarities. We also apply extracted subnetworks in visual explanation and adversarial example detection tasks by merely replacing the original full model with class-specific subnetworks. Experiments demonstrate that this intuitive operation can effectively improve explanation saliency accuracy for gradient-based explanation methods, and increase the detection rate for confidence score-based adversarial example detection methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 148,310 |
2108.06701 | Reference Service Model for Federated Identity Management | With the COVID-19 pandemic, people around the world increasingly work from home. Each natural person typically has several digital identities with different associated information. In recent years, various identity and access management approaches have gained traction, helping, for example, to access other organizations' services within trust boundaries. The resulting heterogeneity makes it highly complex for a participating entity to differentiate between these approaches and scenarios; combining them is even harder. Last but not least, various actors have a different understanding or perspective of terms like 'service' in this context. Our paper describes a reference service with standard components in generic federated identity management. This is realized with modern Enterprise Architecture using the framework ArchiMate. The proposed universal federated identity management service model (FIMSM) is applied to describe various federated identity management scenarios in a generic service-oriented way. The presented reference design is validated in multiple aspects and is easily applicable in numerous scenarios. | false | false | false | false | false | false | false | false | false | false | true | true | true | false | false | false | false | true | 250,690
2206.14092 | Learning the Solution Operator of Boundary Value Problems using Graph
Neural Networks | As an alternative to classical numerical solvers for partial differential equations (PDEs) subject to boundary value constraints, there has been a surge of interest in investigating neural networks that can solve such problems efficiently. In this work, we design a general solution operator for two different time-independent PDEs using graph neural networks (GNNs) and spectral graph convolutions. We train the networks on simulated data from a finite elements solver on a variety of shapes and inhomogeneities. In contrast to previous works, we focus on the ability of the trained operator to generalize to previously unseen scenarios. Specifically, we test generalization to meshes with different shapes and superposition of solutions for a different number of inhomogeneities. We find that training on a diverse dataset with lots of variation in the finite element meshes is a key ingredient for achieving good generalization results in all cases. With this, we believe that GNNs can be used to learn solution operators that generalize over a range of properties and produce solutions much faster than a generic solver. Our dataset, which we make publicly available, can be used and extended to verify the robustness of these models under varying conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 305,174 |
1905.04638 | Kyrix: Interactive Visual Data Exploration at Scale | Scalable interactive visual data exploration is crucial in many domains due to increasingly large datasets generated at rapid rates. Details-on-demand provides a useful interaction paradigm for exploring large datasets, where users start at an overview, find regions of interest, zoom in to see detailed views, zoom out and then repeat. This paradigm is the primary user interaction mode of widely-used systems such as Google Maps, Aperture Tiles and ForeCache. These earlier systems, however, are highly customized with hardcoded visual representations and optimizations. A more general framework is needed to facilitate the development of visual data exploration systems at scale. In this paper, we present Kyrix, an end-to-end system for developing scalable details-on-demand data exploration applications. Kyrix provides developers with a declarative model for easy specification of general visualizations. Behind the scenes, Kyrix utilizes a suite of performance optimization techniques to achieve a response time within 500ms for various user interactions. We also report results from a performance study which shows that a novel dynamic fetching scheme adopted by Kyrix outperforms tile-based fetching used in earlier systems. | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 130,522 |
1912.07618 | Deep Learning for Cardiologist-level Myocardial Infarction Detection in
Electrocardiograms | Myocardial infarction is the leading cause of death worldwide. In this paper, we design domain-inspired neural network models to detect myocardial infarction. First, we study the contribution of various leads. This systematic analysis, the first of its kind in the literature, indicates that out of 15 ECG leads, data from the v6, vz, and ii leads are critical to correctly identify myocardial infarction. Second, we use this finding and adapt the ConvNetQuake neural network model--originally designed to identify earthquakes--to attain state-of-the-art classification results for myocardial infarction, achieving $99.43\%$ classification accuracy on a record-wise split, and $97.83\%$ classification accuracy on a patient-wise split. These two results represent cardiologist-level performance for myocardial infarction detection after feeding only 10 seconds of raw ECG data into our model. Third, we show that our multi-ECG-channel neural network achieves cardiologist-level performance without the need for any manual feature extraction or data pre-processing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,642
1911.05426 | Mean-Field Transmission Power Control in Dense Networks, Part II --
Social Welfare Evaluation | We consider uplink power control in wireless communication when massive users compete over the channel resources. In Part I, we have formulated massive transmission power control contest in a mean-field game framework. In this part, our goal is to investigate whether the power-domain non-orthogonal multiple access (NOMA) protocol can regulate the non-cooperative channel access behaviors, i.e., steering the competition among the non-cooperative users in a direction with improved efficiency and fairness. It is compared with the CDMA protocol, which drives each user to fiercely compete against the population, hence the efficiency of channel usage is sacrificed. The existence and uniqueness of an equilibrium strategy under CDMA and NOMA have already been characterized in Part I. In this paper, we adopt the social welfare of the population as the performance metric, which is defined as the expectation of utility over the distribution of different types of channel users. It is shown that under the corresponding equilibrium strategies, NOMA outperforms CDMA in the social welfare achieved, which is illustrated through simulation with different unit price for power consumption. Moreover, it can be observed from numerical results that NOMA can improve the fairness of the achieved data rates among different users. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 153,256 |
2108.08211 | MBRS : Enhancing Robustness of DNN-based Watermarking by Mini-Batch of
Real and Simulated JPEG Compression | Based on the powerful feature extraction ability of deep learning architectures, deep-learning-based watermarking algorithms have recently been widely studied. The basic framework of such algorithms is an auto-encoder-like end-to-end architecture with an encoder, a noise layer and a decoder. The key to guaranteeing robustness is adversarial training with a differentiable noise layer. However, we found that none of the existing frameworks can well ensure robustness against JPEG compression, which is non-differentiable but is an essential and important image processing operation. To address this limitation, we propose a novel end-to-end training architecture, which utilizes a Mini-Batch of Real and Simulated JPEG compression (MBRS) to enhance JPEG robustness. Precisely, for different mini-batches, we randomly choose one of real JPEG, simulated JPEG and a noise-free layer as the noise layer. Besides, we suggest utilizing Squeeze-and-Excitation blocks, which can learn better features in the embedding and extraction stages, and propose a "message processor" to expand the message in a more appropriate way. Meanwhile, to improve robustness against crop attacks, we introduce an additive diffusion block into the network. The extensive experimental results demonstrate the superior performance of the proposed scheme compared with state-of-the-art algorithms. Under JPEG compression with quality factor Q=50, our models achieve a bit error rate of less than 0.01% for extracted messages, with PSNR larger than 36 for the encoded images, which shows well-enhanced robustness against JPEG attacks. Besides, under many other distortions such as Gaussian filtering, crop, cropout and dropout, the proposed framework also obtains strong robustness. The code, implemented in PyTorch \cite{2011torch7}, is available at https://github.com/jzyustc/MBRS. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 251,176
2303.15715 | Foundation Models and Fair Use | Existing foundation models are trained on copyrighted material. Deploying these models can pose both legal and ethical risks when data creators fail to receive appropriate attribution or compensation. In the United States and several other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine. However, there is a caveat: If the model produces output that is similar to copyrighted data, particularly in scenarios that affect the market of that data, fair use may no longer apply to the output of the model. In this work, we emphasize that fair use is not guaranteed, and additional work may be necessary to keep model development and deployment squarely in the realm of fair use. First, we survey the potential risks of developing and deploying foundation models based on copyrighted content. We review relevant U.S. case law, drawing parallels to existing and potential applications for generating text, source code, and visual art. Experiments confirm that popular foundation models can generate content considerably similar to copyrighted material. Second, we discuss technical mitigations that can help foundation models stay in line with fair use. We argue that more research is needed to align mitigation strategies with the current state of the law. Lastly, we suggest that the law and technical mitigations should co-evolve. For example, coupled with other policy mechanisms, the law could more explicitly consider safe harbors when strong technical tools are used to mitigate infringement harms. This co-evolution may help strike a balance between intellectual property and innovation, which speaks to the original goal of fair use. But we emphasize that the strategies we describe here are not a panacea and more work is needed to develop policies that address the potential harms of foundation models. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 354,593
2011.06777 | ROLL: Visual Self-Supervised Reinforcement Learning with Object
Reasoning | Current image-based reinforcement learning (RL) algorithms typically operate on the whole image without performing object-level reasoning. This leads to inefficient goal sampling and ineffective reward functions. In this paper, we improve upon previous visual self-supervised RL by incorporating object-level reasoning and occlusion reasoning. Specifically, we use unknown object segmentation to ignore distractors in the scene for better reward computation and goal generation; we further enable occlusion reasoning by employing a novel auxiliary loss and training scheme. We demonstrate that our proposed algorithm, ROLL (Reinforcement learning with Object Level Learning), learns dramatically faster and achieves better final performance compared with previous methods in several simulated visual control tasks. Project video and code are available at https://sites.google.com/andrew.cmu.edu/roll. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 206,337 |
2407.10274 | Enhancing Weakly-Supervised Histopathology Image Segmentation with
Knowledge Distillation on MIL-Based Pseudo-Labels | Segmenting tumors in histological images is vital for cancer diagnosis. While fully supervised models excel with pixel-level annotations, creating such annotations is labor-intensive and costly. Accurate histopathology image segmentation under weakly-supervised conditions with coarse-grained image labels is still a challenging problem. Although multiple instance learning (MIL) has shown promise in segmentation tasks, surprisingly, no previous pseudo-supervision methods have used MIL-based outputs as pseudo-masks for training. We suspect this stems from concerns over noise in MIL results affecting pseudo supervision quality. To explore the potential of leveraging MIL-based segmentation for pseudo supervision, we propose a novel distillation framework for histopathology image segmentation. This framework introduces an iterative fusion-knowledge distillation strategy, enabling the student model to learn directly from the teacher's comprehensive outcomes. Through dynamic role reversal between the fixed teacher and learnable student models and the incorporation of weighted cross-entropy loss for model optimization, our approach prevents performance deterioration and noise amplification during knowledge distillation. Experimental results on public histopathology datasets, Camelyon16 and Digestpath2019, demonstrate that our approach not only complements various MIL-based segmentation methods but also significantly enhances their performance. Additionally, our method achieves a new SOTA in the field. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 472,913
1711.00950 | Beyond normality: Learning sparse probabilistic graphical models in the
non-Gaussian setting | We present an algorithm to identify sparse dependence structure in continuous and non-Gaussian probability distributions, given a corresponding set of data. The conditional independence structure of an arbitrary distribution can be represented as an undirected graph (or Markov random field), but most algorithms for learning this structure are restricted to the discrete or Gaussian cases. Our new approach allows for more realistic and accurate descriptions of the distribution in question, and in turn better estimates of its sparse Markov structure. Sparsity in the graph is of interest as it can accelerate inference, improve sampling methods, and reveal important dependencies between variables. The algorithm relies on exploiting the connection between the sparsity of the graph and the sparsity of transport maps, which deterministically couple one probability measure to another. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 83,804 |
2204.00976 | FedGBF: An efficient vertical federated learning framework via gradient
boosting and bagging | Federated learning, conducive to solving data privacy and security problems, has attracted increasing attention recently. However, the existing federated boosting model sequentially builds a decision tree model with a weak base learner, resulting in redundant boosting steps and high interactive communication costs. In contrast, the federated bagging model saves time by building multiple decision trees in parallel, but it suffers from performance loss. With the aim of obtaining outstanding performance at a lower time cost, we propose a novel model in a vertically federated setting, termed Federated Gradient Boosting Forest (FedGBF). FedGBF simultaneously integrates the advantages of boosting and bagging by building decision trees in parallel as a base learner for boosting. With FedGBF, however, the problem of hyperparameter tuning arises. We therefore propose Dynamic FedGBF, which dynamically changes each forest's parameters and thus reduces the complexity. Finally, experiments on benchmark datasets demonstrate the superiority of our method. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | true | 289,459
1904.03961 | Filter Pruning by Switching to Neighboring CNNs with Good Attributes | Filter pruning is effective for reducing the computational costs of neural networks. Existing methods show that updating previously pruned filters enables large model capacity and achieves better performance. However, during the iterative pruning process, even if the network weights are updated to new values, the pruning criterion remains the same. In addition, when evaluating filter importance, only the magnitude information of the filters is considered. However, in neural networks, filters do not work individually; rather, they affect other filters. As a result, the magnitude information of each filter, which merely reflects the information of an individual filter itself, is not enough to judge filter importance. To solve the above problems, we propose Meta-attribute-based Filter Pruning (MFP). First, to expand the existing magnitude-based pruning criteria, we introduce a new set of criteria that consider the geometric distance between filters. Additionally, to explicitly assess the current state of the network, we adaptively select the most suitable criteria for pruning via a meta-attribute, a property of the neural network at its current state. Experiments on two image classification benchmarks validate our method. For ResNet-50 on ILSVRC-2012, we could reduce more than 50% of FLOPs with only 0.44% top-5 accuracy loss. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 126,895
1810.08944 | MS-BACO: A new Model Selection algorithm using Binary Ant Colony
Optimization for neural complexity and error reduction | Stabilizing the complexity of Feedforward Neural Networks (FNNs) for a given approximation task can be managed by defining an appropriate model magnitude, which is also greatly correlated with generalization quality and computational efficiency. However, deciding on the right level of model complexity can be highly challenging in FNN applications. In this paper, a new Model Selection algorithm using Binary Ant Colony Optimization (MS-BACO) is proposed in order to achieve the optimal FNN model in terms of neural complexity and cross-entropy error. MS-BACO is a meta-heuristic algorithm that treats the problem as a combinatorial optimization problem. By quantifying both the amount of correlation that exists among hidden neurons and the sensitivity of the FNN output to the hidden neurons using a sample-based sensitivity analysis method called the extended Fourier amplitude sensitivity test, the algorithm mostly tends to select the FNN model containing hidden neurons with the most distinct hyperplanes and high contribution percentages. The performance of the proposed algorithm with three different designs of heuristic information is investigated. Comparison of the findings verifies that the newly introduced algorithm is able to provide a more compact and accurate FNN model. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 110,949
2105.04807 | ORCEA: Object Recognition by Continuous Evidence Assimilation | ORCEA is a novel object recognition method applicable to objects describable by a generative model. The primary goal of ORCEA is to maintain a probability density distribution of possible matches over the object parameter space while continuously updating it with incoming evidence; detection and regression are by-products of this process. ORCEA can project primitive evidence of various types (edge elements, area patches, etc.) directly onto the object parameter space; this is made possible by the study phase, where ORCEA builds a probabilistic model, for each evidence type, that links evidence and the object parameters under which it was created. The detection phase consists of building the joint distribution of possible matches resulting from the set of given evidence, including possible grouping into signal/noise; no additional algorithmic steps are needed, as the resulting PDF encapsulates all knowledge about possible solutions. ORCEA represents the match distribution over the parameter space as a set of Gaussian distributions, each representing a concrete probabilistic hypothesis about the object, which can be used outside its scope as well. ORCEA was tested on synthetic images with varying levels of complexity and noise, and shows satisfactory results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 234,625
2102.10063 | Probabilistically Guaranteed Satisfaction of Temporal Logic Constraints
During Reinforcement Learning | We propose a novel constrained reinforcement learning method for finding optimal policies in Markov Decision Processes while satisfying temporal logic constraints with a desired probability throughout the learning process. An automata-theoretic approach is proposed to ensure the probabilistic satisfaction of the constraint in each episode, which is different from penalizing violations to achieve constraint satisfaction after a sufficiently large number of episodes. The proposed approach is based on computing a lower bound on the probability of constraint satisfaction and adjusting the exploration behavior as needed. We present theoretical results on the probabilistic constraint satisfaction achieved by the proposed approach. We also numerically demonstrate the proposed idea in a drone scenario, where the constraint is to perform periodically arriving pick-up and delivery tasks and the objective is to fly over high-reward zones to simultaneously perform aerial monitoring. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 220,967 |
2011.10577 | Deep learning insights into cosmological structure formation | The evolution of linear initial conditions present in the early universe into extended halos of dark matter at late times can be computed using cosmological simulations. However, a theoretical understanding of this complex process remains elusive; in particular, the role of anisotropic information in the initial conditions in establishing the final mass of dark matter halos remains a long-standing puzzle. Here, we build a deep learning framework to investigate this question. We train a three-dimensional convolutional neural network (CNN) to predict the mass of dark matter halos from the initial conditions, and quantify in full generality the amounts of information in the isotropic and anisotropic aspects of the initial density field about final halo masses. We find that anisotropies add a small, albeit statistically significant amount of information over that contained within spherical averages of the density field about final halo mass. However, the overall scatter in the final mass predictions does not change qualitatively with this additional information, only decreasing from 0.9 dex to 0.7 dex. Given such a small improvement, our results demonstrate that isotropic aspects of the initial density field essentially saturate the relevant information about final halo mass. Therefore, instead of searching for information directly encoded in initial conditions anisotropies, a more promising route to accurate, fast halo mass predictions is to add approximate dynamical information based e.g. on perturbation theory. More broadly, our results indicate that deep learning frameworks can provide a powerful tool for extracting physical insight into cosmological structure formation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 207,547 |
2306.04949 | Robust Learning with Progressive Data Expansion Against Spurious
Correlation | While deep learning models have shown remarkable performance in various tasks, they are susceptible to learning non-generalizable spurious features rather than the core features that are genuinely correlated to the true label. In this paper, beyond existing analyses of linear models, we theoretically examine the learning process of a two-layer nonlinear convolutional neural network in the presence of spurious features. Our analysis suggests that imbalanced data groups and easily learnable spurious features can lead to the dominance of spurious features during the learning process. In light of this, we propose a new training algorithm called PDE that efficiently enhances the model's robustness for a better worst-group performance. PDE begins with a group-balanced subset of training data and progressively expands it to facilitate the learning of the core features. Experiments on synthetic and real-world benchmark datasets confirm the superior performance of our method on models such as ResNets and Transformers. On average, our method achieves a 2.8% improvement in worst-group accuracy compared with the state-of-the-art method, while enjoying up to 10x faster training efficiency. Codes are available at https://github.com/uclaml/PDE. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,983 |
2407.20708 | Integer-Valued Training and Spike-Driven Inference Spiking Neural
Network for High-performance and Energy-efficient Object Detection | Brain-inspired Spiking Neural Networks (SNNs) have bio-plausibility and low-power advantages over Artificial Neural Networks (ANNs). Applications of SNNs are currently limited to simple classification tasks because of their poor performance. In this work, we focus on bridging the performance gap between ANNs and SNNs on object detection. Our design revolves around network architecture and spiking neuron. First, the overly complex module design causes spike degradation when the YOLO series is converted to the corresponding spiking version. We design a SpikeYOLO architecture to solve this problem by simplifying the vanilla YOLO and incorporating meta SNN blocks. Second, object detection is more sensitive to quantization errors in the conversion of membrane potentials into binary spikes by spiking neurons. To address this challenge, we design a new spiking neuron that activates Integer values during training while maintaining spike-driven by extending virtual timesteps during inference. The proposed method is validated on both static and neuromorphic object detection datasets. On the static COCO dataset, we obtain 66.2% mAP@50 and 48.9% mAP@50:95, which is +15.0% and +18.7% higher than the prior state-of-the-art SNN, respectively. On the neuromorphic Gen1 dataset, we achieve 67.2% mAP@50, which is +2.5% greater than the ANN with equivalent architecture, and the energy efficiency is improved by 5.7×. Code: https://github.com/BICLab/SpikeYOLO | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 477,258
2412.15998 | CNN-LSTM Hybrid Deep Learning Model for Remaining Useful Life Estimation | Remaining Useful Life (RUL) of a component or a system is defined as the length from the current time to the end of the useful life. Accurate RUL estimation plays a crucial role in Predictive Maintenance applications. Traditional regression methods, both linear and non-linear, have struggled to achieve high accuracy in this domain. While Convolutional Neural Networks (CNNs) have shown improved accuracy, they often overlook the sequential nature of the data, relying instead on features derived from sliding windows. Since RUL prediction inherently involves multivariate time series analysis, robust sequence learning is essential. In this work, we propose a hybrid approach combining Convolutional Neural Networks with Long Short-Term Memory (LSTM) networks for RUL estimation. Although CNN-based LSTM models have been applied to sequence prediction tasks in financial forecasting, this is the first attempt to adopt this approach for RUL estimation in prognostics. In this approach, CNN is first employed to efficiently extract features from the data, followed by LSTM, which uses these extracted features to predict RUL. This method effectively leverages sensor sequence information, uncovering hidden patterns within the data, even under multiple operating conditions and fault scenarios. Our results demonstrate that the hybrid CNN-LSTM model achieves the highest accuracy, offering a superior score compared to the other methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 519,333 |
2408.13723 | EMG-Based Hand Gesture Recognition through Diverse Domain Feature
Enhancement and Machine Learning-Based Approach | Surface electromyography (EMG) serves as a pivotal tool in hand gesture recognition and human-computer interaction, offering a non-invasive means of signal acquisition. This study presents a novel methodology for classifying hand gestures using EMG signals. To address the challenges associated with feature extraction, we explored 23 distinct morphological, time-domain, and frequency-domain feature extraction techniques. However, the substantial size of the feature set may introduce computational complexity issues that can hinder machine learning algorithm performance. We employ an efficient feature selection approach, specifically an extra tree classifier, to mitigate this. The selected potential features were fed into various machine learning-based classification algorithms, where our model achieved 97.43% accuracy with the KNN algorithm and the selected features. By leveraging a comprehensive feature extraction and selection strategy, our methodology enhances the accuracy and usability of EMG-based hand gesture recognition systems. The higher performance accuracy proves the effectiveness of the proposed model over the existing system. Keywords: EMG signal, machine learning approach, hand gesture recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 483,263
2207.10892 | Bi-directional Contrastive Learning for Domain Adaptive Semantic
Segmentation | We present a novel unsupervised domain adaptation method for semantic segmentation that generalizes a model trained with source images and corresponding ground-truth labels to a target domain. A key to domain adaptive semantic segmentation is to learn domain-invariant and discriminative features without target ground-truth labels. To this end, we propose a bi-directional pixel-prototype contrastive learning framework that minimizes intra-class variations of features for the same object class, while maximizing inter-class variations for different ones, regardless of domains. Specifically, our framework aligns pixel-level features and a prototype of the same object class in target and source images (i.e., positive pairs), respectively, sets them apart for different classes (i.e., negative pairs), and performs the alignment and separation processes toward the other direction with pixel-level features in the source image and a prototype in the target image. The cross-domain matching encourages domain-invariant feature representations, while the bidirectional pixel-prototype correspondences aggregate features for the same object class, providing discriminative features. To establish training pairs for contrastive learning, we propose to generate dynamic pseudo labels of target images using a non-parametric label transfer, that is, pixel-prototype correspondences across different domains. We also present a calibration method compensating class-wise domain biases of prototypes gradually during training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 309,431 |
2308.10641 | Statistical Analysis of Geometric Algorithms in Vehicular Visible Light
Positioning | Vehicular visible light positioning (VLP) methods find relative locations of vehicles by estimating the positions of intensity-modulated head/tail lights of one vehicle (target) with respect to another (ego). Estimation is done in two steps: 1) relative bearing or range of the transmitter-receiver link is measured over the received signal on the ego side, and 2) target position is estimated based on those measurements using a geometric algorithm that expresses position coordinates in terms of the bearing-range parameters. The primary source of statistical error for these non-linear algorithms is the channel noise on the received signals that contaminates parameter measurements with varying levels of sensitivity. In this paper, we present two such geometric vehicular VLP algorithms that were previously unexplored, compare their performance with state-of-the-art algorithms over simulations, and analyze theoretical performance of all algorithms against statistical channel noise by deriving the respective Cramer-Rao lower bounds. The two newly explored algorithms do not outperform existing state-of-the-art, but we present them alongside the statistical analyses for the sake of completeness and to motivate further research in vehicular VLP. Our main finding is that direct bearing-based algorithms provide higher accuracy against noise for estimating lateral position coordinates, and range-based algorithms provide higher accuracy in the longitudinal axis due to the non-linearity of the respective geometric algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 386,822 |
2001.09842 | Electric Field Propagation Through Singular Value Decomposition | We demonstrate that the singular value decomposition algorithm in conjunction with the fast Fourier transform or finite difference procedures provides a straightforward and accurate method for rapidly propagating electric fields in the one-way Helmholtz formalism. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 161,680 |
2310.07630 | Differentiable Euler Characteristic Transforms for Shape Classification | The Euler Characteristic Transform (ECT) has proven to be a powerful representation, combining geometrical and topological characteristics of shapes and graphs. However, the ECT was hitherto unable to learn task-specific representations. We overcome this issue and develop a novel computational layer that enables learning the ECT in an end-to-end fashion. Our method, the Differentiable Euler Characteristic Transform (DECT), is fast and computationally efficient, while exhibiting performance on a par with more complex models in both graph and point cloud classification tasks. Moreover, we show that this seemingly simple statistic provides the same topological expressivity as more complex topological deep learning layers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 399,046 |
2308.10320 | Hyper Association Graph Matching with Uncertainty Quantification for
Coronary Artery Semantic Labeling | Coronary artery disease (CAD) is one of the primary causes leading to death worldwide. Accurate extraction of individual arterial branches on invasive coronary angiograms (ICA) is important for stenosis detection and CAD diagnosis. However, deep learning-based models face challenges in generating semantic segmentation for coronary arteries due to the morphological similarity among different types of coronary arteries. To address this challenge, we propose an innovative approach using the hyper association graph-matching neural network with uncertainty quantification (HAGMN-UQ) for coronary artery semantic labeling on ICAs. The graph-matching procedure maps the arterial branches between two individual graphs, so that the unlabeled arterial segments are classified by the labeled segments, and the coronary artery semantic labeling is achieved. By incorporating the anatomical structural loss and uncertainty, our model achieved an accuracy of 0.9345 for coronary artery semantic labeling with a fast inference speed, leading to an effective and efficient prediction in real-time clinical decision-making scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,678 |