id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2501.07715 | Analyzing the Role of the DSO in Electricity Trading of VPPs via a
Stackelberg Game Model | The increasing penetration of distributed energy resources (DER) has sparked interest in promoting their participation in the power market. Here we consider a setting in which different virtual power plants (VPPs) with certain flexible resources take part in electricity trading, either by direct participation in the wholesale power market, or interfaced by the Distribution System Operator (DSO). Our goal is to examine the role and influence of the DSO as a stakeholder, for which we formulate a Stackelberg game via a bilevel optimization model: the DSO maximizes profits at the upper level, while VPPs minimize operating costs at the lower level. To solve this problem, we use the Karush-Kuhn-Tucker optimality conditions of the convex lower-level problems to obtain a single-level mixed-integer nonlinear program. The results show that the role of the DSO as an intermediary agent leads to a decrease in operating costs for the VPPs, while guaranteeing a profit for the DSO. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 524,477 |
2109.03821 | Recommend for a Reason: Unlocking the Power of Unsupervised
Aspect-Sentiment Co-Extraction | Compliments and concerns in reviews are valuable for understanding users' shopping interests and their opinions with respect to specific aspects of certain items. Existing review-based recommenders favor large and complex language encoders that can only learn latent and uninterpretable text representations. They lack explicit user attention and item property modeling, which however could provide valuable information beyond the ability to recommend items. Therefore, we propose a tightly coupled two-stage approach, including an Aspect-Sentiment Pair Extractor (ASPE) and an Attention-Property-aware Rating Estimator (APRE). Unsupervised ASPE mines Aspect-Sentiment pairs (AS-pairs) and APRE predicts ratings using AS-pairs as concrete aspect-level evidence. Extensive experiments on seven real-world Amazon Review Datasets demonstrate that ASPE can effectively extract AS-pairs which enable APRE to deliver superior accuracy over the leading baselines. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 254,184 |
2111.06271 | Multi-Resolution Elevation Mapping and Safe Landing Site Detection with
Applications to Planetary Rotorcraft | In this paper, we propose a resource-efficient approach to provide an autonomous UAV with an on-board perception method to detect safe, hazard-free landing sites during flights over complex 3D terrain. We aggregate 3D measurements acquired from a sequence of monocular images by a Structure-from-Motion approach into a local, robot-centric, multi-resolution elevation map of the overflown terrain, which fuses depth measurements according to their lateral surface resolution (pixel-footprint) in a probabilistic framework based on the concept of dynamic Level of Detail. Map aggregation only requires depth maps and the associated poses, which are obtained from an onboard Visual Odometry algorithm. An efficient landing site detection method then exploits the features of the underlying multi-resolution map to detect safe landing sites based on slope, roughness, and quality of the reconstructed terrain surface. The performance of the mapping and landing site detection modules is evaluated independently and jointly in simulated and real-world experiments in order to establish the efficacy of the proposed approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 266,028 |
cs/0701184 | Structure and Problem Hardness: Goal Asymmetry and DPLL Proofs in
SAT-Based Planning | In Verification and in (optimal) AI Planning, a successful method is to formulate the application as boolean satisfiability (SAT), and solve it with state-of-the-art DPLL-based procedures. There is a lack of understanding of why this works so well. Focussing on the Planning context, we identify a form of problem structure concerned with the symmetrical or asymmetrical nature of the cost of achieving the individual planning goals. We quantify this sort of structure with a simple numeric parameter called AsymRatio, ranging between 0 and 1. We run experiments in 10 benchmark domains from the International Planning Competitions since 2000; we show that AsymRatio is a good indicator of SAT solver performance in 8 of these domains. We then examine carefully crafted synthetic planning domains that allow control of the amount of structure, and that are clean enough for a rigorous analysis of the combinatorial search space. The domains are parameterized by size, and by the amount of structure. The CNFs we examine are unsatisfiable, encoding one planning step less than the length of the optimal plan. We prove upper and lower bounds on the size of the best possible DPLL refutations, under different settings of the amount of structure, as a function of size. We also identify the best possible sets of branching variables (backdoors). With minimum AsymRatio, we prove exponential lower bounds, and identify minimal backdoors of size linear in the number of variables. With maximum AsymRatio, we identify logarithmic DPLL refutations (and backdoors), showing a doubly exponential gap between the two structural extreme cases. The reasons for this behavior -- the proof arguments -- illuminate the prototypical patterns of structure causing the empirical behavior observed in the competition benchmarks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,119 |
math/0406077 | A tutorial introduction to the minimum description length principle | This tutorial provides an overview of and introduction to Rissanen's Minimum Description Length (MDL) Principle. The first chapter provides a conceptual, entirely non-technical introduction to the subject. It serves as a basis for the technical introduction given in the second chapter, in which all the ideas of the first chapter are made mathematically precise. The main ideas are discussed in great conceptual and technical detail. This tutorial is an extended version of the first two chapters of the collection "Advances in Minimum Description Length: Theory and Application" (edited by P.Grunwald, I.J. Myung and M. Pitt, to be published by the MIT Press, Spring 2005). | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 540,688 |
2303.08446 | Task-specific Fine-tuning via Variational Information Bottleneck for
Weakly-supervised Pathology Whole Slide Image Classification | While Multiple Instance Learning (MIL) has shown promising results in digital Pathology Whole Slide Image (WSI) classification, such a paradigm still faces performance and generalization problems due to challenges in high computational costs on Gigapixel WSIs and limited sample size for model training. To deal with the computation problem, most MIL methods utilize a frozen pretrained model from ImageNet to obtain representations first. This process may lose essential information owing to the large domain gap and hinder the generalization of model due to the lack of image-level training-time augmentations. Though Self-supervised Learning (SSL) proposes viable representation learning schemes, the improvement of the downstream task still needs to be further explored in the conversion from the task-agnostic features of SSL to the task-specifics under the partial label supervised learning. To alleviate the dilemma of computation cost and performance, we propose an efficient WSI fine-tuning framework motivated by the Information Bottleneck theory. The theory enables the framework to find the minimal sufficient statistics of WSI, thus supporting us to fine-tune the backbone into a task-specific representation only depending on WSI-level weak labels. The WSI-MIL problem is further analyzed to theoretically deduce our fine-tuning method. Our framework is evaluated on five pathology WSI datasets on various WSI heads. The experimental results of our fine-tuned representations show significant improvements in both accuracy and generalization compared with previous works. Source code will be available at https://github.com/invoker-LL/WSI-finetuning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,647 |
2108.04016 | Deep Learning methods for automatic evaluation of delayed
enhancement-MRI. The results of the EMIDEC challenge | A key factor for assessing the state of the heart after myocardial infarction (MI) is to measure whether the myocardium segment is viable after reperfusion or revascularization therapy. Delayed enhancement-MRI or DE-MRI, which is performed several minutes after injection of the contrast agent, provides high contrast between viable and nonviable myocardium and is therefore a method of choice to evaluate the extent of MI. To automatically assess myocardial status, the results of the EMIDEC challenge that focused on this task are presented in this paper. The challenge's main objectives were twofold. First, to evaluate if deep learning methods can distinguish between normal and pathological cases. Second, to automatically calculate the extent of myocardial infarction. The publicly available database consists of 150 exams divided into 50 cases with normal MRI after injection of a contrast agent and 100 cases with myocardial infarction (and then with a hyperenhanced area on DE-MRI), whatever their inclusion in the cardiac emergency department. Along with MRI, clinical characteristics are also provided. The obtained results issued from several works show that the automatic classification of an exam is a reachable task (the best method providing an accuracy of 0.92), and the automatic segmentation of the myocardium is possible. However, the segmentation of the diseased area needs to be improved, mainly due to the small size of these areas and the lack of contrast with the surrounding structures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 249,860 |
2403.17885 | Empirical Analysis of EIP-3675: Miner Dynamics, Transaction Fees, and
Transaction Time | The Ethereum Improvement Proposal 3675 (EIP-3675) marks a significant shift, transitioning from a Proof of Work (PoW) to a Proof of Stake (PoS) consensus mechanism. This transition resulted in a staggering 99.95% decrease in energy consumption. However, the transition prompts two critical questions: (1). How does EIP-3675 affect miners' dynamics? and (2). How do users determine priority fees, considering that paying too little may cause delays or non-inclusion, yet paying too much wastes money with little to no benefits? To address the first question, we present a comprehensive empirical study examining EIP-3675's effect on miner dynamics (i.e., miner participation, distribution, and the degree of randomness in miner selection). Our findings reveal that the transition has encouraged broader participation of miners in block append operation, resulting in a larger pool of unique miners ($\approx50\times$ PoW), and the change in miner distribution with the increased number of unique small category miners ($\approx60\times$ PoW). However, there is an unintended consequence: a reduction in the miner selection randomness, which signifies the negative impact of the transition to PoS-Ethereum on network decentralization. Regarding the second question, we employed regression-based machine learning models; the Gradient Boosting Regressor performed best in predicting priority fees, while the K-Neighbours Regressor was worst. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 441,675 |
2501.06887 | MedGrad E-CLIP: Enhancing Trust and Transparency in AI-Driven Skin
Lesion Diagnosis | As deep learning models gain traction in medical data, ensuring transparent and trustworthy decision-making is essential. In skin cancer diagnosis, while advancements in lesion detection and classification have improved accuracy, the black-box nature of these methods poses challenges in understanding their decision processes, leading to trust issues among physicians. This study leverages the CLIP (Contrastive Language-Image Pretraining) model, trained on different skin lesion datasets, to capture meaningful relationships between visual features and diagnostic criteria terms. To further enhance transparency, we propose a method called MedGrad E-CLIP, which builds on gradient-based E-CLIP by incorporating a weighted entropy mechanism designed for complex medical imaging like skin lesions. This approach highlights critical image regions linked to specific diagnostic descriptions. The developed integrated pipeline not only classifies skin lesions by matching corresponding descriptions but also adds an essential layer of explainability developed especially for medical data. By visually explaining how different features in an image relate to diagnostic criteria, this approach demonstrates the potential of advanced vision-language models in medical image analysis, ultimately improving transparency, robustness, and trust in AI-driven diagnostic systems. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 524,171 |
2012.14517 | Comparison of different CNNs for breast tumor classification from
ultrasound images | Breast cancer is one of the deadliest cancers worldwide. Timely detection could reduce mortality rates. In the clinical routine, classifying benign and malignant tumors from ultrasound (US) imaging is a crucial but challenging task. An automated method, which can deal with the variability of data, is therefore needed. In this paper, we compared different Convolutional Neural Networks (CNNs) and transfer learning methods for the task of automated breast tumor classification. The architectures investigated in this study were VGG-16 and Inception V3. Two different training strategies were investigated: the first one was using pretrained models as feature extractors and the second one was to fine-tune the pre-trained models. A total of 947 images were used, 587 corresponded to US images of benign tumors and 360 to malignant tumors. 678 images were used for the training and validation process, while 269 images were used for testing the models. Accuracy and Area Under the receiver operating characteristic Curve (AUC) were used as performance metrics. The best performance was obtained by fine-tuning VGG-16, with an accuracy of 0.919 and an AUC of 0.934. The obtained results open the opportunity for further investigation with a view to improving cancer detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 213,526 |
1804.02864 | Semantic Edge Detection with Diverse Deep Supervision | Semantic edge detection (SED), which aims at jointly extracting edges as well as their category information, has far-reaching applications in domains such as semantic segmentation, object proposal generation, and object recognition. SED naturally requires achieving two distinct supervision targets: locating fine detailed edges and identifying high-level semantics. Our motivation comes from the hypothesis that such distinct targets prevent state-of-the-art SED methods from effectively using deep supervision to improve results. To this end, we propose a novel fully convolutional neural network using diverse deep supervision (DDS) within a multi-task framework where bottom layers aim at generating category-agnostic edges, while top layers are responsible for the detection of category-aware semantic edges. To overcome the hypothesized supervision challenge, a novel information converter unit is introduced, whose effectiveness has been extensively evaluated on SBD and Cityscapes datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 94,514 |
2006.12495 | Using graph theory and social media data to assess cultural ecosystem
services in coastal areas: Method development and application | The use of social media (SM) data has emerged as a promising tool for the assessment of cultural ecosystem services (CES). Most studies have focused on the use of single SM platforms and on the analysis of photo content to assess the demand for CES. Here, we introduce a novel methodology for the assessment of CES using SM data through the application of graph theory network analyses (GTNA) on hashtags associated to SM posts and compare it to photo content analysis. We applied the proposed methodology on two SM platforms, Instagram and Twitter, on three worldwide known case study areas, namely Great Barrier Reef, Galapagos Islands and Easter Island. Our results indicate that the analysis of hashtags through graph theory offers similar capabilities to photo content analysis in the assessment of CES provision and the identification of CES providers. More importantly, GTNA provides greater capabilities at identifying relational values and eudaimonic aspects associated to nature, elusive aspects for photo content analysis. In addition, GTNA contributes to the reduction of the interpreter's bias associated to photo content analyses, since GTNA is based on the tags provided by the users themselves. The study also highlights the importance of considering data from different social media platforms, as the type of users and the information offered by these platforms can show different CES attributes. The ease of application and short computing processing times involved in the application of GTNA makes it a cost-effective method with the potential of being applied to large geographical scales. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 183,611 |
2404.08799 | Semantic Approach to Quantifying the Consistency of Diffusion Model
Image Generation | In this study, we identify the need for an interpretable, quantitative score of the repeatability, or consistency, of image generation in diffusion models. We propose a semantic approach, using a pairwise mean CLIP (Contrastive Language-Image Pretraining) score as our semantic consistency score. We applied this metric to compare two state-of-the-art open-source image generation diffusion models, Stable Diffusion XL and PixArt-α, and we found statistically significant differences between the semantic consistency scores for the models. Agreement between the Semantic Consistency Score selected model and aggregated human annotations was 94%. We also explored the consistency of SDXL and a LoRA-fine-tuned version of SDXL and found that the fine-tuned model had significantly higher semantic consistency in generated images. The Semantic Consistency Score proposed here offers a measure of image generation alignment, facilitating the evaluation of model architectures for specific tasks and aiding in informed decision-making regarding model selection. | true | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,405 |
2403.08106 | V-PRISM: Probabilistic Mapping of Unknown Tabletop Scenes | The ability to construct concise scene representations from sensor input is central to the field of robotics. This paper addresses the problem of robustly creating a 3D representation of a tabletop scene from a segmented RGB-D image. These representations are then critical for a range of downstream manipulation tasks. Many previous attempts to tackle this problem do not capture accurate uncertainty, which is required to subsequently produce safe motion plans. In this paper, we cast the representation of 3D tabletop scenes as a multi-class classification problem. To tackle this, we introduce V-PRISM, a framework and method for robustly creating probabilistic 3D segmentation maps of tabletop scenes. Our maps contain both occupancy estimates, segmentation information, and principled uncertainty measures. We evaluate the robustness of our method in (1) procedurally generated scenes using open-source object datasets, and (2) real-world tabletop data collected from a depth camera. Our experiments show that our approach outperforms alternative continuous reconstruction approaches that do not explicitly reason about objects in a multi-class formulation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 437,172 |
2004.09725 | TrueBranch: Metric Learning-based Verification of Forest Conservation
Projects | International stakeholders increasingly invest in offsetting carbon emissions, for example, via issuing Payments for Ecosystem Services (PES) to forest conservation projects. Issuing trusted payments requires a transparent monitoring, reporting, and verification (MRV) process of the ecosystem services (e.g., carbon stored in forests). The current MRV process, however, is either too expensive (on-ground inspection of forest) or inaccurate (satellite). Recent works propose low-cost and accurate MRV via automatically determining forest carbon from drone imagery, collected by the landowners. The automation of MRV, however, opens up the possibility that landowners report untruthful drone imagery. To be robust against untruthful reporting, we propose TrueBranch, a metric learning-based algorithm that verifies the truthfulness of drone imagery from forest conservation projects. TrueBranch aims to detect untruthfully reported drone imagery by matching it with public satellite imagery. Preliminary results suggest that nominal distance metrics are not sufficient to reliably detect untruthfully reported imagery. TrueBranch leverages metric learning to create a feature embedding in which truthfully and untruthfully collected imagery is easily distinguishable by distance thresholding. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 173,432 |
2105.08587 | Adaptive ABAC Policy Learning: A Reinforcement Learning Approach | With rapid advances in computing systems, there is an increasing demand for more effective and efficient access control (AC) approaches. Recently, Attribute Based Access Control (ABAC) approaches have been shown to be promising in fulfilling the AC needs of such emerging complex computing environments. An ABAC model grants access to a requester based on attributes of entities in a system and an authorization policy; however, its generality and flexibility come with a higher cost. Further, increasing complexities of organizational systems and the need for federated accesses to their resources make the task of AC enforcement and management much more challenging. In this paper, we propose an adaptive ABAC policy learning approach to automate the authorization management task. We model ABAC policy learning as a reinforcement learning problem. In particular, we propose a contextual bandit system, in which an authorization engine adapts an ABAC model through a feedback control loop; it relies on interacting with users/administrators of the system to receive their feedback that assists the model in making authorization decisions. We propose four methods for initializing the learning model and a planning approach based on attribute value hierarchy to accelerate the learning process. We focus on developing an adaptive ABAC policy learning model for a home IoT environment as a running example. We evaluate our proposed approach over real and synthetic data. We consider both complete and sparse datasets in our evaluations. Our experimental results show that the proposed approach achieves performance that is comparable to ones based on supervised learning in many scenarios and even outperforms them in several situations. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 235,804 |
2307.15282 | AC-Norm: Effective Tuning for Medical Image Analysis via Affine
Collaborative Normalization | Driven by the latest trend towards self-supervised learning (SSL), the paradigm of "pretraining-then-finetuning" has been extensively explored to enhance the performance of clinical applications with limited annotations. Previous literature on model finetuning has mainly focused on regularization terms and specific policy models, while the misalignment of channels between source and target models has not received sufficient attention. In this work, we revisited the dynamics of batch normalization (BN) layers and observed that the trainable affine parameters of BN serve as sensitive indicators of domain information. Therefore, Affine Collaborative Normalization (AC-Norm) is proposed for finetuning, which dynamically recalibrates the channels in the target model according to the cross-domain channel-wise correlations without adding extra parameters. Based on a single-step backpropagation, AC-Norm can also be utilized to measure the transferability of pretrained models. We evaluated AC-Norm against the vanilla finetuning and state-of-the-art fine-tuning methods on transferring diverse pretrained models to the diabetic retinopathy grade classification, retinal vessel segmentation, CT lung nodule segmentation/classification, CT liver-tumor segmentation and MRI cardiac segmentation tasks. Extensive experiments demonstrate that AC-Norm unanimously outperforms the vanilla finetuning by up to 4% improvement, even under significant domain shifts where the state-of-the-art methods bring no gains. We also prove the capability of AC-Norm in fast transferability estimation. Our code is available at https://github.com/EndoluminalSurgicalVision-IMR/ACNorm. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 382,211 |
2401.13282 | RefreshNet: Learning Multiscale Dynamics through Hierarchical Refreshing | Forecasting complex system dynamics, particularly for long-term predictions, is persistently hindered by error accumulation and computational burdens. This study presents RefreshNet, a multiscale framework developed to overcome these challenges, delivering an unprecedented balance between computational efficiency and predictive accuracy. RefreshNet incorporates convolutional autoencoders to identify a reduced order latent space capturing essential features of the dynamics, and strategically employs multiple recurrent neural network (RNN) blocks operating at varying temporal resolutions within the latent space, thus allowing the capture of latent dynamics at multiple temporal scales. The unique "refreshing" mechanism in RefreshNet allows coarser blocks to reset inputs of finer blocks, effectively controlling and alleviating error accumulation. This design demonstrates superiority over existing techniques regarding computational efficiency and predictive accuracy, especially in long-term forecasting. The framework is validated using three benchmark applications: the FitzHugh-Nagumo system, the Reaction-Diffusion equation, and Kuramoto-Sivashinsky dynamics. RefreshNet significantly outperforms state-of-the-art methods in long-term forecasting accuracy and speed, marking a significant advancement in modeling complex systems and opening new avenues in understanding and predicting their behavior. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 423,678 |
2409.00053 | Accelerometer-Based Multivariate Time-Series Dataset for Calf Behavior
Classification | Getting new insights on pre-weaned calf behavioral adaptation to routine challenges (transport, group relocation, etc.) and diseases (respiratory diseases, diarrhea, etc.) is a promising way to improve calf welfare in dairy farms. A classic approach to automatically monitoring behavior is to equip animals with accelerometers attached to neck collars and to develop machine learning models from accelerometer time-series. However, to be used for model development, data must be equipped with labels. Obtaining these labels requires annotating behaviors from direct observation or videos, a time-consuming and labor-intensive process. To address this challenge, we propose the ActBeCalf (Accelerometer Time-Series for Calf Behaviour classification) dataset: 30 pre-weaned dairy calves (Holstein Friesian and Jersey) were equipped with a 3D-accelerometer sensor attached to a neck-collar from one week of birth for 13 weeks. The calves were simultaneously filmed with a camera in each pen. At the end of the trial, behaviors were manually annotated from the videos using the Behavioral Observation Research Interactive Software (BORIS) by 3 observers using an ethogram with 23 behaviors. ActBeCalf contains 27.4 hours of accelerometer data aligned adequately with calf behaviors. The dataset includes the main behaviors, like lying, standing, walking, and running, and less prominent behaviors, such as sniffing, social interaction, and grooming. Finally, ActBeCalf was used for behavior classification with machine learning models: (i)two classes of behaviors, [active and inactive; model 1] and (ii)four classes of behaviors [running, lying, drinking milk, and 'other' class; model 2] to demonstrate its reliability. We got a balanced accuracy of 92% [model1] and 84% [model2]. ActBeCalf is a comprehensive and ready-to-use dataset for classifying pre-weaned calf behaviour from the acceleration time series. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 484,743 |
2106.02831 | A novel method for recommendation systems using invasive weed
optimization | One of the popular approaches in recommendation systems is Collaborative Filtering (CF). The most significant step in CF is choosing the appropriate set of users. For this purpose, similarity measures are usually used for computing the similarity between a specific user and the other users. This paper proposes a new invasive weed optimization (IWO) based CF approach that uses users' context to identify an important and effective set of users. By using a newly defined similarity measure based on both rating values and a measure called confidence, the proposed approach calculates the similarity between users and thus identifies and filters the most similar users to a specific user. It then uses IWO to calculate the importance degree of users and finally, by using the identified important users and their importance degrees, it predicts unknown ratings. To evaluate the proposed method, several experiments have been performed on two well-known real-world datasets and the results show that the proposed method improves state-of-the-art results by up to 15% in terms of Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 239,050 |
1705.03852 | Caching with Partial Adaptive Matching | We study the caching problem when we are allowed to match each user to one of a subset of caches after its request is revealed. We focus on non-uniformly popular content, specifically when the file popularities obey a Zipf distribution. We study two extremal schemes, one focusing on coded server transmissions while ignoring matching capabilities, and the other focusing on adaptive matching while ignoring potential coding opportunities. We derive the rates achieved by these schemes and characterize the regimes in which one outperforms the other. We also compare them to information-theoretic outer bounds, and finally propose a hybrid scheme that generalizes ideas from the two schemes and performs at least as well as either of them in most memory regimes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 73,241 |
2205.02215 | FedNest: Federated Bilevel, Minimax, and Compositional Optimization | Standard federated optimization methods successfully apply to stochastic problems with single-level structure. However, many contemporary ML problems -- including adversarial robustness, hyperparameter tuning, and actor-critic -- fall under nested bilevel programming that subsumes minimax and compositional optimization. In this work, we propose FedNest: a federated alternating stochastic gradient method to address general nested problems. We establish provable convergence rates for FedNest in the presence of heterogeneous data and introduce variations for bilevel, minimax, and compositional optimization. FedNest introduces multiple innovations including federated hypergradient computation and variance reduction to address inner-level heterogeneity. We complement our theory with experiments on hyperparameter and hyper-representation learning and minimax optimization that demonstrate the benefits of our method in practice. Code is available at https://github.com/ucr-optml/FedNest. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 294,874 |
2003.00146 | WaveQ: Gradient-Based Deep Quantization of Neural Networks through
Sinusoidal Adaptive Regularization | As deep neural networks make their way into different domains, their compute efficiency is becoming a first-order constraint. Deep quantization, which reduces the bitwidth of the operations (below 8 bits), offers a unique opportunity as it can reduce both the storage and compute requirements of the network super-linearly. However, if not employed with diligence, this can lead to significant accuracy loss. Due to the strong inter-dependence between layers and their differing characteristics across the same network, choosing an optimal bitwidth at a per-layer granularity is not straightforward. As such, deep quantization opens a large hyper-parameter space, the exploration of which is a major challenge. We propose a novel sinusoidal regularization, called SINAREQ, for deep quantized training. Leveraging the sinusoidal properties, we seek to learn multiple quantization parameterizations in conjunction during the gradient-based training process. Specifically, we learn (i) a per-layer quantization bitwidth along with (ii) a scale factor through learning the period of the sinusoidal function. At the same time, we exploit the periodicity, differentiability, and the local convexity profile in sinusoidal functions to automatically propel (iii) network weights towards values quantized at levels that are jointly determined. We show how SINAREQ balances compute efficiency and accuracy, and provides a heterogeneous bitwidth assignment for quantization of a large variety of deep networks (AlexNet, CIFAR-10, MobileNet, ResNet-18, ResNet-20, SVHN, and VGG-11) that virtually preserves the accuracy. Furthermore, we carry out experimentation using fixed homogeneous bitwidths with 3- to 5-bit assignment and show the versatility of SINAREQ in enhancing quantized training algorithms (DoReFa and WRPN) with about 4.8% accuracy improvements on average, and then outperforming multiple state-of-the-art techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 166,204 |
2006.15213 | Simulating human interactions in supermarkets to measure the risk of
COVID-19 contagion at scale | Taking the context of simulating a retail environment using agent based modelling, a theoretical model is presented that describes the probability distribution of customer "collisions" using a novel space transformation to the Torus $Tor^2$. A method for generating the distribution of customer paths based on historical basket data is developed. Finally a calculation of the number of simulations required for statistical significance is developed. An implementation of this modelling approach to run simulations on multiple store geometries at industrial scale is being developed with current progress detailed in the technical appendix. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 184,442 |
2412.12669 | Adaptive Prototype Replay for Class Incremental Semantic Segmentation | Class incremental semantic segmentation (CISS) aims to segment new classes during continual steps while preventing the forgetting of old knowledge. Existing methods alleviate catastrophic forgetting by replaying distributions of previously learned classes using stored prototypes or features. However, they overlook a critical issue: in CISS, the representation of class knowledge is updated continuously through incremental learning, whereas prototype replay methods maintain fixed prototypes. This mismatch between updated representation and fixed prototypes limits the effectiveness of the prototype replay strategy. To address this issue, we propose the Adaptive prototype replay (Adapter) for CISS in this paper. Adapter comprises an adaptive deviation compensation (ADC) strategy and an uncertainty-aware constraint (UAC) loss. Specifically, the ADC strategy dynamically updates the stored prototypes based on the estimated representation shift distance to match the updated representation of old classes. The UAC loss reduces prediction uncertainty, aggregating discriminative features to aid in generating compact prototypes. Additionally, we introduce a compensation-based prototype similarity discriminative (CPD) loss to ensure adequate differentiation between similar prototypes, thereby enhancing the efficiency of the adaptive prototype replay strategy. Extensive experiments on Pascal VOC and ADE20K datasets demonstrate that Adapter achieves state-of-the-art results and proves effective across various CISS tasks, particularly in challenging multi-step scenarios. The code and model are available at https://github.com/zhu-gl-ux/Adapter. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 517,974 |
1905.09446 | Rate-Distortion-Memory Trade-offs in Heterogeneous Caching Networks | Caching at the wireless edge can be used to keep up with the increasing demand for high-definition wireless video streaming. By prefetching popular content into memory at wireless access points or end-user devices, requests can be served locally, relieving strain on expensive backhaul. In addition, using network coding allows the simultaneous serving of distinct cache misses via common coded multicast transmissions, resulting in significantly larger load reductions compared to those achieved with traditional delivery schemes. Most prior works simply treat video content as fixed-size files that users would like to fully download. This work is motivated by the fact that video can be coded in a scalable fashion and that the decoded video quality depends on the number of layers a user receives in sequence. Using a Gaussian source model, caching and coded delivery methods are designed to minimize the squared error distortion at end-user devices in a rate-limited caching network. The framework is very general and accounts for heterogeneous cache sizes, video popularities and user-file play-back qualities. As part of the solution, a new decentralized scheme for lossy cache-aided delivery subject to preset user distortion targets is proposed, which further generalizes prior literature to a setting with file heterogeneity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 131,737 |
2203.10484 | Hierarchical Inductive Transfer for Continual Dialogue Learning | Pre-trained models have achieved excellent performance on the dialogue task. However, for the continual increase of online chit-chat scenarios, directly fine-tuning these models for each of the new tasks not only explodes the capacity of the dialogue system on the embedded devices but also causes knowledge forgetting on pre-trained models and knowledge interference among diverse dialogue tasks. In this work, we propose a hierarchical inductive transfer framework to learn and deploy the dialogue skills continually and efficiently. First, we introduce the adapter module into pre-trained models for learning new dialogue tasks. As the only trainable module, it is beneficial for the dialogue system on the embedded devices to acquire new dialogue skills with negligible additional parameters. Then, for alleviating knowledge interference between tasks yet benefiting the regularization between them, we further design hierarchical inductive transfer that enables new tasks to use general knowledge in the base adapter without being misled by diverse knowledge in task-specific adapters. Empirical evaluation and analysis indicate that our framework obtains comparable performance under deployment-friendly model capacity. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 286,559 |
1808.05535 | Combining time-series and textual data for taxi demand prediction in
event areas: a deep learning approach | Accurate time-series forecasting is vital for numerous areas of application such as transportation, energy, finance, economics, etc. However, while modern techniques are able to explore large sets of temporal data to build forecasting models, they typically neglect valuable information that is often available under the form of unstructured text. Although this data is in a radically different format, it often contains contextual explanations for many of the patterns that are observed in the temporal data. In this paper, we propose two deep learning architectures that leverage word embeddings, convolutional layers and attention mechanisms for combining text information with time-series data. We apply these approaches for the problem of taxi demand forecasting in event areas. Using publicly available taxi data from New York, we empirically show that by fusing these two complementary cross-modal sources of information, the proposed models are able to significantly reduce the error in the forecasts. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 105,369 |
2412.17387 | Singular Value Scaling: Efficient Generative Model Compression via
Pruned Weights Refinement | While pruning methods effectively maintain model performance without extra training costs, they often focus solely on preserving crucial connections, overlooking the impact of pruned weights on subsequent fine-tuning or distillation, leading to inefficiencies. Moreover, most compression techniques for generative models have been developed primarily for GANs, tailored to specific architectures like StyleGAN, and research into compressing Diffusion models has just begun. Even more, these methods are often applicable only to GANs or Diffusion models, highlighting the need for approaches that work across both model types. In this paper, we introduce Singular Value Scaling (SVS), a versatile technique for refining pruned weights, applicable to both model types. Our analysis reveals that pruned weights often exhibit dominant singular vectors, hindering fine-tuning efficiency and leading to suboptimal performance compared to random initialization. Our method enhances weight initialization by minimizing the disparities between singular values of pruned weights, thereby improving the fine-tuning process. This approach not only guides the compressed model toward superior solutions but also significantly speeds up fine-tuning. Extensive experiments on StyleGAN2, StyleGAN3 and DDPM demonstrate that SVS improves compression performance across model types without additional training costs. Our code is available at: https://github.com/LAIT-CVLab/Singular-Value-Scaling. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 519,962 |
2404.19442 | Does Generative AI speak Nigerian-Pidgin?: Issues about
Representativeness and Bias for Multilingualism in LLMs | Nigeria is a multilingual country with 500+ languages. Naija is a Nigerian Pidgin spoken by approximately 120M speakers and it is a mixed language (e.g., English, Portuguese, Yoruba, Hausa and Igbo). Although it has mainly been a spoken language until recently, there are some online platforms (e.g., Wikipedia), publishing in written Naija as well. West African Pidgin English (WAPE) is also spoken in Nigeria and it is used by BBC to broadcast news on the internet to a wider audience not only in Nigeria but also in other West African countries (e.g., Cameroon and Ghana). Through statistical analyses and Machine Translation experiments, our paper shows that these two pidgin varieties do not represent each other (i.e., there are linguistic differences in word order and vocabulary) and Generative AI operates only based on WAPE. In other words, Naija is underrepresented in Generative AI, and it is hard to teach LLMs with few examples. In addition to the statistical analyses, we also provide historical information on both pidgins as well as insights from the interviews conducted with volunteer Wikipedia contributors in Naija. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 450,646 |
1901.07136 | Linear Index Coding With Multiple Senders and Extension to a Cellular
Network | In this paper, linear index codes with multiple senders are studied, where every receiver receives encoded messages from all senders. A new fitting matrix for the multiple senders is proposed and it is proved that the minimum rank of the proposed fitting matrices is the optimal codelength of linear index codes for the multiple senders. In addition, a new type of a side information graph related with the optimal codelength is proposed and whether given side information is critical or not is studied. Furthermore, linear index codes for the cellular network scenario are studied, where each receiver can receive a subset of sub-codewords. Since some receivers cannot receive the entire codeword in the cellular network scenario, the encoding method based on the fitting matrix has to be modified. In the cellular network scenario, we propose another fitting matrix and prove that an optimal generator matrix can be found based on these fitting matrices. In addition, some properties on the optimal codelength of linear index codes for the cellular network case are studied. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 119,155 |
1910.09777 | Self-Correction for Human Parsing | Labeling pixel-level masks for fine-grained semantic segmentation tasks, e.g. human parsing, remains a challenging task. The ambiguous boundary between different semantic parts and those categories with similar appearance are usually confusing, leading to unexpected noise in ground truth masks. To tackle the problem of learning with label noise, this work introduces a purification strategy, called Self-Correction for Human Parsing (SCHP), to progressively promote the reliability of the supervised labels as well as the learned models. In particular, starting from a model trained with inaccurate annotations as initialization, we design a cyclically learning scheduler to infer more reliable pseudo-masks by iteratively aggregating the current learned model with the former optimal one in an online manner. Besides, those correspondingly corrected labels can in turn further boost the model performance. In this way, the models and the labels will reciprocally become more robust and accurate during the self-correction learning cycles. Benefiting from the superiority of SCHP, we achieve the best performance on two popular single-person human parsing benchmarks, including LIP and Pascal-Person-Part datasets. Our overall system ranks 1st in the CVPR2019 LIP Challenge. Code is available at https://github.com/PeikeLi/Self-Correction-Human-Parsing. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 150,303 |
2402.04362 | Neural Networks Learn Statistics of Increasing Complexity | The distributional simplicity bias (DSB) posits that neural networks learn low-order moments of the data distribution first, before moving on to higher-order correlations. In this work, we present compelling new evidence for the DSB by showing that networks automatically learn to perform well on maximum-entropy distributions whose low-order statistics match those of the training set early in training, then lose this ability later. We also extend the DSB to discrete domains by proving an equivalence between token $n$-gram frequencies and the moments of embedding vectors, and by finding empirical evidence for the bias in LLMs. Finally we use optimal transport methods to surgically edit the low-order statistics of one class to match those of another, and show that early-training networks treat the edited samples as if they were drawn from the target class. Code is available at https://github.com/EleutherAI/features-across-time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 427,416 |
1808.00111 | Probability Calibration Trees | Obtaining accurate and well calibrated probability estimates from classifiers is useful in many applications, for example, when minimising the expected cost of classifications. Existing methods of calibrating probability estimates are applied globally, ignoring the potential for improvements by applying a more fine-grained model. We propose probability calibration trees, a modification of logistic model trees that identifies regions of the input space in which different probability calibration models are learned to improve performance. We compare probability calibration trees to two widely used calibration methods---isotonic regression and Platt scaling---and show that our method results in lower root mean squared error on average than both methods, for estimates produced by a variety of base learners. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 104,306 |
2405.15706 | The Impact of Geometric Complexity on Neural Collapse in Transfer
Learning | Many of the recent remarkable advances in computer vision and language models can be attributed to the success of transfer learning via the pre-training of large foundation models. However, a theoretical framework which explains this empirical success is incomplete and remains an active area of research. Flatness of the loss surface and neural collapse have recently emerged as useful pre-training metrics which shed light on the implicit biases underlying pre-training. In this paper, we explore the geometric complexity of a model's learned representations as a fundamental mechanism that relates these two concepts. We show through experiments and theory that mechanisms which affect the geometric complexity of the pre-trained network also influence the neural collapse. Furthermore, we show how this effect of the geometric complexity generalizes to the neural collapse of new classes as well, thus encouraging better performance on downstream tasks, particularly in the few-shot setting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,053 |
2404.14463 | DAIC-WOZ: On the Validity of Using the Therapist's prompts in Automatic
Depression Detection from Clinical Interviews | Automatic depression detection from conversational data has gained significant interest in recent years. The DAIC-WOZ dataset, interviews conducted by a human-controlled virtual agent, has been widely used for this task. Recent studies have reported enhanced performance when incorporating interviewer's prompts into the model. In this work, we hypothesize that this improvement might be mainly due to a bias present in these prompts, rather than the proposed architectures and methods. Through ablation experiments and qualitative analysis, we discover that models using interviewer's prompts learn to focus on a specific region of the interviews, where questions about past experiences with mental health issues are asked, and use them as discriminative shortcuts to detect depressed participants. In contrast, models using participant responses gather evidence from across the entire interview. Finally, to highlight the magnitude of this bias, we achieve a 0.90 F1 score by intentionally exploiting it, the highest result reported to date on this dataset using only textual information. Our findings underline the need for caution when incorporating interviewers' prompts into models, as they may inadvertently learn to exploit targeted prompts, rather than learning to characterize the language and behavior that are genuinely indicative of the patient's mental health condition. | false | false | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | 448,705 |
2309.09477 | How Much Freedom Does An Effectiveness Metric Really Have? | It is tempting to assume that because effectiveness metrics have free choice to assign scores to search engine result pages (SERPs) there must thus be a similar degree of freedom as to the relative order that SERP pairs can be put into. In fact that second freedom is, to a considerable degree, illusory. That's because if one SERP in a pair has been given a certain score by a metric, fundamental ordering constraints in many cases then dictate that the score for the second SERP must be either not less than, or not greater than, the score assigned to the first SERP. We refer to these fixed relationships as innate pairwise SERP orderings. Our first goal in this work is to describe and defend those pairwise SERP relationship constraints, and tabulate their relative occurrence via both exhaustive and empirical experimentation. We then consider how to employ such innate pairwise relationships in IR experiments, leading to a proposal for a new measurement paradigm. Specifically, we argue that tables of results in which many different metrics are listed for champion versus challenger system comparisons should be avoided; and that instead a single metric be argued for in principled terms, with any relationships identified by that metric then reinforced via an assessment of the innate relationship as to whether other metrics - indeed, all other metrics - are likely to yield the same system-vs-system outcome. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 392,626 |
2205.01649 | Learning Enriched Features for Fast Image Restoration and Enhancement | Given a degraded input image, image restoration aims to recover the missing high-quality image content. Numerous applications demand effective image restoration, e.g., computational photography, surveillance, autonomous vehicles, and remote sensing. Significant advances in image restoration have been made in recent years, dominated by convolutional neural networks (CNNs). The widely-used CNN-based methods typically operate either on full-resolution or on progressively low-resolution representations. In the former case, spatial details are preserved but the contextual information cannot be precisely encoded. In the latter case, generated outputs are semantically reliable but spatially less accurate. This paper presents a new architecture with a holistic goal of maintaining spatially-precise high-resolution representations through the entire network, and receiving complementary contextual information from the low-resolution representations. The core of our approach is a multi-scale residual block containing the following key elements: (a) parallel multi-resolution convolution streams for extracting multi-scale features, (b) information exchange across the multi-resolution streams, (c) non-local attention mechanism for capturing contextual information, and (d) attention based multi-scale feature aggregation. Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details. Extensive experiments on six real image benchmark datasets demonstrate that our method, named as MIRNet-v2 , achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement. The source code and pre-trained models are available at https://github.com/swz30/MIRNetv2 | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 294,663 |
2406.00628 | Transforming Computer Security and Public Trust Through the Exploration
of Fine-Tuning Large Language Models | Large language models (LLMs) have revolutionized how we interact with machines. However, this technological advancement has been paralleled by the emergence of "Mallas," malicious services operating underground that exploit LLMs for nefarious purposes. Such services create malware, phishing attacks, and deceptive websites, escalating the cybersecurity threat landscape. This paper delves into the proliferation of Mallas by examining the use of various pre-trained language models and their efficiency and vulnerabilities when misused. Building on a dataset from the Common Vulnerabilities and Exposures (CVE) program, it explores fine-tuning methodologies to generate code and explanatory text related to identified vulnerabilities. This research aims to shed light on the operational strategies and exploitation techniques of Mallas, leading to the development of more secure and trustworthy AI applications. The paper concludes by emphasizing the need for further research, enhanced safeguards, and ethical guidelines to mitigate the risks associated with the malicious application of LLMs. | false | false | false | false | false | false | true | false | true | false | false | false | true | true | false | false | false | false | 459,952 |
1903.12398 | Identification and Analysis of Cascading Failures in Power Grids with
Protective Actions | This paper aims to identify and analyze the initial contingencies or disturbances that could lead to the worst-case cascading failures of power grids. An optimal control approach is proposed to determine the most disruptive disturbances on the branch of power transmission system by regarding the disturbances as the control inputs. Moreover, protective actions such as load shedding and generation dispatch are taken into account in a convex optimization framework to prevent the cascading outages of power grids. In theory, the necessary conditions for identifying the most disruptive disturbances are obtained by solving an integrated system of algebraic equations. Finally, numerical simulations are carried out to validate the proposed approach on the IEEE RTS 24 Bus System. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 125,722 |
2308.05078 | On The Capacity of Low-Rank Dyadic Fading Channels in the Low-SNR Regime | We characterize the capacity of a low-rank wireless channel with varying fading severity at low signal-to-noise ratios (SNRs). The channel rank deficiency is achieved by incorporating the pinhole condition. The capacity degradation with fading severity at high SNRs is well known: the probability of deep fades increases significantly with higher fading severity resulting in poor performance. Our analysis of the dyadic pinhole channel at low-SNR shows a very counter-intuitive result that - \emph{higher fading severity enables higher capacity at sufficiently low SNR}. The underlying reason is that at low SNRs, ergodic capacity depends crucially on the probability distribution of channel peaks (tail distribution); for the pinhole channel, the tail distribution improves with fading severity. This allows a transmitter operating at low SNR to exploit channel peaks `more efficiently' and hence improves spectral efficiency. We derive a new key result quantifying the above dependence for the double-Nakagami-$m$ fading pinhole channel - the capacity ${C} \propto (m_T m_R)^{-1}$ at low SNR, where $m_T m_R$ is the product of the severity parameters of the fadings involved. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 384,676 |
0811.0623 | Algorithmic complexity and randomness in elastic solids | A system comprised of an elastic solid and its response to an external random force sequence is shown to behave based on the principles of the theory of algorithmic complexity and randomness. The solid distorts the randomness of an input force sequence in a way proportional to its algorithmic complexity. We demonstrate this by numerical analysis of a one-dimensional vibrating elastic solid (the system) on which we apply a maximally random input force. The level of complexity of the system is controlled via external parameters. The output response is the field of displacements observed at several positions on the body. The algorithmic complexity and stochasticity of the resulting output displacement sequence is measured and compared against the complexity of the system. The results show that the higher the system complexity the more random-deficient the output sequence. This agrees with the theory introduced in [16] which states that physical systems such as this behave as algorithmic selection-rules which act on random actions in their surroundings. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 2,626 |
2206.06372 | Acceleration of cerebral blood flow and arterial transit time maps
estimation from multiple post-labeling delay arterial spin-labeled MRI via
deep learning | Purpose: Arterial spin labeling (ASL) perfusion imaging indicates direct and absolute measurement of cerebral blood flow (CBF). Arterial transit time (ATT) is a related physiological parameter reflecting the duration for the labeled spins to reach the brain region of interest. Multiple post-labeling delays (PLDs) can provide robust measures of both CBF and ATT, allowing for optimization of regional CBF modeling based on ATT. The prolonged acquisition time can potentially reduce the quality and accuracy of the CBF and ATT estimation. We proposed a novel network to significantly reduce the number of PLDs with higher signal-to-noise ratio (SNR). Method: CBF and ATT estimations were performed for one PLD and two PLDs separately. Each model was trained independently to learn the nonlinear transformation from perfusion weighted image (PWI) to CBF and ATT images. Results: Both one-PLD and two-PLD models outperformed the conventional method visually on CBF, and the two-PLD model showed more accurate structure on ATT estimation. The proposed method significantly reduces the number of PLDs from 6 to 2 on ATT and even to a single PLD on CBF without sacrificing the SNR. Conclusion: It is feasible to generate high-quality CBF and ATT maps with reduced PLDs using deep learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,355 |
2307.08686 | An R package for parametric estimation of causal effects | This article explains the usage of the R package CausalModels, which is publicly available on the Comprehensive R Archive Network. While packages are available for sufficiently estimating causal effects, no package provides a collection of structural models using the conventional statistical approach developed by Hernan and Robins (2020). CausalModels addresses this deficiency of software in R concerning causal inference by offering tools for methods that account for biases in observational data without requiring extensive statistical knowledge. These methods should not be ignored and may be more appropriate or efficient in solving particular problems. While implementations of these statistical models are distributed among a number of causal packages, CausalModels introduces a simple and accessible framework for a consistent modeling pipeline among a variety of statistical methods for estimating causal effects in a single R package. It consists of common methods including standardization, IP weighting, G-estimation, outcome regression, instrumental variables and propensity matching. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 379,901 |
2009.04083 | Generalizing Complex/Hyper-complex Convolutions to Vector Map
Convolutions | We show that the core reasons that complex and hypercomplex valued neural networks offer improvements over their real-valued counterparts are the weight sharing mechanism and the treatment of multidimensional data as a single entity. Their algebra linearly combines the dimensions, making each dimension related to the others. However, both are constrained to a set number of dimensions, two for complex and four for quaternions. Here we introduce novel vector map convolutions which capture both of these properties provided by complex/hypercomplex convolutions, while dropping the unnatural dimensionality constraints they impose. This is achieved by introducing a system that mimics the unique linear combination of input dimensions, such as the Hamilton product for quaternions. We perform three experiments to show that these novel vector map convolutions seem to capture all the benefits of complex and hyper-complex networks, such as their ability to capture internal latent relations, while avoiding the dimensionality restriction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | 194,960 |
2502.12627 | DAMamba: Vision State Space Model with Dynamic Adaptive Scan | State space models (SSMs) have recently garnered significant attention in computer vision. However, due to the unique characteristics of image data, adapting SSMs from natural language processing to computer vision has not outperformed the state-of-the-art convolutional neural networks (CNNs) and Vision Transformers (ViTs). Existing vision SSMs primarily leverage manually designed scans to flatten image patches into sequences locally or globally. This approach disrupts the original semantic spatial adjacency of the image and lacks flexibility, making it difficult to capture complex image structures. To address this limitation, we propose Dynamic Adaptive Scan (DAS), a data-driven method that adaptively allocates scanning orders and regions. This enables more flexible modeling capabilities while maintaining linear computational complexity and global modeling capacity. Based on DAS, we further propose the vision backbone DAMamba, which significantly outperforms current state-of-the-art vision Mamba models in vision tasks such as image classification, object detection, instance segmentation, and semantic segmentation. Notably, it surpasses some of the latest state-of-the-art CNNs and ViTs. Code will be available at https://github.com/ltzovo/DAMamba. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 534,979 |
1806.01433 | On Computing the Multiplicity of Cycles in Bipartite Graphs Using the
Degree Distribution and the Spectrum of the Graph | Counting short cycles in bipartite graphs is a fundamental problem of interest in the analysis and design of low-density parity-check (LDPC) codes. The vast majority of research in this area is focused on algorithmic techniques. Most recently, Blake and Lin proposed a computational technique to count the number of cycles of length $g$ in a bi-regular bipartite graph, where $g$ is the girth of the graph. The information required for the computation is the node degree and the multiplicity of the nodes on both sides of the partition, as well as the eigenvalues of the adjacency matrix of the graph (graph spectrum). In this paper, the result of Blake and Lin is extended to compute the number of cycles of length $g+2, \ldots, 2g-2$, for bi-regular bipartite graphs, as well as the number of $4$-cycles and $6$-cycles in irregular and half-regular bipartite graphs, with $g \geq 4$ and $g \geq 6$, respectively. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 99,555 |
1701.07657 | Operationalizing Declarative and Procedural Knowledge: a Benchmark on
Logic Programming Petri Nets (LPPNs) | Modelling, specifying and reasoning about complex systems requires processing declarative and procedural aspects of the target domain in an integrated fashion. The paper reports on an experiment conducted with a propositional version of Logic Programming Petri Nets (LPPNs), a notation extending Petri Nets with logic programming constructs. Two semantics are presented: a denotational semantics that fully maps the notation to ASP via Event Calculus; and a hybrid operational semantics that separately processes the causal mechanisms via Petri nets, and the constraints associated with objects and events via Answer Set Programming (ASP). These two alternative specifications enable an empirical evaluation in terms of computational efficiency. Experimental results show that the hybrid semantics is more efficient w.r.t. sequences, whereas the two semantics follow the same behaviour w.r.t. branchings (although the denotational one performs better in absolute terms). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 67,330 |
2311.02256 | Image Recognition of Oil Leakage Area Based on Logical Semantic
Discrimination | Implementing precise detection of oil leaks in peak load equipment through image analysis can significantly enhance inspection quality and ensure the system's safety and reliability. However, challenges such as varying shapes of oil-stained regions, background noise, and fluctuating lighting conditions complicate the detection process. To address this, the integration of logical rule-based discrimination into image recognition has been proposed. This approach involves recognizing the spatial relationships among objects to semantically segment images of oil spills using a Mask RCNN network. The process begins with histogram equalization to enhance the original image, followed by the use of Mask RCNN to identify the preliminary positions and outlines of oil tanks, the ground, and areas of potential oil contamination. Subsequent to this identification, the spatial relationships between these objects are analyzed. Logical rules are then applied to ascertain whether the suspected areas are indeed oil spills. This method's effectiveness has been confirmed by testing on images captured from peak power equipment in the field. The results indicate that this approach can adeptly tackle the challenges in identifying oil-contaminated areas, showing a substantial improvement in accuracy compared to existing methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 405,359 |
2412.02493 | RelayGS: Reconstructing Dynamic Scenes with Large-Scale and Complex
Motions via Relay Gaussians | Reconstructing dynamic scenes with large-scale and complex motions remains a significant challenge. Recent techniques like Neural Radiance Fields and 3D Gaussian Splatting (3DGS) have shown promise but still struggle with scenes involving substantial movement. This paper proposes RelayGS, a novel method based on 3DGS, specifically designed to represent and reconstruct highly dynamic scenes. Our RelayGS learns a complete 4D representation with canonical 3D Gaussians and a compact motion field, consisting of three stages. First, we learn a fundamental 3DGS from all frames, ignoring temporal scene variations, and use a learnable mask to separate the highly dynamic foreground from the minimally moving background. Second, we replicate multiple copies of the decoupled foreground Gaussians from the first stage, each corresponding to a temporal segment, and optimize them using pseudo-views constructed from multiple frames within each segment. These Gaussians, termed Relay Gaussians, act as explicit relay nodes, simplifying and breaking down large-scale motion trajectories into smaller, manageable segments. Finally, we jointly learn the scene's temporal motion and refine the canonical Gaussians learned from the first two stages. We conduct thorough experiments on two dynamic scene datasets featuring large and complex motions, where our RelayGS outperforms state-of-the-arts by more than 1 dB in PSNR, and successfully reconstructs real-world basketball game scenes in a much more complete and coherent manner, whereas previous methods usually struggle to capture the complex motion of players. Code will be publicly available at https://github.com/gqk/RelayGS | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 513,556 |
2311.17810 | Coloring the Past: Neural Historical Buildings Reconstruction from
Archival Photography | Historical buildings are a treasure and milestone of human cultural heritage. Reconstructing the 3D models of these buildings holds significant value. The rapid development of neural rendering methods makes it possible to recover the 3D shape based only on archival photographs. However, this task presents considerable challenges due to the limitations of such datasets. Historical photographs are often limited in number and the scenes in these photos might have changed over time. The radiometric quality of these images is also often sub-optimal. To address these challenges, we introduce an approach to reconstruct the geometry of historical buildings, employing volumetric rendering techniques. We leverage dense point clouds as a geometric prior and introduce a color appearance embedding loss to recover the color of the building given limited available color images. We aim for our work to spark increased interest and focus on preserving historical buildings. Thus, we also introduce a new historical dataset of the Hungarian National Theater, providing a new benchmark for the reconstruction method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,423 |
1801.09042 | Image2GIF: Generating Cinemagraphs using Recurrent Deep Q-Networks | Given a still photograph, one can imagine how dynamic objects might move against a static background. This idea has been actualized in the form of cinemagraphs, where the motion of particular objects within a still image is repeated, giving the viewer a sense of animation. In this paper, we learn computational models that can generate cinemagraph sequences automatically given a single image. To generate cinemagraphs, we explore combining generative models with a recurrent neural network and deep Q-networks to enhance the power of sequence generation. To enable and evaluate these models we make use of two datasets, one synthetically generated and the other containing real video generated cinemagraphs. Both qualitative and quantitative evaluations demonstrate the effectiveness of our models on the synthetic and real datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 89,035 |
1808.07382 | Convergence of Cubic Regularization for Nonconvex Optimization under KL
Property | Cubic-regularized Newton's method (CR) is a popular algorithm that guarantees to produce a second-order stationary solution for solving nonconvex optimization problems. However, existing understandings of the convergence rate of CR are conditioned on special types of geometrical properties of the objective function. In this paper, we explore the asymptotic convergence rate of CR by exploiting the ubiquitous Kurdyka-Lojasiewicz (KL) property of nonconvex objective functions. Specifically, we characterize the asymptotic convergence rate of various types of optimality measures for CR including function value gap, variable distance gap, gradient norm and least eigenvalue of the Hessian matrix. Our results fully characterize the diverse convergence behaviors of these optimality measures in the full parameter regime of the KL property. Moreover, we show that the obtained asymptotic convergence rates of CR are order-wise faster than those of first-order gradient descent algorithms under the KL property. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 105,732 |
2102.12310 | Hyperspectral Denoising Using Unsupervised Disentangled Spatio-Spectral
Deep Priors | Image denoising is often empowered by accurate prior information. In recent years, data-driven neural network priors have shown promising performance for RGB natural image denoising. Compared to classic handcrafted priors (e.g., sparsity and total variation), the "deep priors" are learned using a large number of training samples -- which can accurately model the complex image generating process. However, data-driven priors are hard to acquire for hyperspectral images (HSIs) due to the lack of training data. A remedy is to use the so-called unsupervised deep image prior (DIP). Under the unsupervised DIP framework, it is hypothesized and empirically demonstrated that proper neural network structures are reasonable priors of certain types of images, and the network weights can be learned without training data. Nonetheless, the most effective unsupervised DIP structures were proposed for natural images instead of HSIs. The performance of unsupervised DIP-based HSI denoising is limited by a couple of serious challenges, namely, network structure design and network complexity. This work puts forth an unsupervised DIP framework that is based on the classic spatio-spectral decomposition of HSIs. Utilizing the so-called linear mixture model of HSIs, two types of unsupervised DIPs, i.e., U-Net-like network and fully-connected networks, are employed to model the abundance maps and endmembers contained in the HSIs, respectively. This way, empirically validated unsupervised DIP structures for natural images can be easily incorporated for HSI denoising. Besides, the decomposition also substantially reduces network complexity. An efficient alternating optimization algorithm is proposed to handle the formulated denoising problem. Semi-real and real data experiments are employed to showcase the effectiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 221,691 |
2106.14608 | Ensembling Shift Detectors: an Extensive Empirical Evaluation | The term dataset shift refers to the situation where the data used to train a machine learning model is different from where the model operates. While several types of shifts naturally occur, existing shift detectors are usually designed to address only a specific type of shift. We propose a simple yet powerful technique to ensemble complementary shift detectors, while tuning the significance level of each detector's statistical test to the dataset. This enables a more robust shift detection, capable of addressing all different types of shift, which is essential in real-life settings where the precise shift type is often unknown. This approach is validated by a large-scale statistically sound benchmark study over various synthetic shifts applied to real-world structured datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 243,457 |
2304.02104 | Deep learning for diffusion in porous media | We adopt convolutional neural networks (CNN) to predict the basic properties of the porous media. Two different media types are considered: one mimics the sand packings, and the other mimics the systems derived from the extracellular space of biological tissues. The Lattice Boltzmann Method is used to obtain the labeled data necessary for performing supervised learning. We distinguish two tasks. In the first, networks based on the analysis of the system's geometry predict porosity and effective diffusion coefficient. In the second, networks reconstruct the concentration map. In the first task, we propose two types of CNN models: the C-Net and the encoder part of the U-Net. Both networks are modified by adding a self-normalization module [Graczyk \textit{et al.}, Sci Rep 12, 10583 (2022)]. The models predict with reasonable accuracy but only within the data type, they are trained on. For instance, the model trained on sand packings-like samples overshoots or undershoots for biological-like samples. In the second task, we propose the usage of the U-Net architecture. It accurately reconstructs the concentration fields. In contrast to the first task, the network trained on one data type works well for the other. For instance, the model trained on sand packings-like samples works perfectly on biological-like samples. Eventually, for both types of the data, we fit exponents in the Archie's law to find tortuosity that is used to describe the dependence of the effective diffusion on porosity. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 356,321 |
2201.05051 | Speech Resources in the Tamasheq Language | In this paper we present two datasets for Tamasheq, a developing language mainly spoken in Mali and Niger. These two datasets were made available for the IWSLT 2022 low-resource speech translation track, and they consist of collections of radio recordings from daily broadcast news in Niger (Studio Kalangou) and Mali (Studio Tamani). We share (i) a massive amount of unlabeled audio data (671 hours) in five languages: French from Niger, Fulfulde, Hausa, Tamasheq and Zarma, and (ii) a smaller 17 hours parallel corpus of audio recordings in Tamasheq, with utterance-level translations in the French language. All this data is shared under the Creative Commons BY-NC-ND 3.0 license. We hope these resources will inspire the speech community to develop and benchmark models using the Tamasheq language. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 275,270 |
1805.11978 | Kirchhoff-Love shell theory based on tangential differential calculus | The Kirchhoff-Love shell theory is recast in the frame of the tangential differential calculus (TDC) where differential operators on surfaces are formulated based on global, three-dimensional coordinates. As a consequence, there is no need for a parametrization of the shell geometry implying curvilinear surface coordinates as used in the classical shell theory. Therefore, the proposed TDC-based formulation also applies to shell geometries which are zero-isosurfaces as in the level-set method where no parametrization is available in general. For the discretization, the TDC-based formulation may be used based on surface meshes implying element-wise parametrizations. Then, the results are equivalent to those obtained based on the classical theory. However, it may also be used in recent finite element approaches such as the TraceFEM and CutFEM where shape functions are generated on a background mesh without any need for a parametrization. Numerical results presented herein are achieved with isogeometric analysis for classical and new benchmark tests. Higher-order convergence rates in the residual errors are achieved when the physical fields are sufficiently smooth. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 99,064 |
2407.05316 | Leveraging Topological Guidance for Improved Knowledge Distillation | Deep learning has shown its efficacy in extracting useful features to solve various computer vision tasks. However, when the structure of the data is complex and noisy, capturing effective information to improve performance is very difficult. To this end, topological data analysis (TDA) has been utilized to derive useful representations that can contribute to improving performance and robustness against perturbations. Despite its effectiveness, the requirements for large computational resources and significant time consumption in extracting topological features through TDA are critical problems when implementing it on small devices. To address this issue, we propose a framework called Topological Guidance-based Knowledge Distillation (TGD), which uses topological features in knowledge distillation (KD) for image classification tasks. We utilize KD to train a superior lightweight model and provide topological features with multiple teachers simultaneously. We introduce a mechanism for integrating features from different teachers and reducing the knowledge gap between teachers and the student, which aids in improving performance. We demonstrate the effectiveness of our approach through diverse empirical evaluations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,919 |
2401.06349 | ADAPT: Alzheimer Diagnosis through Adaptive Profiling Transformers | Automated diagnosis of Alzheimer Disease (AD) from brain imaging, such as magnetic resonance imaging (MRI), has become increasingly important and has attracted the community to contribute many deep learning methods. However, many of these methods face a trade-off: 3D models tend to be complicated, while 2D models cannot capture the full 3D intricacies of the data. In this paper, we introduce a new model structure for diagnosing AD that can compete with the performance of 3D models while essentially being a 2D method (and thus computationally efficient). While the core idea lies in the new perspective of cutting the 3D images into multiple 2D slices along three dimensions, we introduce multiple components that further benefit the model in this new perspective, including adaptive selection of the number of slices in each dimension and a new attention mechanism. In addition, we also introduce a morphology augmentation, which barely adds computational load but can help improve diagnosis performance due to its alignment with the pathology of AD. We name our method ADAPT, which stands for Alzheimer Diagnosis through Adaptive Profiling Transformers. We test our model from a practical perspective (the testing domains do not appear in the training one): the diagnosis accuracy favors our ADAPT, while ADAPT uses fewer parameters than most 3D models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 421,127 |
1708.07336 | Active Sampling of Pairs and Points for Large-scale Linear Bipartite
Ranking | Bipartite ranking is a fundamental ranking problem that learns to order relevant instances ahead of irrelevant ones. The pair-wise approach for bipartite ranking constructs a quadratic number of pairs to solve the problem, which is infeasible for large-scale data sets. The point-wise approach, albeit more efficient, often results in inferior performance. That is, it is difficult to conduct bipartite ranking accurately and efficiently at the same time. In this paper, we develop a novel active sampling scheme within the pair-wise approach to conduct bipartite ranking efficiently. The scheme is inspired by active learning and can reach a competitive ranking performance while focusing only on a small subset of the many pairs during training. Moreover, we propose a general Combined Ranking and Classification (CRC) framework to accurately conduct bipartite ranking. The framework unifies point-wise and pair-wise approaches and is simply based on the idea of treating each instance point as a pseudo-pair. Experiments on 14 real-world large-scale data sets demonstrate that the proposed algorithm of Active Sampling within CRC, when coupled with a linear Support Vector Machine, usually outperforms state-of-the-art point-wise and pair-wise ranking approaches in terms of both accuracy and efficiency. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 79,470 |
2410.11646 | Feature-guided score diffusion for sampling conditional densities | Score diffusion methods can learn probability densities from samples. The score of the noise-corrupted density is estimated using a deep neural network, which is then used to iteratively transport a Gaussian white noise density to a target density. Variants for conditional densities have been developed, but correct estimation of the corresponding scores is difficult. We avoid these difficulties by introducing an algorithm that guides the diffusion with a projected score. The projection pushes the image feature vector towards the feature vector centroid of the target class. The projected score and the feature vectors are learned by the same network. Specifically, the image feature vector is defined as the spatial averages of the channel activations in select layers of the network. Optimizing the projected score for the denoising loss encourages image feature vectors of each class to cluster around their centroids. It also leads to the separation of the centroids. We show that these centroids provide a low-dimensional Euclidean embedding of the class conditional densities. We demonstrate that the algorithm can generate high-quality and diverse samples from the conditioning class. Conditional generation can be performed using feature vectors interpolated between those of the training set, demonstrating out-of-distribution generalization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 498,654 |
2405.18369 | PromptWizard: Task-Aware Prompt Optimization Framework | Large language models (LLMs) have transformed AI across diverse domains, with prompting being central to their success in guiding model outputs. However, manual prompt engineering is both labor-intensive and domain-specific, necessitating automated solutions. We introduce PromptWizard, a novel, fully automated framework for discrete prompt optimization, utilizing a self-evolving, self-adapting mechanism. Through a feedback-driven critique and synthesis process, PromptWizard achieves an effective balance between exploration and exploitation, iteratively refining both prompt instructions and in-context examples to generate human-readable, task-specific prompts. This guided approach systematically improves prompt quality, resulting in superior performance across 45 tasks. PromptWizard excels even with limited training data, smaller LLMs, and various LLM architectures. Additionally, our cost analysis reveals a substantial reduction in API calls, token usage, and overall cost, demonstrating PromptWizard's efficiency, scalability, and advantages over existing prompt optimization strategies. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 458,396 |
2001.09994 | A Primer on Domain Adaptation | Standard supervised machine learning assumes that the distribution of the source samples used to train an algorithm is the same as the one of the target samples on which it is supposed to make predictions. However, as any data scientist will confirm, this is hardly ever the case in practice. The set of statistical and numerical methods that deal with such situations is known as domain adaptation, a field with a long and rich history. The myriad of methods available and the unfortunate lack of a clear and universally accepted terminology can however make the topic rather daunting for the newcomer. Therefore, rather than aiming at completeness, which leads to exhibiting a tedious catalog of methods, this pedagogical review aims at a coherent presentation of four important special cases: (1) prior shift, a situation in which training samples were selected according to their labels without any knowledge of their actual distribution in the target, (2) covariate shift which deals with a situation where training examples were picked according to their features but with some selection bias, (3) concept shift where the dependence of the labels on the features differs between the source and the target, and last but not least (4) subspace mapping which deals with a situation where features in the target have been subjected to an unknown distortion with respect to the source features. In each case we first build an intuition, next we provide the appropriate mathematical framework and eventually we describe a practical application. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 161,716 |
2310.15470 | Continual Event Extraction with Semantic Confusion Rectification | We study continual event extraction, which aims to extract incessantly emerging event information while avoiding forgetting. We observe that the semantic confusion on event types stems from the annotations of the same text being updated over time. The imbalance between event types even aggravates this issue. This paper proposes a novel continual event extraction model with semantic confusion rectification. We mark pseudo labels for each sentence to alleviate semantic confusion. We transfer pivotal knowledge between current and previous models to enhance the understanding of event types. Moreover, we encourage the model to focus on the semantics of long-tailed event types by leveraging other associated types. Experimental results show that our model outperforms state-of-the-art baselines and is proficient in imbalanced datasets. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 402,323 |
2410.01458 | From Reward Shaping to Q-Shaping: Achieving Unbiased Learning with
LLM-Guided Knowledge | Q-shaping is an extension of Q-value initialization and serves as an alternative to reward shaping for incorporating domain knowledge to accelerate agent training, thereby improving sample efficiency by directly shaping Q-values. This approach is both general and robust across diverse tasks, allowing for immediate impact assessment while guaranteeing optimality. We evaluated Q-shaping across 20 different environments using a large language model (LLM) as the heuristic provider. The results demonstrate that Q-shaping significantly enhances sample efficiency, achieving a \textbf{16.87\%} improvement over the best baseline in each environment and a \textbf{253.80\%} improvement compared to LLM-based reward shaping methods. These findings establish Q-shaping as a superior and unbiased alternative to conventional reward shaping in reinforcement learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,782 |
2111.04030 | A Weyl Criterion for Finite-State Dimension and Applications | Finite-state dimension, introduced early in this century as a finite-state version of classical Hausdorff dimension, is a quantitative measure of the lower asymptotic density of information in an infinite sequence over a finite alphabet, as perceived by finite automata. Finite-state dimension is a robust concept that now has equivalent formulations in terms of finite-state gambling, lossless finite-state data compression, finite-state prediction, entropy rates, and automatic Kolmogorov complexity. The Schnorr-Stimm dichotomy theorem gave the first automata-theoretic characterization of normal sequences, which had been studied in analytic number theory since Borel defined them. This theorem implies that a sequence (or a real number having this sequence as its base-b expansion) is normal if and only if it has finite-state dimension 1. One of the most powerful classical tools for investigating normal numbers is the Weyl criterion, which characterizes normality in terms of exponential sums. Such sums are well-studied objects with many connections to other aspects of analytic number theory, and this has made use of the Weyl criterion especially fruitful. This raises the question of whether the Weyl criterion can be generalized from finite-state dimension 1 to arbitrary finite-state dimensions, thereby making it a quantitative tool for studying data compression, prediction, etc. This paper does exactly this. We extend the Weyl criterion from a characterization of sequences with finite-state dimension 1 to a criterion that characterizes every finite-state dimension. This turns out not to be a routine generalization of the original Weyl criterion. Even though exponential sums may diverge for non-normal numbers, finite-state dimension can be characterized in terms of the dimensions of the subsequence limits of the exponential sums. We demonstrate the utility of our criterion through examples. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 265,355 |
2111.05120 | A Deep Learning Technique using Low Sampling rate for residential Non
Intrusive Load Monitoring | Individual device loads and energy consumption feedback is one of the important approaches for persuading users to save energy in residences. This can help in identifying faulty devices and energy wasted by devices left on when unused. The main challenge is to identify and estimate the energy consumption of individual devices without intrusive sensors on each device. Non-intrusive load monitoring (NILM), or energy disaggregation, is a blind source separation problem which requires a system to estimate the electricity usage of individual appliances from the aggregated household energy consumption. In this paper, we propose a novel deep neural network-based approach for performing load disaggregation on low frequency power data obtained from residential households. We combine a series of one-dimensional Convolutional Neural Networks and Long Short Term Memory (1D CNN-LSTM) to extract features that can identify active appliances and retrieve their power consumption given the aggregated household power value. We used CNNs to extract features from main readings in a given time frame and then used those features to classify if a given appliance is active at that time period or not. Following that, the extracted features are used to model a generation problem using LSTM. We train the LSTM to generate the disaggregated energy consumption of a particular appliance. Our neural network is capable of generating detailed demand-side feedback, providing vital insights to the end-user about their electricity consumption. The algorithm was designed for low power offline devices such as ESP32. Empirical calculations show that our model outperforms the state-of-the-art on the Reference Energy Disaggregation Dataset (REDD). | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,700 |
2405.19213 | EdgeSight: Enabling Modeless and Cost-Efficient Inference at the Edge | Traditional ML inference is evolving toward modeless inference, which abstracts the complexity of model selection from users, allowing the system to automatically choose the most appropriate model for each request based on accuracy and resource requirements. While prior studies have focused on modeless inference within data centers, this paper tackles the pressing need for cost-efficient modeless inference at the edge -- particularly within its unique constraints of limited device memory, volatile network conditions, and restricted power consumption. To overcome these challenges, we propose EdgeSight, a system that provides cost-efficient modeless serving for diverse DNNs at the edge. EdgeSight employs an edge-data center (edge-DC) architecture, utilizing confidence scaling to reduce the number of model options while meeting diverse accuracy requirements. Additionally, it supports lossy inference in volatile network environments. Our experimental results show that EdgeSight outperforms existing systems by up to 1.6x in P99 latency for modeless services. Furthermore, our FPGA prototype demonstrates similar performance at certain accuracy levels, with a power consumption reduction of up to 3.34x. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | true | 458,793 |
2202.11354 | Low-complexity Joint Beamforming for RIS-Aided Multi-User Downlink over
Correlated Channels | This paper considers the reconfigurable intelligent surface (RIS)-assisted multi-user communications, where an RIS is used to assist the base station (BS) for serving multiple users. The RIS consisting of passive reflecting elements can manipulate the reflected direction of the incoming electromagnetic waves by adjusting the phase shifts of the reflecting elements. An alternating optimization (AO) based approach is commonly used to determine the phase shifts of the RIS elements. While AO-based approaches have shown significant gain of RIS, the complexity is quite high due to the coupled structure of the cascade channel from the BS through RIS to the user. In addition, the sub-wavelength structure of the RIS introduces spatial correlation that may cause strong interference to users. To handle severe multi-user interference over correlated channels, we consider adaptive user grouping previously proposed for massive multi-input and multi-output (MIMO) systems and propose two low-complexity beamforming design methods, depending on whether the grouping result is taken into account. Simulation results demonstrate that the proposed methods achieve a superior sum rate compared with the case without user grouping. Besides, the proposed methods can perform similarly to the AO-based approach but with much lower complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 281,862 |
2404.14642 | Uncertainty Quantification on Graph Learning: A Survey | Graphical models have demonstrated their exceptional capabilities across numerous applications, such as social networks, citation networks, and online recommendation systems. Despite these successes, their performance, confidence, and trustworthiness are often limited by the inherent randomness of data in nature and the challenges of accurately capturing and modeling real-world complexities. This has increased interest in developing uncertainty quantification (UQ) techniques tailored to graphical models. In this survey, we comprehensively examine these existing works on UQ in graphical models, focusing on key aspects such as foundational knowledge, sources, representation, handling, and measurement of uncertainty. This survey distinguishes itself from most existing UQ surveys by specifically concentrating on UQ in graphical models, particularly probabilistic graphical models (PGMs) and graph neural networks (GNNs). We elaborately categorize recent work into two primary areas: uncertainty representation and uncertainty handling. By offering a comprehensive overview of the current landscape, including both established methodologies and emerging trends, we aim to bridge gaps in understanding and highlight key challenges and opportunities in the field. Through in-depth discussion of existing works and promising directions for future research, we believe this survey serves as a valuable resource for researchers, inspiring them to cope with uncertainty issues in both academic research and real-world applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 448,750 |
1502.06874 | An Upper Bound on the Minimum Distance of LDPC Codes over GF(q) | In [1] a syndrome counting based upper bound on the minimum distance of regular binary LDPC codes is given. In this paper we extend the bound to the case of irregular and generalized LDPC codes over GF(q). The comparison to the lower bound for LDPC codes over GF(q) and to the upper bound for non-binary codes is done. The new bound is shown to lie under the Gilbert-Varshamov bound at high rates. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,531 |
1506.00999 | Combining Two And Three-Way Embeddings Models for Link Prediction in
Knowledge Bases | This paper tackles the problem of endogenous link prediction for Knowledge Base completion. Knowledge Bases can be represented as directed graphs whose nodes correspond to entities and edges to relationships. Previous attempts either consist of powerful systems with high capacity to model complex connectivity patterns, which unfortunately usually end up overfitting on rare relationships, or of approaches that trade capacity for simplicity in order to fairly model all relationships, frequent or not. In this paper, we propose Tatec, a happy medium obtained by complementing a high-capacity model with a simpler one, both pre-trained separately and then combined. We present several variants of this model with different kinds of regularization and combination strategies and show that this approach outperforms existing methods on different types of relationships by achieving state-of-the-art results on four benchmarks from the literature. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 43,741 |
2502.10894 | Bridging the Sim-to-Real Gap for Athletic Loco-Manipulation | Achieving athletic loco-manipulation on robots requires moving beyond traditional tracking rewards - which simply guide the robot along a reference trajectory - to task rewards that drive truly dynamic, goal-oriented behaviors. Commands such as "throw the ball as far as you can" or "lift the weight as quickly as possible" compel the robot to exhibit the agility and power inherent in athletic performance. However, training solely with task rewards introduces two major challenges: these rewards are prone to exploitation (reward hacking), and the exploration process can lack sufficient direction. To address these issues, we propose a two-stage training pipeline. First, we introduce the Unsupervised Actuator Net (UAN), which leverages real-world data to bridge the sim-to-real gap for complex actuation mechanisms without requiring access to torque sensing. UAN mitigates reward hacking by ensuring that the learned behaviors remain robust and transferable. Second, we use a pre-training and fine-tuning strategy that leverages reference trajectories as initial hints to guide exploration. With these innovations, our robot athlete learns to lift, throw, and drag with remarkable fidelity from simulation to reality. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 534,094 |
2007.11257 | Deep-VFX: Deep Action Recognition Driven VFX for Short Video | Human motion is a key means of communicating information. Short-form mobile video platforms such as Tik Tok are popular all over the world, and users would like to add VFX to pursue creativity and personality. Many special effects are offered on short video platforms, giving users more ways to show off their personality. The common, traditional approach is to create VFX templates. However, to synthesize a perfect result, users must make tedious attempts to grasp the timing and rhythm of new templates, which is not easy to use, especially in a mobile app. This paper aims to make VFX synthesis motion-driven instead of relying on traditional template matching. We propose an AI method to improve VFX synthesis. In detail, to add special effects onto the human body, skeleton extraction is essential in this system. We also propose a novel form of LSTM to infer the user's intention through action recognition. Experiments show that our system makes generating VFX for short videos easier and more efficient. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 188,507 |
2403.02645 | DT-DDNN: A Physical Layer Security Attack Detector in 5G RF Domain for
CAVs | The Synchronization Signal Block (SSB) is a fundamental component of the 5G New Radio (NR) air interface, crucial for the initial access procedure of Connected and Automated Vehicles (CAVs), and serves several key purposes in the network's operation. However, due to the predictable nature of SSB transmission, including the Primary and Secondary Synchronization Signals (PSS and SSS), jamming attacks are critical threats. These attacks, which can be executed without requiring high power or complex equipment, pose substantial risks to the 5G network, particularly as a result of the unencrypted transmission of control signals. Leveraging RF domain knowledge, this work presents a novel deep learning-based technique for detecting jammers in CAV networks. Unlike the existing jamming detection algorithms that mostly rely on network parameters, we introduce a double-threshold deep learning jamming detector by focusing on the SSB. The detection method is focused on RF domain features and improves the robustness of the network without requiring integration with the pre-existing network infrastructure. By integrating a preprocessing block to extract PSS correlation and energy per null resource elements (EPNRE) characteristics, our method distinguishes between normal and jammed received signals with high precision. Additionally, by incorporating the Discrete Wavelet Transform (DWT), the efficacy of training and detection is optimized. A double-threshold double Deep Neural Network (DT-DDNN) is also introduced to the architecture, complemented by a deep cascade learning model to increase the sensitivity of the model to variations of signal-to-jamming noise ratio (SJNR). Results show that the proposed method achieves a 96.4% detection rate at extremely low jamming power, i.e., SJNR between 15 and 30 dB. Further, the performance of DT-DDNN is validated by analyzing real 5G signals obtained from a practical testbed. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 434,884 |
2108.02137 | Under the Radar -- Auditing Fairness in ML for Humanitarian Mapping | Humanitarian mapping from space with machine learning helps policy-makers identify people in need in a timely and accurate manner. However, recent concerns around fairness and transparency of algorithmic decision-making are a significant obstacle for applying these methods in practice. In this paper, we study if humanitarian mapping approaches from space are prone to bias in their predictions. We map village-level poverty and electricity rates in India based on nighttime lights (NTLs) with linear regression and random forest and analyze if the predictions systematically show prejudice against scheduled caste or tribe communities. To achieve this, we design a causal approach to measure counterfactual fairness based on propensity score matching. This allows comparing villages within a community of interest to synthetic counterfactuals. Our findings indicate that poverty is systematically overestimated and electricity systematically underestimated for scheduled tribes in comparison to a synthetic counterfactual group of villages. The effects have the opposite direction for scheduled castes where poverty is underestimated and electrification overestimated. These results are a warning sign for a variety of applications in humanitarian mapping where fairness issues would compromise policy goals. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 249,230 |
2402.06187 | Premier-TACO is a Few-Shot Policy Learner: Pretraining Multitask
Representation via Temporal Action-Driven Contrastive Loss | We present Premier-TACO, a multitask feature representation learning approach designed to improve few-shot policy learning efficiency in sequential decision-making tasks. Premier-TACO leverages a subset of multitask offline datasets for pretraining a general feature representation, which captures critical environmental dynamics and is fine-tuned using minimal expert demonstrations. It advances the temporal action contrastive learning (TACO) objective, known for state-of-the-art results in visual control tasks, by incorporating a novel negative example sampling strategy. This strategy is crucial in significantly boosting TACO's computational efficiency, making large-scale multitask offline pretraining feasible. Our extensive empirical evaluation in a diverse set of continuous control benchmarks including Deepmind Control Suite, MetaWorld, and LIBERO demonstrates Premier-TACO's effectiveness in pretraining visual representations, significantly enhancing few-shot imitation learning of novel tasks. Our code, pretraining data, as well as pretrained model checkpoints will be released at https://github.com/PremierTACO/premier-taco. Our project webpage is at https://premiertaco.github.io. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 428,207 |
2402.09642 | Answer is All You Need: Instruction-following Text Embedding via
Answering the Question | This work aims to build a text embedder that can capture characteristics of texts specified by user instructions. Despite its tremendous potential to deploy user-oriented embeddings, none of the previous approaches provides a concrete solution for it. This paper offers a new viewpoint, which treats the instruction as a question about the input text and encodes the expected answers to obtain the representation accordingly. Intuitively, texts with the same (implicit) semantics would share similar answers following the instruction, thus leading to more similar embeddings. Specifically, we propose InBedder, which instantiates this embed-via-answering idea by only fine-tuning language models on abstractive question answering tasks. InBedder demonstrates significantly improved instruction-following capabilities according to our proposed instruction awareness tests and instruction robustness tests, when applied to both large language models (LLMs) (e.g., llama-2-7b) and smaller encoder-based LMs (e.g., roberta-large). Additionally, our qualitative analysis of clustering outcomes, achieved by applying different instructions to the same corpus, demonstrates a high degree of interpretability. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,611
2403.00986 | Merging Text Transformer Models from Different Initializations | Recent work on permutation-based model merging has shown impressive low- or zero-barrier mode connectivity between models from completely different initializations. However, this line of work has not yet extended to the Transformer architecture, despite its dominant popularity in the language domain. Therefore, in this work, we investigate the extent to which separate Transformer minima learn similar features, and propose a model merging technique to investigate the relationship between these minima in the loss landscape. The specifics of the architecture, like its residual connections, multi-headed attention, and discrete, sequential input, require specific interventions in order to compute model permutations that remain within the same functional equivalence class. In merging these models with our method, we consistently find lower loss barriers between minima compared to model averaging, across models trained on a masked-language modeling task or fine-tuned on a language understanding benchmark. Our results show that the minima of these models are less sharp and isolated than previously understood, and provide a basis for future work on merging separately trained Transformer models. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 434,211 |
1707.04390 | Symbolic Stochastic Chase Decoding of Reed-Solomon and BCH Codes | This paper proposes the Symbolic-Stochastic Chase Decoding Algorithm (S-SCA) for the Reed-Solomon (RS) and BCH codes. By efficient usage of the void space between constellation points for $q$-ary modulations and using soft information at the input of the decoder, the S-SCA is capable of outperforming the conventional Symbolic-Chase algorithm (S-CA) with less computational cost. Since the S-SCA starts with the randomized generation of likely test-vectors, it reduces the complexity to polynomial order and does not need to find the least reliable symbols to generate test-vectors. Our simulation results show that by increasing the number of test-vectors, the performance of the algorithm can approach the ML bound. The S-SCA($1K$) provides near $2$ dB gain in comparison with S-CA($1K$) for the $(31, 25)$ RS code using $32$-QAM. Furthermore, the algorithm provides near $3$ dB further gain with $1K$ iterations compared with S-CA($65K$) when the $(255, 239)$ RS code is used in an AWGN channel. For the Rayleigh fading channel and the same code, the algorithm provides more than $5$ dB gain. Also, for $(63, 57)$ BCH codes and $8$-PSK modulation the proposed algorithm provides $3$ dB gain with less complexity. This decoder is a Soft-Input Soft-Output (SISO) decoder and is highly attractive in low power applications. Finally, the Symbolic-Search Bitwise-Transmission Stochastic Chase Algorithm (SSBT-SCA) is introduced for RS codes over BPSK transmission, which is capable of generating symbolic test-vectors that reduce complexity and mitigate burst errors. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 77,033
2311.01806 | Sketching for Convex and Nonconvex Regularized Least Squares with Sharp
Guarantees | Randomized algorithms are important for solving large-scale optimization problems. In this paper, we propose a fast sketching algorithm for least square problems regularized by convex or nonconvex regularization functions, Sketching for Regularized Optimization (SRO). Our SRO algorithm first generates a sketch of the original data matrix, then solves the sketched problem. Different from existing randomized algorithms, our algorithm handles general Frechet subdifferentiable regularization functions in a unified framework. We present a general theoretical result for the approximation error between the optimization results of the original problem and the sketched problem for regularized least square problems which can be convex or nonconvex. For an arbitrary convex regularizer, a relative-error bound is proved for the approximation error. Importantly, minimax rates for sparse signal estimation by solving the sketched sparse convex or nonconvex learning problems are also obtained using our general theoretical result under mild conditions. To the best of our knowledge, our results are among the first to demonstrate minimax rates for convex or nonconvex sparse learning problems by sketching under a unified theoretical framework. We further propose an iterative sketching algorithm which reduces the approximation error exponentially by iteratively invoking the sketching algorithm. Experimental results demonstrate the effectiveness of the proposed SRO and Iterative SRO algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 405,184
2004.10599 | Bayesian Optimization with Output-Weighted Optimal Sampling | In Bayesian optimization, accounting for the importance of the output relative to the input is a crucial yet challenging exercise, as it can considerably improve the final result but often involves inaccurate and cumbersome entropy estimations. We approach the problem from the perspective of importance-sampling theory, and advocate the use of the likelihood ratio to guide the search algorithm towards regions of the input space where the objective function to be minimized assumes abnormally small values. The likelihood ratio acts as a sampling weight and can be computed at each iteration without severely deteriorating the overall efficiency of the algorithm. In particular, it can be approximated in a way that makes the approach tractable in high dimensions. The "likelihood-weighted" acquisition functions introduced in this work are found to outperform their unweighted counterparts in a number of applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,679 |
2205.12598 | RobustLR: Evaluating Robustness to Logical Perturbation in Deductive
Reasoning | Transformers have been shown to be able to perform deductive reasoning on a logical rulebase containing rules and statements written in English natural language. While the progress is promising, it is currently unclear if these models indeed perform logical reasoning by understanding the underlying logical semantics in the language. To this end, we propose RobustLR, a suite of evaluation datasets that evaluate the robustness of these models to minimal logical edits in rulebases and some standard logical equivalence conditions. In our experiments with RoBERTa and T5, we find that the models trained in prior works do not perform consistently on the different perturbations in RobustLR, thus showing that the models are not robust to the proposed logical perturbations. Further, we find that the models find it especially hard to learn logical negation and disjunction operators. Overall, using our evaluation sets, we demonstrate some shortcomings of the deductive reasoning-based language models, which can eventually help towards designing better models for logical reasoning over natural language. All the datasets and code base have been made publicly available. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | 298,621 |
2407.02165 | WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation | Existing human datasets for avatar creation are typically limited to laboratory environments, wherein high-quality annotations (e.g., SMPL estimation from 3D scans or multi-view images) can be ideally provided. However, their annotating requirements are impractical for real-world images or videos, posing challenges toward real-world applications on current avatar creation methods. To this end, we propose the WildAvatar dataset, a web-scale in-the-wild human avatar creation dataset extracted from YouTube, with $10,000+$ different human subjects and scenes. WildAvatar is at least $10\times$ richer than previous datasets for 3D human avatar creation. We evaluate several state-of-the-art avatar creation methods on our dataset, highlighting the unexplored challenges in real-world applications on avatar creation. We also demonstrate the potential for generalizability of avatar creation methods, when provided with data at scale. We publicly release our data source links and annotations, to push forward 3D human avatar creation and other related fields for real-world applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 469,611 |
2101.06658 | Trilevel Neural Architecture Search for Efficient Single Image
Super-Resolution | Modern solutions to the single image super-resolution (SISR) problem using deep neural networks aim not only at better performance accuracy but also at a lighter and computationally efficient model. To that end, recently, neural architecture search (NAS) approaches have shown some tremendous potential. Following the same direction, in this paper, we suggest a novel trilevel NAS method that provides a better balance between different efficiency metrics and performance to solve SISR. Unlike available NAS, our search is more complete, and therefore it leads to an efficient, optimized, and compressed architecture. We innovatively introduce a trilevel search space modeling, i.e., hierarchical modeling on network-, cell-, and kernel-level structures. To make the search on trilevel spaces differentiable and efficient, we exploit a new sparsestmax technique that is excellent at generating sparse distributions of individual neural architecture candidates so that they can be better disentangled for the final selection from the enlarged search space. We further introduce the sorting technique to the sparsestmax relaxation for better network-level compression. The proposed NAS optimization additionally facilitates simultaneous search and training in a single phase, reducing search time and training time. Comprehensive evaluations on the benchmark datasets show our method's clear superiority over the state-of-the-art NAS in terms of a good trade-off between model size, performance, and efficiency. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 215,798
2405.18015 | MultiADE: A Multi-domain Benchmark for Adverse Drug Event Extraction | Active adverse event surveillance monitors Adverse Drug Events (ADE) from different data sources, such as electronic health records, medical literature, social media and search engine logs. Over the years, many datasets have been created, and shared tasks have been organised to facilitate active adverse event surveillance. However, most - if not all - datasets or shared tasks focus on extracting ADEs from a particular type of text. Domain generalisation - the ability of a machine learning model to perform well on new, unseen domains (text types) - is under-explored. Given the rapid advancements in natural language processing, one unanswered question is how far we are from having a single ADE extraction model that is effective on various types of text, such as scientific literature and social media posts. We contribute to answering this question by building a multi-domain benchmark for adverse drug event extraction, which we named MultiADE. The new benchmark comprises several existing datasets sampled from different text types and our newly created dataset - CADECv2, which is an extension of CADEC, covering online posts regarding more diverse drugs than CADEC. Our new dataset is carefully annotated by human annotators following detailed annotation guidelines. Our benchmark results show that the generalisation of the trained models is far from perfect, making it infeasible to be deployed to process different types of text. In addition, although intermediate transfer learning is a promising approach to utilising existing resources, further investigation is needed on methods of domain adaptation, particularly cost-effective methods to select useful training instances. The newly created CADECv2 and the scripts for building the benchmark are publicly available at CSIRO's Data Portal. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 458,234 |
2311.02304 | Imitating and Finetuning Model Predictive Control for Robust and
Symmetric Quadrupedal Locomotion | Control of legged robots is a challenging problem that has been investigated by different approaches, such as model-based control and learning algorithms. This work proposes a novel Imitating and Finetuning Model Predictive Control (IFM) framework to take the strengths of both approaches. Our framework first develops a conventional model predictive controller (MPC) using Differential Dynamic Programming and Raibert heuristic, which serves as an expert policy. Then we train a clone of the MPC using imitation learning to make the controller learnable. Finally, we leverage deep reinforcement learning with limited exploration for further finetuning the policy on more challenging terrains. By conducting comprehensive simulation and hardware experiments, we demonstrate that the proposed IFM framework can significantly improve the performance of the given MPC controller on rough, slippery, and conveyor terrains that require careful coordination of footsteps. We also showcase that IFM can efficiently produce more symmetric, periodic, and energy-efficient gaits compared to Vanilla RL with a minimal burden of reward shaping. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 405,381 |
2309.17156 | Age Group Discrimination via Free Handwriting Indicators | The growing global elderly population is expected to increase the prevalence of frailty, posing significant challenges to healthcare systems. Frailty, a syndrome associated with ageing, is characterised by progressive health decline, increased vulnerability to stressors and increased risk of mortality. It represents a significant burden on public health and reduces the quality of life of those affected. The lack of a universally accepted method to assess frailty and a standardised definition highlights a critical research gap. Given this lack and the importance of early prevention, this study presents an innovative approach using an instrumented ink pen to ecologically assess handwriting for age group classification. Content-free handwriting data from 80 healthy participants in different age groups (20-40, 41-60, 61-70 and 70+) were analysed. Fourteen gesture- and tremor-related indicators were computed from the raw data and used in five classification tasks. These tasks included discriminating between adjacent and non-adjacent age groups using Catboost and Logistic Regression classifiers. Results indicate exceptional classifier performance, with accuracy ranging from 82.5% to 97.5%, precision from 81.8% to 100%, recall from 75% to 100% and ROC-AUC from 92.2% to 100%. Model interpretability, facilitated by SHAP analysis, revealed age-dependent sensitivity of temporal and tremor-related handwriting features. Importantly, this classification method offers potential for early detection of abnormal signs of ageing in uncontrolled settings such as remote home monitoring, thereby addressing the critical issue of frailty detection and contributing to improved care for older adults. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 395,651 |
2110.07875 | Graph Neural Networks with Learnable Structural and Positional
Representations | Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information of nodes, which decreases the representation power of GNNs to distinguish e.g. isomorphic nodes and other graph symmetries. An approach to tackle this issue is to introduce Positional Encoding (PE) of nodes, and inject it into the input layer, like in Transformers. Possible graph PE are Laplacian eigenvectors. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties. We introduce a novel generic architecture which we call LSPE (Learnable Structural and Positional Encodings). We investigate several sparse and fully-connected (Transformer-like) GNNs, and observe a performance increase for molecular datasets, from 1.79% up to 64.14% when considering learnable PE for both GNN classes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 261,165
1307.6476 | Rigid Body Localization Using Sensor Networks: Position and Orientation
Estimation | In this paper, we propose a novel framework called rigid body localization for joint position and orientation estimation of a rigid body. We consider a setup in which a few sensors are mounted on a rigid body. The absolute position of the sensors on the rigid body, or the absolute position of the rigid body itself is not known. However, we know how the sensors are mounted on the rigid body, i.e., the sensor topology is known. Using range-only measurements between the sensors and a few anchors (nodes with known absolute positions), and without using any inertial measurements (e.g., accelerometers), we estimate the position and orientation of the rigid body. For this purpose, the absolute position of the sensors is expressed as an affine function of the Stiefel manifold. In other words, we represent the orientation as a rotation matrix, and absolute position as a translation vector. We propose a least-squares (LS), simplified unitarily constrained LS (SUC-LS), and optimal unitarily constrained least-squares (OUC-LS) estimator, where the latter is based on Newton's method. As a benchmark, we derive a unitarily constrained Cram\'er-Rao bound (UC-CRB). The known topology of the sensors can sometimes be perturbed during fabrication. To take these perturbations into account, a simplified unitarily constrained total-least-squares (SUC-TLS), and an optimal unitarily constrained total-least-squares (OUC-TLS) estimator are also proposed. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 26,028 |
2411.14358 | InCrowd-VI: A Realistic Visual-Inertial Dataset for Evaluating SLAM in
Indoor Pedestrian-Rich Spaces for Human Navigation | Simultaneous localization and mapping (SLAM) techniques can be used to navigate the visually impaired, but the development of robust SLAM solutions for crowded spaces is limited by the lack of realistic datasets. To address this, we introduce InCrowd-VI, a novel visual-inertial dataset specifically designed for human navigation in indoor pedestrian-rich environments. Recorded using Meta Aria Project glasses, it captures realistic scenarios without environmental control. InCrowd-VI features 58 sequences totaling a 5 km trajectory length and 1.5 hours of recording time, including RGB, stereo images, and IMU measurements. The dataset captures important challenges such as pedestrian occlusions, varying crowd densities, complex layouts, and lighting changes. Ground-truth trajectories, accurate to approximately 2 cm, are provided in the dataset, originating from the Meta Aria project machine perception SLAM service. In addition, a semi-dense 3D point cloud of scenes is provided for each sequence. The evaluation of state-of-the-art visual odometry (VO) and SLAM algorithms on InCrowd-VI revealed severe performance limitations in these realistic scenarios. Under challenging conditions, systems exceeded the required localization accuracy of 0.5 meters and the 1\% drift threshold, with classical methods showing drift up to 5-10\%. While deep learning-based approaches maintained high pose estimation coverage (>90\%), they failed to achieve real-time processing speeds necessary for walking pace navigation. These results demonstrate the need and value of a new dataset to advance SLAM research for visually impaired navigation in complex indoor environments. The dataset and associated tools are publicly available at https://incrowd-vi.cloudlab.zhaw.ch/. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 510,125 |
2210.05559 | Unifying Diffusion Models' Latent Space, with Applications to
CycleDiffusion and Guidance | Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs. The code is publicly available at https://github.com/ChenWu98/cycle-diffusion. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 322,905 |
2202.03407 | Investigating the fidelity of explainable artificial intelligence
methods for applications of convolutional neural networks in geoscience | Convolutional neural networks (CNNs) have recently attracted great attention in geoscience due to their ability to capture non-linear system behavior and extract predictive spatiotemporal patterns. Given their black-box nature however, and the importance of prediction explainability, methods of explainable artificial intelligence (XAI) are gaining popularity as a means to explain the CNN decision-making strategy. Here, we establish an intercomparison of some of the most popular XAI methods and investigate their fidelity in explaining CNN decisions for geoscientific applications. Our goal is to raise awareness of the theoretical limitations of these methods and gain insight into the relative strengths and weaknesses to help guide best practices. The considered XAI methods are first applied to an idealized attribution benchmark, where the ground truth of explanation of the network is known a priori, to help objectively assess their performance. Secondly, we apply XAI to a climate-related prediction setting, namely to explain a CNN that is trained to predict the number of atmospheric rivers in daily snapshots of climate simulations. Our results highlight several important issues of XAI methods (e.g., gradient shattering, inability to distinguish the sign of attribution, ignorance to zero input) that have previously been overlooked in our field and, if not considered cautiously, may lead to a distorted picture of the CNN decision-making strategy. We envision that our analysis will motivate further investigation into XAI fidelity and will help towards a cautious implementation of XAI in geoscience, which can lead to further exploitation of CNNs and deep learning for prediction problems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 279,193 |
2001.08041 | Loci of 3-periodics in an Elliptic Billiard: why so many ellipses? | A triangle center such as the incenter, barycenter, etc., is specified by a function thrice- and cyclically applied on sidelengths and/or angles. Consider the 1d family of 3-periodics in the elliptic billiard, and the loci of its triangle centers. Some will sweep ellipses, and others higher-degree algebraic curves. We propose two rigorous methods to prove whether the locus of a given center is an ellipse: one based on computer algebra, and another based on an algebro-geometric method. We also prove that if the triangle center function is rational on sidelengths, the locus is algebraic. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 161,187
2407.12739 | GroundUp: Rapid Sketch-Based 3D City Massing | We propose GroundUp, the first sketch-based ideation tool for 3D city massing of urban areas. We focus on early-stage urban design, where sketching is a common tool and the design starts from balancing building volumes (masses) and open spaces. With Human-Centered AI in mind, we aim to help architects quickly revise their ideas by easily switching between 2D sketches and 3D models, allowing for smoother iteration and sharing of ideas. Inspired by feedback from architects and existing workflows, our system takes as a first input a user sketch of multiple buildings in a top-down view. The user then draws a perspective sketch of the envisioned site. Our method is designed to exploit the complementarity of information in the two sketches and allows users to quickly preview and adjust the inferred 3D shapes. Our model has two main components. First, we propose a novel sketch-to-depth prediction network for perspective sketches that exploits top-down sketch shapes. Second, we use depth cues derived from the perspective sketch as a condition to our diffusion model, which ultimately completes the geometry in a top-down view. Thus, our final 3D geometry is represented as a heightfield, allowing users to construct the city `from the ground up'. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,055 |
2502.04797 | Self-Rationalization in the Wild: A Large Scale Out-of-Distribution
Evaluation on NLI-related tasks | Free-text explanations are expressive and easy to understand, but many datasets lack annotated explanation data, making it challenging to train models for explainable predictions. To address this, we investigate how to use existing explanation datasets for self-rationalization and evaluate models' out-of-distribution (OOD) performance. We fine-tune T5-Large and OLMo-7B models and assess the impact of fine-tuning data quality, the number of fine-tuning samples, and few-shot selection methods. The models are evaluated on 19 diverse OOD datasets across three tasks: natural language inference (NLI), fact-checking, and hallucination detection in abstractive summarization. For the generated explanation evaluation, we conduct a human study on 13 selected models and study its correlation with the Acceptability score (T5-11B) and three other LLM-based reference-free metrics. Human evaluation shows that the Acceptability score correlates most strongly with human judgments, demonstrating its effectiveness in evaluating free-text explanations. Our findings reveal: 1) few annotated examples effectively adapt models for OOD explanation generation; 2) compared to sample selection strategies, fine-tuning data source has a larger impact on OOD performance; and 3) models with higher label prediction accuracy tend to produce better explanations, as reflected by higher Acceptability scores. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 531,326 |
2204.09594 | Predicting Clinical Intent from Free Text Electronic Health Records | After a patient consultation, a clinician determines the steps in the management of the patient. A clinician may, for example, request to see the patient again or refer them to a specialist. Whilst most clinicians will record their intent as "next steps" in the patient's clinical notes, in some cases the clinician may forget to indicate their intent as an order or request, e.g. failure to place the follow-up order. This consequently results in patients becoming lost to follow-up and may in some cases lead to adverse consequences. In this paper we train a machine learning model to detect a clinician's intent to follow up with a patient from the patient's clinical notes. Annotators systematically identified 22 possible types of clinical intent and annotated 3000 Bariatric clinical notes. The annotation process revealed a class imbalance in the labeled data and we found that there was only sufficient labeled data to train 11 out of the 22 intents. We used the data to train a BERT-based multilabel classification model and reported the following average accuracy metrics for all intents: macro-precision: 0.91, macro-recall: 0.90, macro-f1: 0.90. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 292,489
2409.16186 | System-Level Performance Metrics Sensitivity of an Electrified
Heavy-Duty Mobile Manipulator | The shift to electric and hybrid powertrains in vehicular systems has propelled advancements in mobile robotics and autonomous vehicles. This paper examines the sensitivity of key performance metrics in an electrified heavy-duty mobile manipulator (HDMM) driven by electromechanical linear actuators (EMLAs) powered by permanent magnet synchronous motors (PMSMs). The study evaluates power delivery, force dynamics, energy consumption, and overall efficiency of the actuation mechanisms. By computing partial derivatives (PD) with respect to the payload mass at the tool center point (TCP), it provides insights into these factors under various loading conditions. This research aids in the appropriate choice or design of EMLAs for HDMM electrification, addressing the actuation mechanism selection challenge in vehicular systems with mounted manipulators, and determines the necessary battery capacity requirements. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 491,238
1307.4717 | Content Based Image Retrieval System using Feature Classification with
Modified KNN Algorithm | A feature denotes an image characteristic, e.g., remote sensing scene objects with similar properties that are associated with interesting scene elements in the image formation process. In image processing, features are classified into three levels: low-level features such as color and texture, a middle-level feature such as shape, and a high-level feature such as the semantic gap of objects. An image retrieval system is a computer system for browsing, searching and retrieving images from a large image database. Content Based Image Retrieval is a technique which uses visual features of an image, such as color, shape and texture, to search for a user's required image in a large image database according to user requests in the form of a query. The proposed classifier, Modified K-Nearest Neighbor (MKNN), is an enhanced version of KNN. MKNN consists of two processing steps: computing the validity of the training samples and applying weighted KNN. The validity of each point is computed according to its neighbors. In our proposal, Modified K-Nearest Neighbor can be considered a kind of weighted KNN in which the query label is approximated by weighting the neighbors of the query. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 25,904
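The rows above share a fixed pipe-delimited layout: an arXiv id, a title, an abstract, a run of true/false category flags, and a trailing row index. A minimal loading sketch follows, assuming the records are stored one per line in a pipe-delimited text file; the file name arxiv_subset.txt and the generic label_i column names are illustrative placeholders, not part of the original data.

```python
# Minimal sketch: parse pipe-delimited rows of (id, title, abstract, 18 boolean
# category flags, trailing index). File name and column names are assumptions.
import pandas as pd

columns = (
    ["id", "title", "abstract"]
    + [f"label_{i}" for i in range(18)]  # placeholder names for the category flags
    + ["row_index"]
)

df = pd.read_csv(
    "arxiv_subset.txt",   # hypothetical file holding rows like the ones above
    sep="|",
    header=None,
    names=columns,
    engine="python",
    skipinitialspace=True,
)

# The flags arrive as the strings "true"/"false"; convert them to booleans.
flag_cols = [c for c in df.columns if c.startswith("label_")]
df[flag_cols] = df[flag_cols].apply(
    lambda s: s.astype(str).str.strip().str.lower() == "true"
)

print(df[["id", "title"]].head())
```

Keeping each flag as its own boolean column leaves the table directly usable as a multilabel target matrix (e.g., df[flag_cols].to_numpy()), since several of the entries above carry more than one true flag.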