id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.06618 | Learning IMM Filter Parameters from Measurements using Gradient Descent | The performance of data fusion and tracking algorithms often depends on parameters that not only describe the sensor system, but can also be task-specific. While for the sensor system tuning these variables is time-consuming and mostly requires expert knowledge, intrinsic parameters of targets under track can even be completely unobservable until the system is deployed. With state-of-the-art sensor systems growing more and more complex, the number of parameters naturally increases, necessitating the automatic optimization of the model variables. In this paper, the parameters of an interacting multiple model (IMM) filter are optimized solely using measurements, thus without necessity for any ground-truth data. The resulting method is evaluated through an ablation study on simulated data, where the trained model manages to match the performance of a filter parametrized with ground-truth values. | false | false | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | 379,124 |
2304.01901 | Uncertainty Quantification for Recursive Estimation in Adaptive Safety-Critical Control | In this paper, we present a framework for online parameter estimation and uncertainty quantification in the context of adaptive safety-critical control. The key insight enabling our approach is that the parameter estimate generated by the continuous-time recursive least squares (RLS) algorithm at any point in time is an affine transformation of the initial parameter estimate. This property allows for parameterizing such estimates using objects that are closed under affine transformation, such as zonotopes, and enables the efficient propagation of such set-based estimates as time progresses. We illustrate how such an approach facilitates the synthesis of safety-critical controllers for systems with parametric uncertainty and additive disturbances using control barrier functions, and demonstrate the utility of our approach through illustrative examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 356,248 |
2212.06607 | Model-driven Engineering of Manufacturing Automation Software Projects -- A SysML-based Approach | This paper comprises a SysML-based approach to support the model-driven engineering (MDE) of Manufacturing Automation Software Projects (MASP). The Systems Modeling Language (SysML) is adapted to define the SysML-AT (SysML for automation), a specialized language profile that covers (non-)functional requirements, corresponding software applications and properties of proprietary hardware components. Furthermore, SysML-AT supports an automated software generation for run-time environments conforming to IEC 61131-3. A prototypical tool support was realized for adapted SysML Parametric Diagrams (PD) inside an industrial automation software development tool. Coupling the model editor and online data from the provided run-time environment enables direct debugging inside the model. The approach was evaluated by several case studies and additional usability experiments. With the latter, the suitability of the MDE approach for future users was proven. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 336,165 |
2403.06430 | AS-FIBA: Adaptive Selective Frequency-Injection for Backdoor Attack on Deep Face Restoration | Deep learning-based face restoration models, increasingly prevalent in smart devices, have become targets for sophisticated backdoor attacks. These attacks, through subtle trigger injection into input face images, can lead to unexpected restoration outcomes. Unlike conventional methods focused on classification tasks, our approach introduces a unique degradation objective tailored for attacking restoration models. Moreover, we propose the Adaptive Selective Frequency Injection Backdoor Attack (AS-FIBA) framework, employing a neural network for input-specific trigger generation in the frequency domain, seamlessly blending triggers with benign images. This results in imperceptible yet effective attacks, guiding restoration predictions towards subtly degraded outputs rather than conspicuous targets. Extensive experiments demonstrate the efficacy of the degradation objective on state-of-the-art face restoration models. Additionally, it is notable that AS-FIBA can insert effective backdoors that are more imperceptible than existing backdoor attack methods, including WaNet, ISSBA, and FIBA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 436,449 |
2302.05684 | Sequential Underspecified Instrument Selection for Cause-Effect Estimation | Instrumental variable (IV) methods are used to estimate causal effects in settings with unobserved confounding, where we cannot directly experiment on the treatment variable. Instruments are variables which only affect the outcome indirectly via the treatment variable(s). Most IV applications focus on low-dimensional treatments and crucially require at least as many instruments as treatments. This assumption is restrictive: in the natural sciences we often seek to infer causal effects of high-dimensional treatments (e.g., the effect of gene expressions or microbiota on health and disease), but can only run few experiments with a limited number of instruments (e.g., drugs or antibiotics). In such underspecified problems, the full treatment effect is not identifiable in a single experiment even in the linear case. We show that one can still reliably recover the projection of the treatment effect onto the instrumented subspace and develop techniques to consistently combine such partial estimates from different sets of instruments. We then leverage our combined estimators in an algorithm that iteratively proposes the most informative instruments at each round of experimentation to maximize the overall information about the full causal effect. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 345,129 |
1805.09081 | Local Tomography of Large Networks under the Low-Observability Regime | This article studies the problem of reconstructing the topology of a network of interacting agents via observations of the state-evolution of the agents. We focus on the large-scale network setting with the additional constraint of $partial$ observations, where only a small fraction of the agents can be feasibly observed. The goal is to infer the underlying subnetwork of interactions and we refer to this problem as $local$ $tomography$. In order to study the large-scale setting, we adopt a proper stochastic formulation where the unobserved part of the network is modeled as an Erd\H{o}s-R\'enyi random graph, while the observable subnetwork is left arbitrary. The main result of this work is establishing that, under this setting, local tomography is actually possible with high probability, provided that certain conditions on the network model are met (such as stability and symmetry of the network combination matrix). Remarkably, such a conclusion is established under the $low$-$observability$ $regime$, where the cardinality of the observable subnetwork is fixed, while the size of the overall network scales to infinity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | 98,335 |
2409.13449 | Minstrel: Structural Prompt Generation with Multi-Agents Coordination for Non-AI Experts | LLMs have demonstrated commendable performance across diverse domains. Nevertheless, formulating high-quality prompts to assist them in their work poses a challenge for non-AI experts. Existing research in prompt engineering suggests somewhat scattered optimization principles and designs empirically dependent prompt optimizers. Unfortunately, these endeavors lack a structural design, incurring high learning costs, and are not conducive to the iterative updating of prompts, especially for non-AI experts. Inspired by structured reusable programming languages, we propose LangGPT, a structural prompt design framework. Furthermore, we introduce Minstrel, a multi-generative agent system with reflection to automate the generation of structural prompts. Experiments and the case study illustrate that structural prompts generated by Minstrel or written manually significantly enhance the performance of LLMs. Furthermore, we analyze the ease of use of structural prompts through a user survey in our online community. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 489,998 |
2410.02660 | How to Train Long-Context Language Models (Effectively) | We study continued training and supervised fine-tuning (SFT) of a language model (LM) to make effective use of long-context information. We first establish a reliable evaluation protocol to guide model development -- instead of perplexity or simple needle-in-a-haystack (NIAH) tests, we use a broad set of long-context tasks, and we evaluate models after SFT with instruction data as this better reveals long-context abilities. Supported by our robust evaluations, we run thorough experiments to decide the data mix for continued pre-training, the instruction tuning dataset, and many other design choices. We find that (1) code repositories and books are excellent sources of long data, but it is crucial to combine them with high-quality short data; (2) training with a sequence length beyond the evaluation length boosts long-context performance; (3) for SFT, using only short instruction datasets yields strong performance on long-context tasks. Our final model, ProLong-8B, which is initialized from Llama-3 and trained on 40B tokens, demonstrates state-of-the-art long-context performance among similarly sized models at a length of 128K. ProLong outperforms Llama-3.1-8B-Instruct on the majority of long-context tasks despite having seen only 5% as many tokens during long-context training. Additionally, ProLong can effectively process up to 512K tokens, one of the longest context windows of publicly available LMs. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 494,381 |
2406.16728 | CausalMMM: Learning Causal Structure for Marketing Mix Modeling | In online advertising, marketing mix modeling (MMM) is employed to predict the gross merchandise volume (GMV) of brand shops and help decision-makers to adjust the budget allocation of various advertising channels. Traditional MMM methods leveraging regression techniques can fail in handling the complexity of marketing. Although some efforts try to encode the causal structures for better prediction, they have the strict restriction that causal structures are prior-known and unchangeable. In this paper, we define a new causal MMM problem that automatically discovers the interpretable causal structures from data and yields better GMV predictions. To achieve causal MMM, two essential challenges should be addressed: (1) Causal Heterogeneity. The causal structures of different kinds of shops vary a lot. (2) Marketing Response Patterns. Various marketing response patterns, i.e., carryover effect and shape effect, have been validated in practice. We argue that causal MMM needs to dynamically discover specific causal structures for different shops and the predictions should comply with the prior known marketing response patterns. Thus, we propose CausalMMM that integrates Granger causality in a variational inference framework to measure the causal relationships between different channels and predict the GMV with the regularization of both temporal and saturation marketing response patterns. Extensive experiments show that CausalMMM can not only achieve superior performance of causal structure learning on synthetic datasets with improvements of 5.7%-7.1%, but also enhance the GMV prediction results on a representative E-commerce platform. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 467,248 |
2405.10166 | LFED: A Literary Fiction Evaluation Dataset for Large Language Models | The rapid evolution of large language models (LLMs) has ushered in the need for comprehensive assessments of their performance across various dimensions. In this paper, we propose LFED, a Literary Fiction Evaluation Dataset, which aims to evaluate the capability of LLMs on long fiction comprehension and reasoning. We collect 95 works of literary fiction that are either originally written in Chinese or translated into Chinese, covering a wide range of topics across several centuries. We define a question taxonomy with 8 question categories to guide the creation of 1,304 questions. Additionally, we conduct an in-depth analysis to ascertain how specific attributes of literary fiction (e.g., novel types, character numbers, the year of publication) impact LLM performance in evaluations. Through a series of experiments with various state-of-the-art LLMs, we demonstrate that these models face considerable challenges in effectively addressing questions related to literary fiction, with ChatGPT reaching only 57.08% under the zero-shot setting. The dataset will be publicly available at https://github.com/tjunlp-lab/LFED.git | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 454,667 |
2306.05343 | A Data-Driven Approach to Positioning Grab Bars in the Sagittal Plane for Elderly Persons | The placement of grab bars for elderly users is based largely on ADA building codes and does not reflect the large differences in height, mobility, and muscle power between individual persons. The goal of this study is to see if there are any correlations between an elderly user's preferred handlebar pose and various demographic indicators, self-rated mobility for tasks requiring postural change, and biomechanical markers. For simplicity, we consider only the case where the handlebar is positioned directly in front of the user, as this confines the relevant body kinematics to a 2D sagittal plane. Previous eldercare devices have been constructed to position a handlebar in various poses in space. Our work augments these devices and adds to the body of knowledge by assessing how the handlebar should be positioned based on data on actual elderly people instead of simulations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 372,157 |
1012.0529 | Spectra of Modular and Small-World Matrices | We compute spectra of symmetric random matrices describing graphs with general modular structure and arbitrary inter- and intra-module degree distributions, subject only to the constraint of finite mean connectivities. We also evaluate spectra of a certain class of small-world matrices generated from random graphs by introducing short-cuts via additional random connectivity components. Both adjacency matrices and the associated graph Laplacians are investigated. For the Laplacians, we find Lifshitz type singular behaviour of the spectral density in a localised region of small $|\lambda|$ values. In the case of modular networks, we can identify contributions to the local densities of states from individual modules. For small-world networks, we find that the introduction of short cuts can lead to the creation of satellite bands outside the central band of extended states, exhibiting only localised states in the band-gaps. Results for the ensemble in the thermodynamic limit are in excellent agreement with those obtained via a cavity approach for large finite single instances, and with direct diagonalisation results. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 8,403 |
1805.03146 | PAD-Net: A Perception-Aided Single Image Dehazing Network | In this work, we investigate the possibility of replacing the $\ell_2$ loss with perceptually derived loss functions (SSIM, MS-SSIM, etc.) in training an end-to-end dehazing neural network. Objective experimental results suggest that by merely changing the loss function we can obtain significantly higher PSNR and SSIM scores on the SOTS set in the RESIDE dataset, compared with a state-of-the-art end-to-end dehazing neural network (AOD-Net) that uses the $\ell_2$ loss. The best PSNR we obtained was 23.50 (4.2% relative improvement), and the best SSIM we obtained was 0.8747 (2.3% relative improvement). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,996 |
1804.09817 | Multiagent Soft Q-Learning | Policy gradient methods are often applied to reinforcement learning in continuous multiagent games. These methods perform local search in the joint-action space, and as we show, they are susceptible to a game-theoretic pathology known as relative overgeneralization. To resolve this issue, we propose Multiagent Soft Q-learning, which can be seen as the analogue of applying Q-learning to continuous controls. We compare our method to MADDPG, a state-of-the-art approach, and show that our method achieves better coordination in multiagent cooperative tasks, converging to better local optima in the joint action space. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 96,043 |
2012.05782 | A Study of Condition Numbers for First-Order Optimization | The study of first-order optimization algorithms (FOA) typically starts with assumptions on the objective functions, most commonly smoothness and strong convexity. These metrics are used to tune the hyperparameters of FOA. We introduce a class of perturbations quantified via a new norm, called *-norm. We show that adding a small perturbation to the objective function has an equivalently small impact on the behavior of any FOA, which suggests that it should have a minor impact on the tuning of the algorithm. However, we show that smoothness and strong convexity can be heavily impacted by arbitrarily small perturbations, leading to excessively conservative tunings and convergence issues. In view of these observations, we propose a notion of continuity of the metrics, which is essential for a robust tuning strategy. Since smoothness and strong convexity are not continuous, we propose a comprehensive study of existing alternative metrics which we prove to be continuous. We describe their mutual relations and provide their guaranteed convergence rates for the Gradient Descent algorithm accordingly tuned. Finally we discuss how our work impacts the theoretical understanding of FOA and their performances. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,895 |
2005.06068 | Deep Learning for Wireless Communications | Existing communication systems exhibit inherent limitations in translating theory to practice when handling the complexity of optimization for emerging wireless applications with high degrees of freedom. Deep learning has a strong potential to overcome this challenge via data-driven solutions and improve the performance of wireless systems in utilizing limited spectrum resources. In this chapter, we first describe how deep learning is used to design an end-to-end communication system using autoencoders. This flexible design effectively captures channel impairments and optimizes transmitter and receiver operations jointly in single-antenna, multiple-antenna, and multiuser communications. Next, we present the benefits of deep learning in spectrum situation awareness ranging from channel modeling and estimation to signal detection and classification tasks. Deep learning improves the performance when the model-based methods fail. Finally, we discuss how deep learning applies to wireless communication security. In this context, adversarial machine learning provides novel means to launch and defend against wireless attacks. These applications demonstrate the power of deep learning in providing novel means to design, optimize, adapt, and secure wireless communications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 176,905 |
2105.03511 | Bounds for the sum of distances of spherical sets of small size | We derive upper and lower bounds on the sum of distances of a spherical code of size $N$ in $n$ dimensions when $N\sim n^\alpha, 0<\alpha\le 2.$ The bounds are derived by specializing recent general, universal bounds on energy of spherical sets. We discuss asymptotic behavior of our bounds along with several examples of codes whose sum of distances closely follows the upper bound. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 234,166 |
2410.10865 | Generating Synthetic Datasets for Few-shot Prompt Tuning | A major limitation of prompt tuning is its dependence on large labeled training datasets. Under few-shot learning settings, prompt tuning lags far behind full-model fine-tuning, limiting its scope of application. In this paper, we leverage the powerful LLMs to synthesize task-specific labeled data for training the soft prompts. We first introduce a distribution-aligned weighted generator tuning (DawGen) method to encourage generating in-distribution data that aligns with the few-shot real data. Then, we train soft prompts on both synthetic and real datasets using a gradient surgery approach, which eliminates the conflicting gradients from different data sources. Experiments on seven sentence-pair classification datasets demonstrate the effectiveness of our proposed method for boosting prompt tuning in few-shot learning settings. Results on QQP, MRPC, and SICK datasets are even comparable to the performance of transfer learning from large real-world datasets, showing the promise of synthetic data as an alternative for enhancing soft prompt tuning. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,283 |
2106.13230 | Video Swin Transformer | The vision community is witnessing a modeling shift from CNNs to Transformers, where pure Transformer architectures have attained top accuracy on the major video recognition benchmarks. These video models are all built on Transformer layers that globally connect patches across the spatial and temporal dimensions. In this paper, we instead advocate an inductive bias of locality in video Transformers, which leads to a better speed-accuracy trade-off compared to previous approaches which compute self-attention globally even with spatial-temporal factorization. The locality of the proposed video architecture is realized by adapting the Swin Transformer designed for the image domain, while continuing to leverage the power of pre-trained image models. Our approach achieves state-of-the-art accuracy on a broad range of video recognition benchmarks, including on action recognition (84.9 top-1 accuracy on Kinetics-400 and 86.1 top-1 accuracy on Kinetics-600 with ~20x less pre-training data and ~3x smaller model size) and temporal modeling (69.6 top-1 accuracy on Something-Something v2). The code and models will be made publicly available at https://github.com/SwinTransformer/Video-Swin-Transformer. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 243,003 |
2211.15666 | Learning Visual Planning Models from Partially Observed Images | There has been increasing attention on planning model learning in classical planning. Most existing approaches, however, focus on learning planning models from structured data in symbolic representations. It is often difficult to obtain such structured data in real-world scenarios. Although a number of approaches have been developed for learning planning models from fully observed unstructured data (e.g., images), in many scenarios raw observations are often incomplete. In this paper, we provide a novel framework, Recplan, for learning a transition model from partially observed raw image traces. More specifically, by considering the preceding and subsequent images in a trace, we learn the latent state representations of raw observations and then build a transition model based on such representations. Additionally, we propose a neural-network-based approach to learn a heuristic model that estimates the distance toward a given goal observation. Based on the learned transition model and heuristic model, we implement a classical planner for images. We exhibit empirically that our approach is more effective than a state-of-the-art approach of learning visual planning models in the environment with incomplete observations. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 333,356 |
2306.07512 | Noisy Positive-Unlabeled Learning with Self-Training for Speculative Knowledge Graph Reasoning | This paper studies the speculative reasoning task on real-world knowledge graphs (KG) that contain both the \textit{false negative issue} (i.e., potential true facts being excluded) and the \textit{false positive issue} (i.e., unreliable or outdated facts being included). State-of-the-art methods fall short in the speculative reasoning ability, as they assume the correctness of a fact is solely determined by its presence in KG, making them vulnerable to false negative/positive issues. The new reasoning task is formulated as a noisy Positive-Unlabeled learning problem. We propose a variational framework, namely nPUGraph, that jointly estimates the correctness of both collected and uncollected facts (which we call \textit{label posterior}) and updates model parameters during training. The label posterior estimation facilitates speculative reasoning from two perspectives. First, it improves the robustness of a label posterior-aware graph encoder against false positive links. Second, it identifies missing facts to provide high-quality grounds of reasoning. They are unified in a simple yet effective self-training procedure. Empirically, extensive experiments on three benchmark KG and one Twitter dataset with various degrees of false negative/positive cases demonstrate the effectiveness of nPUGraph. | false | false | false | true | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 373,041 |
2112.12463 | Comprehensive Movie Recommendation System | A recommender system, also known as a recommendation system, is a type of information filtering system that attempts to forecast a user's rating or preference for an item. This article designs and implements a complete movie recommendation system prototype based on the Genre, Pearson Correlation Coefficient, Cosine Similarity, KNN-Based, Content-Based Filtering using TFIDF and SVD, Collaborative Filtering using TFIDF and SVD, Surprise Library based recommendation system technology. Apart from that, in this paper we present a novel idea that applies machine learning techniques to cluster the movies based on genres, and then uses the observed inertia values to define the number of clusters. The constraints of the approaches discussed in this work have been described, as well as how one strategy overcomes the disadvantages of another. The whole work has been done on the MovieLens dataset available on the GroupLens website, which contains 100836 ratings and 3683 tag applications across 9742 movies. These data were created by 610 users between March 29, 1996, and September 24, 2018. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 272,965 |
2411.18262 | Break the ID-Language Barrier: An Adaption Framework for Sequential Recommendation | The recent breakthrough of large language models (LLMs) in natural language processing has sparked exploration in recommendation systems; however, their limited domain-specific knowledge remains a critical bottleneck. Specifically, LLMs lack key pieces of information crucial for sequential recommendations, such as user behavior patterns. To address this critical gap, we propose IDLE-Adapter, a novel framework that integrates pre-trained ID embeddings, rich in domain-specific knowledge, into LLMs to improve recommendation accuracy. IDLE-Adapter acts as a bridge, transforming sparse user-item interaction data into dense, LLM-compatible representations through a Pre-trained ID Sequential Model, Dimensionality Alignment, Layer-wise Embedding Refinement, and Layer-wise Distribution Alignment. Furthermore, IDLE-Adapter demonstrates remarkable flexibility by seamlessly integrating ID embeddings from diverse ID-based sequential models and LLM architectures. Extensive experiments across various datasets demonstrate the superiority of IDLE-Adapter, achieving over 10% and 20% improvements in HitRate@5 and NDCG@5 metrics, respectively, compared to state-of-the-art methods. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 511,797 |
2210.11061 | Analyzing the Robustness of Decentralized Horizontal and Vertical Federated Learning Architectures in a Non-IID Scenario | Federated learning (FL) allows participants to collaboratively train machine and deep learning models while protecting data privacy. However, the FL paradigm still presents drawbacks affecting its trustworthiness since malicious participants could launch adversarial attacks against the training process. Related work has studied the robustness of horizontal FL scenarios under different attacks. However, there is a lack of work evaluating the robustness of decentralized vertical FL and comparing it with horizontal FL architectures affected by adversarial attacks. Thus, this work proposes three decentralized FL architectures, one for horizontal and two for vertical scenarios, namely HoriChain, VertiChain, and VertiComb. These architectures present different neural networks and training protocols suitable for horizontal and vertical scenarios. Then, a decentralized, privacy-preserving, and federated use case with non-IID data to classify handwritten digits is deployed to evaluate the performance of the three architectures. Finally, a set of experiments computes and compares the robustness of the proposed architectures when they are affected by different data poisoning based on image watermarks and gradient poisoning adversarial attacks. The experiments show that even though particular configurations of both attacks can destroy the classification performance of the architectures, HoriChain is the most robust one. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 325,169 |
2112.11679 | Ghost-dil-NetVLAD: A Lightweight Neural Network for Visual Place
Recognition | Visual place recognition (VPR) is a challenging task with the unbalance between enormous computational cost and high recognition performance. Thanks to the practical feature extraction ability of the lightweight convolution neural networks (CNNs) and the train-ability of the vector of locally aggregated descriptors (VLAD) layer, we propose a lightweight weakly supervised end-to-end neural network consisting of a front-ended perception model called GhostCNN and a learnable VLAD layer as a back-end. GhostCNN is based on Ghost modules that are lightweight CNN-based architectures. They can generate redundant feature maps using linear operations instead of the traditional convolution process, making a good trade-off between computation resources and recognition accuracy. To enhance our proposed lightweight model further, we add dilated convolutions to the Ghost module to get features containing more spatial semantic information, improving accuracy. Finally, rich experiments conducted on a commonly used public benchmark and our private dataset validate that the proposed neural network reduces the FLOPs and parameters of VGG16-NetVLAD by 99.04% and 80.16%, respectively. Besides, both models achieve similar accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 272,769 |
1912.11023 | Learning an Interpretable Traffic Signal Control Policy | Signalized intersections are managed by controllers that assign right of way (green, yellow, and red lights) to non-conflicting directions. Optimizing the actuation policy of such controllers is expected to alleviate traffic congestion and its adverse impact. Given such a safety-critical domain, the affiliated actuation policy is required to be interpretable in a way that can be understood and regulated by a human. This paper presents and analyzes several on-line optimization techniques for tuning interpretable control functions. Although these techniques are defined in a general way, this paper assumes a specific class of interpretable control functions (polynomial functions) for analysis purposes. We show that such an interpretable policy function can be as effective as a deep neural network for approximating an optimized signal actuation policy. We present empirical evidence that supports the use of value-based reinforcement learning for on-line training of the control function. Specifically, we present and study three variants of the Deep Q-learning algorithm that allow the training of an interpretable policy function. Our Deep Regulatable Hardmax Q-learning variant is shown to be particularly effective in optimizing our interpretable actuation policy, resulting in up to 19.4% reduced vehicle delay compared to commonly deployed actuated signal controllers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 158,463
2205.02223 | Semi-supervised learning approaches for predicting South African
political sentiment for local government elections | This study aims to understand the South African political context by analysing the sentiments shared on Twitter during the local government elections. An emphasis on the analysis was placed on understanding the discussions led around four predominant political parties: ANC, DA, EFF and ActionSA. A semi-supervised approach by means of a graph-based technique to label the vast accessible Twitter data for the classification of tweets into negative and positive sentiment was used. The tweets expressing negative sentiment were further analysed through latent topic extraction to uncover hidden topics of concern associated with each of the political parties. Our findings demonstrated that the general sentiment across South African Twitter users is negative towards all four predominant parties, with the worst negative sentiment among users projected towards the current ruling party, ANC, relating to concerns centred around corruption, incompetence and loadshedding. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 294,878
1212.0096 | Predictive Control of a Permanent Magnet Synchronous Machine based on
Real-Time Dynamic Optimization | A predictive control scheme for a permanent-magnet synchronous machine (PMSM) is presented. It is based on a suboptimal method for computationally efficient trajectory generation based on continuous parameterization and linear programming. The torque controller optimizes a quadratic cost consisting of control error and machine losses in real-time respecting voltage and current limitations. The multivariable controller decouples the two current components and exploits cross-coupling effects in the long-range constrained predictive control strategy. The optimization results in fast and smooth torque dynamics while inherently using field-weakening to improve the power efficiency and the current dynamics in high speed operation. The performance of the scheme is demonstrated by experimental results. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 20,061 |
2205.06351 | Interpretable Climate Change Modeling With Progressive Cascade Networks | Typical deep learning approaches to modeling high-dimensional data often result in complex models that do not easily reveal a new understanding of the data. Research in the deep learning field is very actively pursuing new methods to interpret deep neural networks and to reduce their complexity. An approach is described here that starts with linear models and incrementally adds complexity only as supported by the data. An application is shown in which models that map global temperature and precipitation to years are trained to investigate patterns associated with changes in climate. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,212 |
2203.03959 | Enhancing Door-Status Detection for Autonomous Mobile Robots during
Environment-Specific Operational Use | Door-status detection, namely recognizing the presence of a door and its status (open or closed), can induce a remarkable impact on a mobile robot's navigation performance, especially for dynamic settings where doors can enable or disable passages, changing the topology of the map. In this work, we address the problem of building a door-status detector module for a mobile robot operating in the same environment for a long time, thus observing the same set of doors from different points of view. First, we show how to improve the mainstream approach based on object detection by considering the constrained perception setup typical of a mobile robot. Hence, we devise a method to build a dataset of images taken from a robot's perspective and we exploit it to obtain a door-status detector based on deep learning. We then leverage the typical working conditions of a robot to qualify the model for boosting its performance in the working environment via fine-tuning with additional data. Our experimental analysis shows the effectiveness of this method with results obtained both in simulation and in the real world, which also highlight a trade-off between the costs and benefits of the fine-tuning approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 284,292
2209.12064 | Face Super-Resolution Using Stochastic Differential Equations | Diffusion models have proven effective for various applications such as images, audio and graph generation. Other important applications are image super-resolution and the solution of inverse problems. More recently, some works have used stochastic differential equations (SDEs) to generalize diffusion models to continuous time. In this work, we introduce SDEs to generate super-resolution face images. To the best of our knowledge, this is the first time SDEs have been used for such an application. The proposed method provides an improved peak signal-to-noise ratio (PSNR), structural similarity index measure (SSIM), and consistency than the existing super-resolution methods based on diffusion models. In particular, we also assess the potential application of this method for the face recognition task. A generic facial feature extractor is used to compare the super-resolution images with the ground truth and superior results were obtained compared with other methods. Our code is publicly available at https://github.com/marcelowds/sr-sde | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 319,402 |
1409.3174 | Designing and Deploying Online Field Experiments | Online experiments are widely used to compare specific design alternatives, but they can also be used to produce generalizable knowledge and inform strategic decision making. Doing so often requires sophisticated experimental designs, iterative refinement, and careful logging and analysis. Few tools exist that support these needs. We thus introduce a language for online field experiments called PlanOut. PlanOut separates experimental design from application code, allowing the experimenter to concisely describe experimental designs, whether common "A/B tests" and factorial designs, or more complex designs involving conditional logic or multiple experimental units. These latter designs are often useful for understanding causal mechanisms involved in user behaviors. We demonstrate how experiments from the literature can be implemented in PlanOut, and describe two large field experiments conducted on Facebook with PlanOut. For common scenarios in which experiments are run iteratively and in parallel, we introduce a namespaced management system that encourages sound experimental practice. | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 35,965 |
2306.16541 | Envisioning a Next Generation Extended Reality Conferencing System with
Efficient Photorealistic Human Rendering | Meeting online is becoming the new normal. Creating an immersive experience for online meetings is a necessity towards more diverse and seamless environments. Efficient photorealistic rendering of human 3D dynamics is the core of immersive meetings. Current popular applications achieve real-time conferencing but fall short in delivering photorealistic human dynamics, either due to limited 2D space or the use of avatars that lack realistic interactions between participants. Recent advances in neural rendering, such as the Neural Radiance Field (NeRF), offer the potential for greater realism in metaverse meetings. However, the slow rendering speed of NeRF poses challenges for real-time conferencing. We envision a pipeline for a future extended reality metaverse conferencing system that leverages monocular video acquisition and free-viewpoint synthesis to enhance data and hardware efficiency. Towards an immersive conferencing experience, we explore an accelerated NeRF-based free-viewpoint synthesis algorithm for rendering photorealistic human dynamics more efficiently. We show that our algorithm achieves comparable rendering quality while performing training and inference 44.5% and 213% faster than state-of-the-art methods, respectively. Our exploration provides a design basis for constructing metaverse conferencing systems that can handle complex application scenarios, including dynamic scene relighting with customized themes and multi-user conferencing that harmonizes real-world people into an extended world. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 376,389 |
2005.07415 | MineReduce: an approach based on data mining for problem size reduction | Hybrid variations of metaheuristics that include data mining strategies have been utilized to solve a variety of combinatorial optimization problems, with superior and encouraging results. Previous hybrid strategies applied mined patterns to guide the construction of initial solutions, leading to more effective exploration of the solution space. Solving a combinatorial optimization problem is usually a hard task because its solution space grows exponentially with its size. Therefore, problem size reduction is also a useful strategy in this context, especially in the case of large-scale problems. In this paper, we build upon these ideas by presenting an approach named MineReduce, which uses mined patterns to perform problem size reduction. We present an application of MineReduce to improve a heuristic for the heterogeneous fleet vehicle routing problem. The results obtained in computational experiments show that this proposed heuristic demonstrates superior performance compared to the original heuristic and other state-of-the-art heuristics, achieving better solution costs with shorter run times. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 177,274 |
1006.2498 | On the Deterministic Code Capacity Region of an Arbitrarily Varying
Multiple-Access Channel Under List Decoding | We study the capacity region $C_L$ of an arbitrarily varying multiple-access channel (AVMAC) for deterministic codes with decoding into a list of a fixed size $L$ and for the average error probability criterion. Motivated by known results in the study of fixed size list decoding for a point-to-point arbitrarily varying channel, we define for every AVMAC whose capacity region for random codes has a nonempty interior, a nonnegative integer $\Omega$ called its symmetrizability. It is shown that for every $L \leq \Omega$, $C_L$ has an empty interior, and for every $L \geq (\Omega+1)^2$, $C_L$ equals the nondegenerate capacity region of the AVMAC for random codes with a known single-letter characterization. For a binary AVMAC with a nondegenerate random code capacity region, it is shown that the symmetrizability is always finite. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 6,771 |
2405.03816 | On the invariance of the Kolmogorov complexity of $\beta$-expansions | We establish a relationship between the algorithmic (Kolmogorov) complexity of the prefixes of any binary expansion and a specific $\beta$-expansion, for every computable $\beta \in (1,2)$. The proof of the main statement crucially hinges on the development of a new dynamical system that generates this specific $\beta$-expansion. This dynamical system can be implemented on a computer. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 452,322 |
1812.09044 | LEAFAGE: Example-based and Feature importance-based Explanations for
Black-box ML models | As machine learning models become more accurate, they typically become more complex and uninterpretable by humans. The black-box character of these models holds back their acceptance in practice, especially in high-risk domains where the consequences of failure could be catastrophic, such as health-care or defense. Providing understandable and useful explanations behind ML models or predictions can increase the trust of the user. Example-based reasoning, which entails leveraging previous experience with analogous tasks to make a decision, is a well-known strategy for problem solving and justification. This work presents a new explanation extraction method called LEAFAGE, for a prediction made by any black-box ML model. The explanation consists of the visualization of similar examples from the training set and the importance of each feature. Moreover, these explanations are contrastive, which aims to take the expectations of the user into account. LEAFAGE is evaluated in terms of fidelity to the underlying black-box model and usefulness to the user. The results showed that LEAFAGE performs better overall than the current state-of-the-art method LIME in terms of fidelity, on ML models with a non-linear decision boundary. A user study was conducted which focused on revealing the differences between example-based and feature importance-based explanations. It showed that example-based explanations performed significantly better than feature importance-based explanations, in terms of perceived transparency, information sufficiency, competence and confidence. Counter-intuitively, when the gained knowledge of the participants was tested, it showed that they learned less about the black-box model after seeing a feature importance-based explanation than seeing no explanation at all. The participants found feature importance-based explanation vague and hard to generalize to other instances. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 117,094
2410.15247 | Tensor-Fused Multi-View Graph Contrastive Learning | Graph contrastive learning (GCL) has emerged as a promising approach to enhance graph neural networks' (GNNs) ability to learn rich representations from unlabeled graph-structured data. However, current GCL models face challenges with computational demands and limited feature utilization, often relying only on basic graph properties like node degrees and edge attributes. This constrains their capacity to fully capture the complex topological characteristics of real-world phenomena represented by graphs. To address these limitations, we propose Tensor-Fused Multi-View Graph Contrastive Learning (TensorMV-GCL), a novel framework that integrates extended persistent homology (EPH) with GCL representations and facilitates multi-scale feature extraction. Our approach uniquely employs tensor aggregation and compression to fuse information from graph and topological features obtained from multiple augmented views of the same graph. By incorporating tensor concatenation and contraction modules, we reduce computational overhead by separating feature tensor aggregation and transformation. Furthermore, we enhance the quality of learned topological features and model robustness through noise-injected EPH. Experiments on molecular, bioinformatic, and social network datasets demonstrate TensorMV-GCL's superiority, outperforming 15 state-of-the-art methods in graph classification tasks across 9 out of 11 benchmarks while achieving comparable results on the remaining two. The code for this paper is publicly available at https://github.com/CS-SAIL/Tensor-MV-GCL.git. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,439 |
2408.12926 | Balancing AoI and Rate for Mission-Critical and eMBB Coexistence with
Puncturing, NOMA, and RSMA in Cellular Uplink | Through the lens of average and peak age-of-information (AoI), this paper takes a fresh look into the uplink medium access solutions for mission-critical (MC) communication coexisting with enhanced mobile broadband (eMBB) service. Considering the stochastic packet arrivals from an MC user, we study three access schemes: orthogonal multiple access (OMA) with eMBB preemption (puncturing), non-orthogonal multiple access (NOMA), and rate-splitting multiple access (RSMA), the latter two both with concurrent eMBB transmissions. Puncturing is found to reduce both average AoI and peak AoI (PAoI) violation probability but at the expense of decreased eMBB user rates and increased signaling complexity. Conversely, NOMA and RSMA offer higher eMBB rates but may lead to MC packet loss and AoI degradation. The paper systematically investigates the conditions under which NOMA or RSMA can closely match the average AoI and PAoI violation performance of puncturing while maintaining data rate gains. Closed-form expressions for average AoI and PAoI violation probability are derived, and conditions on the eMBB and MC channel gain difference with respect to the base station are analyzed. Additionally, optimal power and rate splitting factors in RSMA are determined through an exhaustive search to minimize MC outage probability. Notably, our results indicate that with a small loss in the average AoI and PAoI violation probability the eMBB rate in NOMA and RSMA can be approximately five times higher than that achieved through puncturing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 482,939
0806.0909 | Outage and Local Throughput and Capacity of Random Wireless Networks | Outage probabilities and single-hop throughput are two important performance metrics that have been evaluated for certain specific types of wireless networks. However, there is a lack of comprehensive results for larger classes of networks, and there is no systematic approach that permits the convenient comparison of the performance of networks with different geometries and levels of randomness. The uncertainty cube is introduced to categorize the uncertainty present in a network. The three axes of the cube represent the three main potential sources of uncertainty in interference-limited networks: the node distribution, the channel gains (fading), and the channel access (set of transmitting nodes). For the performance analysis, a new parameter, the so-called {\em spatial contention}, is defined. It measures the slope of the outage probability in an ALOHA network as a function of the transmit probability $p$ at $p=0$. Outage is defined as the event that the signal-to-interference ratio (SIR) is below a certain threshold in a given time slot. It is shown that the spatial contention is sufficient to characterize outage and throughput in large classes of wireless networks, corresponding to different positions on the uncertainty cube. Existing results are placed in this framework, and new ones are derived. Further, interpreting the outage probability as the SIR distribution, the ergodic capacity of unit-distance links is determined and compared to the throughput achievable for fixed (yet optimized) transmission rates. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 1,877 |
2204.12676 | The Multimarginal Optimal Transport Formulation of Adversarial
Multiclass Classification | We study a family of adversarial multiclass classification problems and provide equivalent reformulations in terms of: 1) a family of generalized barycenter problems introduced in the paper and 2) a family of multimarginal optimal transport problems where the number of marginals is equal to the number of classes in the original classification problem. These new theoretical results reveal a rich geometric structure of adversarial learning problems in multiclass classification and extend recent results restricted to the binary classification setting. A direct computational implication of our results is that by solving either the barycenter problem and its dual, or the MOT problem and its dual, we can recover the optimal robust classification rule and the optimal adversarial strategy for the original adversarial problem. Examples with synthetic and real data illustrate our results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 293,559 |
2312.03140 | FlexModel: A Framework for Interpretability of Distributed Large
Language Models | With the growth of large language models, now incorporating billions of parameters, the hardware prerequisites for their training and deployment have seen a corresponding increase. Although existing tools facilitate model parallelization and distributed training, deeper model interactions, crucial for interpretability and responsible AI techniques, still demand thorough knowledge of distributed computing. This often hinders contributions from researchers with machine learning expertise but limited distributed computing background. Addressing this challenge, we present FlexModel, a software package providing a streamlined interface for engaging with models distributed across multi-GPU and multi-node configurations. The library is compatible with existing model distribution libraries and encapsulates PyTorch models. It exposes user-registerable HookFunctions to facilitate straightforward interaction with distributed model internals, bridging the gap between distributed and single-device model paradigms. Primarily, FlexModel enhances accessibility by democratizing model interactions and promotes more inclusive research in the domain of large-scale neural networks. The package is found at https://github.com/VectorInstitute/flex_model. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | true | 413,149 |
1912.00462 | Statistical Economies of Scale in Battery Sharing | The goal of this paper is to shed light on the statistical economies of scale achievable from sharing of storage between renewable generators. We conduct an extensive study using real-world wind data from a grid of equispaced wind generators sharing a common battery. We assume each generator is contracted to meet a certain demand profile to a prescribed level of reliability. We find that the statistical diversity in wind generation across different locations yields useful economies of scale once the grid spacing exceeds 200 km. When the grid spacing exceeds 500 km, we find that the economies grow dramatically: The shared battery size becomes insensitive to the number of participating generators. This means that the generators can access a common, shared battery and collectively achieve the same reliability they would have, had each of them had the entire battery to themselves. To provide a rigorous foundation for this remarkable observation, we propose a mathematical model that demonstrates this phenomenon, assuming that the net generation (generation minus demand) processes associated with the generators are statistically independent. The result is derived by characterizing the large deviations exponent of the loss of load probability with increasing battery size, and showing that this exponent is invariant with the number of generators. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 155,781
2102.10558 | Inconsistency thresholds for incomplete pairwise comparison matrices | Pairwise comparison matrices are increasingly used in settings where some pairs are missing. However, there exist few inconsistency indices for similar incomplete data sets and no reasonable measure has an associated threshold. This paper generalises the famous rule of thumb for the acceptable level of inconsistency, proposed by Saaty, to incomplete pairwise comparison matrices. The extension is based on choosing the missing elements such that the maximal eigenvalue of the incomplete matrix is minimised. Consequently, the well-established values of the random index cannot be adopted: the inconsistency of random matrices is found to be the function of matrix size and the number of missing elements, with a nearly linear dependence in the case of the latter variable. Our results can be directly built into decision-making software and used by practitioners as a statistical criterion for accepting or rejecting an incomplete pairwise comparison matrix. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 221,142 |
1812.01410 | Compressive Classification (Machine Learning without learning) | Compressive learning is a framework where (so far unsupervised) learning tasks use not the entire dataset but a compressed summary (sketch) of it. We propose a compressive learning classification method, and a novel sketch function for images. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 115,516 |
1903.10489 | Interplay Between NOMA and Other Emerging Technologies: A Survey | Non-orthogonal multiple access (NOMA) has been widely recognized as a promising way to scale up the number of users, enhance the spectral efficiency, and improve the user fairness in wireless networks, by allowing more than one user to share one wireless resource. NOMA can be flexibly combined with many existing wireless technologies and emerging ones including multiple-input multiple-output (MIMO), massive MIMO, millimeter wave communications, cognitive and cooperative communications, visible light communications, physical layer security, energy harvesting, wireless caching, and so on. Combination of NOMA with these technologies can further increase scalability, spectral efficiency, energy efficiency, and greenness of future communication networks. This paper provides a comprehensive survey of the interplay between NOMA and the above technologies. The emphasis is on how the above techniques can benefit from NOMA and vice versa. Moreover, challenges and future research directions are identified. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 125,285 |
2406.01153 | Safety-Critical Control of Euler-Lagrange Systems Subject to Multiple
Obstacles and Velocity Constraints | This paper studies the safety-critical control problem for Euler-Lagrange (EL) systems subject to multiple ball obstacles and velocity constraints in accordance with affordable velocity ranges. A key strategy is to exploit the underlying inner-outer-loop structure for the design of a new cascade controller for the class of EL systems. In particular, the outer-loop controller is developed based on quadratic programming (QP) to avoid ball obstacles and generate velocity reference signals fulfilling the velocity limitation. Taking full advantage of the conservation-of-energy property, a nonlinear velocity-tracking controller is designed to form the inner loop. One major difficulty is caused by the possible non-Lipschitz continuity of the standard QP algorithm when there are multiple constraints. To solve this problem, we propose a refined QP algorithm with the feasible set reshaped by an appropriately chosen positive basis such that the feasibility is retained while the resulting outer-loop controller is locally Lipschitz. It is proved that the constraint-satisfaction problem is solvable as long as the ball obstacles satisfy a mild distance condition. The proposed design is validated by numerical simulation and an experiment based on a $2$-link planar manipulator. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 460,193 |
2003.04845 | Hierarchical Human Parsing with Typed Part-Relation Reasoning | Human parsing is for pixel-wise human semantic understanding. As human bodies are inherently hierarchically structured, how to model human structures is the central theme in this task. Focusing on this, we seek to simultaneously exploit the representational capacity of deep graph networks and the hierarchical human structures. In particular, we provide the following two contributions. First, three kinds of part relations, i.e., decomposition, composition, and dependency, are, for the first time, completely and precisely described by three distinct relation networks. This is in stark contrast to previous parsers, which only focus on a portion of the relations and adopt a type-agnostic relation modeling strategy. More expressive relation information can be captured by explicitly imposing the parameters in the relation networks to satisfy the specific characteristics of different relations. Second, previous parsers largely ignore the need for an approximation algorithm over the loopy human hierarchy, while we instead address an iterative reasoning process, by assimilating generic message-passing networks with their edge-typed, convolutional counterparts. With these efforts, our parser lays the foundation for more sophisticated and flexible human relation patterns of reasoning. Comprehensive experiments on five datasets demonstrate that our parser sets a new state-of-the-art on each. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 167,677
2410.01707 | Interpretable Contrastive Monte Carlo Tree Search Reasoning | We propose SC-MCTS*: a novel Monte Carlo Tree Search (MCTS) reasoning algorithm for Large Language Models (LLMs) that significantly improves both reasoning accuracy and speed. Our motivation comes from: 1. Previous MCTS LLM reasoning works often overlooked their biggest drawback--slower speed compared to CoT; 2. Previous research mainly used MCTS as a tool for LLM reasoning on various tasks with limited quantitative analysis or ablation studies of its components from a reasoning interpretability perspective; 3. The reward model is the most crucial component in MCTS, but previous work has rarely conducted in-depth study or improvement of MCTS's reward models. Thus, we conducted extensive ablation studies and quantitative analysis on the components of MCTS, revealing the impact of each component on the MCTS reasoning performance of LLMs. Building on this, (i) we designed a highly interpretable reward model based on the principle of contrastive decoding and (ii) achieved an average speed improvement of 51.9% per node using speculative decoding. Additionally, (iii) we improved the UCT node selection strategy and backpropagation used in previous works, resulting in significant performance improvement. We outperformed o1-mini by an average of 17.4% on the Blocksworld multi-step reasoning dataset using Llama-3.1-70B with SC-MCTS*. Our code is available at https://github.com/zitian-gao/SC-MCTS. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 493,904
1901.10837 | Noise-tolerant fair classification | Fairness-aware learning involves designing algorithms that do not discriminate with respect to some sensitive feature (e.g., race or gender). Existing work on the problem operates under the assumption that the sensitive feature available in one's training sample is perfectly reliable. This assumption may be violated in many real-world cases: for example, respondents to a survey may choose to conceal or obfuscate their group identity out of fear of potential discrimination. This poses the question of whether one can still learn fair classifiers given noisy sensitive features. In this paper, we answer the question in the affirmative: we show that if one measures fairness using the mean-difference score, and sensitive features are subject to noise from the mutually contaminated learning model, then owing to a simple identity we only need to change the desired fairness-tolerance. The requisite tolerance can be estimated by leveraging existing noise-rate estimators from the label noise literature. We finally show that our procedure is empirically effective on two case-studies involving sensitive feature censoring. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 120,119 |
2408.08218 | The Generating Idempotent Is a Minimum-Weight Codeword for Some Binary
BCH Codes | In a paper from 2015, Ding et al. (IEEE Trans. IT, May 2015) conjectured that for odd $m$, the minimum distance of the binary BCH code of length $2^m-1$ and designed distance $2^{m-2}+1$ is equal to the Bose distance calculated in the same paper. In this paper, we prove the conjecture. In fact, we prove a stronger result suggested by Ding et al.: the weight of the generating idempotent is equal to the Bose distance for both odd and even $m$. Our main tools are some new properties of the so-called fibbinary integers, in particular, the splitting field of related polynomials, and the relation of these polynomials to the idempotent of the BCH code. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 480,908 |
2206.08002 | The convergent Indian buffet process | We propose a new Bayesian nonparametric prior for latent feature models, which we call the convergent Indian buffet process (CIBP). We show that under the CIBP, the number of latent features is distributed as a Poisson distribution with the mean monotonically increasing but converging to a certain value as the number of objects goes to infinity. That is, the expected number of features is bounded above even when the number of objects goes to infinity, unlike the standard Indian buffet process under which the expected number of features increases with the number of objects. We provide two alternative representations of the CIBP based on a hierarchical distribution and a completely random measure, respectively, which are of independent interest. The proposed CIBP is assessed on a high-dimensional sparse factor model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,965 |
2003.03560 | Periodic event-triggered output regulation for linear multi-agent
systems | This study considers the problem of periodic event-triggered (PET) cooperative output regulation for a class of linear multi-agent systems. The advantage of PET output regulation is that the data transmission and the triggering condition need only be monitored at discrete sampling instants. It is assumed that only a small number of agents can have access to the system matrix and states of the leader. Meanwhile, the PET mechanism is considered not only in the communication between various agents, but also in the sensor-to-controller and controller-to-actuator transmission channels for each agent. The above problem set-up brings some challenges to the controller design and stability analysis. Based on a novel PET distributed observer, a PET dynamic output feedback control method is developed for each follower. Compared with the existing works, our method can naturally exclude the Zeno behavior, and the inter-event time becomes a multiple of the sampling period. Furthermore, for every follower, the minimum inter-event time can be determined \textit{a priori}, and computed directly without knowledge of the leader information. An example is given to verify and illustrate the effectiveness of the new design scheme. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 167,272
1910.04476 | Image Super-Resolution via Attention based Back Projection Networks | Deep learning based image Super-Resolution (SR) has shown rapid development due to its ability to digest big data. Generally, deeper and wider networks can extract richer feature maps and generate SR images with remarkable quality. However, the more complex the network, the more time is required for practical applications. It is important to have a simplified network for efficient image SR. In this paper, we propose an Attention based Back Projection Network (ABPN) for image super-resolution. Similar to some recent works, we believe that the back projection mechanism can be further developed for SR. Enhanced back projection blocks are suggested to iteratively update low- and high-resolution feature residues. Inspired by recent studies on attention models, we propose a Spatial Attention Block (SAB) to learn the cross-correlation across features at different layers. Based on the assumption that a good SR image should be close to the original LR image after down-sampling, we propose a Refined Back Projection Block (RBPB) for final reconstruction. Extensive experiments on some public and AIM2019 Image Super-Resolution Challenge datasets show that the proposed ABPN can provide state-of-the-art or even better performance in both quantitative and qualitative measurements. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 148,780
1301.7402 | From Likelihood to Plausibility | Several authors have explained that the likelihood ratio measures the strength of the evidence represented by observations in statistical problems. This idea works fine when the goal is to evaluate the strength of the available evidence for a simple hypothesis versus another simple hypothesis. However, the applicability of this idea is limited to simple hypotheses because the likelihood function is primarily defined on points (simple hypotheses) of the parameter space. In this paper we define a general weight of evidence that is applicable to both simple and composite hypotheses. It is based on the Dempster-Shafer concept of plausibility and is shown to be a generalization of the likelihood ratio. Functional models are of a fundamental importance for the general weight of evidence proposed in this paper. The relevant concepts and ideas are explained by means of a familiar urn problem and the general analysis of a real-world medical problem is presented. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 21,635 |
2212.08339 | Generalization Bounds for Inductive Matrix Completion in Low-noise
Settings | We study inductive matrix completion (matrix completion with side information) under an i.i.d. subgaussian noise assumption at a low noise regime, with uniform sampling of the entries. We obtain for the first time generalization bounds with the following three properties: (1) they scale like the standard deviation of the noise and in particular approach zero in the exact recovery case; (2) even in the presence of noise, they converge to zero when the sample size approaches infinity; and (3) for a fixed dimension of the side information, they only have a logarithmic dependence on the size of the matrix. Differently from many works in approximate recovery, we present results both for bounded Lipschitz losses and for the absolute loss, with the latter relying on Talagrand-type inequalities. The proofs create a bridge between two approaches to the theoretical analysis of matrix completion, since they consist in a combination of techniques from both the exact recovery literature and the approximate recovery literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 336,715 |
2401.17958 | Convergence Analysis for General Probability Flow ODEs of Diffusion
Models in Wasserstein Distances | Score-based generative modeling with probability flow ordinary differential equations (ODEs) has achieved remarkable success in a variety of applications. While various fast ODE-based samplers have been proposed in the literature and employed in practice, the theoretical understandings about convergence properties of the probability flow ODE are still quite limited. In this paper, we provide the first non-asymptotic convergence analysis for a general class of probability flow ODE samplers in 2-Wasserstein distance, assuming accurate score estimates and smooth log-concave data distributions. We then consider various examples and establish results on the iteration complexity of the corresponding ODE-based samplers. Our proof technique relies on spelling out explicitly the contraction rate for the continuous-time ODE and analyzing the discretization and score-matching errors using synchronous coupling; the challenge in our analysis mainly arises from the inherent non-autonomy of the probability flow ODE and the specific exponential integrator that we study. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,383 |
1704.04198 | Room for improvement in automatic image description: an error analysis | In recent years we have seen rapid and significant progress in automatic image description, but what are the open problems in this area? Most work has been evaluated using text-based similarity metrics, which only indicate that there have been improvements, without explaining what has improved. In this paper, we present a detailed error analysis of the descriptions generated by a state-of-the-art attention-based model. Our analysis operates on two levels: first we check the descriptions for accuracy, and then we categorize the types of errors we observe in the inaccurate descriptions. We find that only 20% of the descriptions are free from errors, and, surprisingly, that 26% are unrelated to the image. Finally, we manually correct the most frequently occurring error types (e.g. gender identification) to estimate the performance reward for addressing these errors, observing gains of 0.2--1 BLEU point per type. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 71,766
2410.15045 | Distribution-Aware Compensation Design for Sustainable Data Rights in
Machine Learning | Modern distributed learning systems face a critical challenge when clients request the removal of their data influence from trained models, as this process can significantly destabilize system performance and affect remaining participants. We propose an innovative mechanism that views this challenge through the lens of game theory, establishing a leader-follower framework where a central coordinator provides strategic incentives to maintain system stability during data removal operations. Our approach quantifies the ripple effects of data removal through a comprehensive analytical model that captures both system-wide and participant-specific impacts. We establish mathematical foundations for measuring participant utility and system outcomes, revealing critical insights into how data diversity influences both individual decisions and overall system stability. The framework incorporates a computationally efficient solution method that addresses the inherent complexity of optimizing participant interactions and resource allocation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 500,340 |
1807.05307 | How Do Classifiers Induce Agents To Invest Effort Strategically? | Algorithms are often used to produce decision-making rules that classify or evaluate individuals. When these individuals have incentives to be classified a certain way, they may behave strategically to influence their outcomes. We develop a model for how strategic agents can invest effort in order to change the outcomes they receive, and we give a tight characterization of when such agents can be incentivized to invest specified forms of effort into improving their outcomes as opposed to "gaming" the classifier. We show that whenever any "reasonable" mechanism can do so, a simple linear mechanism suffices. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | 102,898 |
0905.2639 | Information-theoretic limits of selecting binary graphical models in
high dimensions | The problem of graphical model selection is to correctly estimate the graph structure of a Markov random field given samples from the underlying distribution. We analyze the information-theoretic limitations of the problem of graph selection for binary Markov random fields under high-dimensional scaling, in which the graph size $p$ and the number of edges $k$, and/or the maximal node degree $d$ are allowed to increase to infinity as a function of the sample size $n$. For pairwise binary Markov random fields, we derive both necessary and sufficient conditions for correct graph selection over the class $\mathcal{G}_{p,k}$ of graphs on $p$ vertices with at most $k$ edges, and over the class $\mathcal{G}_{p,d}$ of graphs on $p$ vertices with maximum degree at most $d$. For the class $\mathcal{G}_{p, k}$, we establish the existence of constants $c$ and $c'$ such that if $n < c k \log p$, any method has error probability at least 1/2 uniformly over the family, and we demonstrate a graph decoder that succeeds with high probability uniformly over the family for sample sizes $n > c' k^2 \log p$. Similarly, for the class $\mathcal{G}_{p,d}$, we exhibit constants $c$ and $c'$ such that for $n < c d^2 \log p$, any method fails with probability at least 1/2, and we demonstrate a graph decoder that succeeds with high probability for $n > c' d^3 \log p$. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 3,704
1907.08082 | Amortized Monte Carlo Integration | Current approaches to amortizing Bayesian inference focus solely on approximating the posterior distribution. Typically, this approximation is, in turn, used to calculate expectations for one or more target functions - a computational pipeline which is inefficient when the target function(s) are known upfront. In this paper, we address this inefficiency by introducing AMCI, a method for amortizing Monte Carlo integration directly. AMCI operates similarly to amortized inference but produces three distinct amortized proposals, each tailored to a different component of the overall expectation calculation. At runtime, samples are produced separately from each amortized proposal, before being combined to an overall estimate of the expectation. We show that while existing approaches are fundamentally limited in the level of accuracy they can achieve, AMCI can theoretically produce arbitrarily small errors for any integrable target function using only a single sample from each proposal at runtime. We further show that it is able to empirically outperform the theoretically optimal self-normalized importance sampler on a number of example problems. Furthermore, AMCI allows not only for amortizing over datasets but also amortizing over target functions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 139,027 |
2212.09008 | Hidden State Approximation in Recurrent Neural Networks Using Continuous
Particle Filtering | Using historical data to predict future events has many applications in the real world, such as stock price prediction and robot localization. In the past decades, convolutional long short-term memory (LSTM) networks have achieved extraordinary success with sequential data in related fields. However, traditional recurrent neural networks (RNNs) keep the hidden states in a deterministic way. In this paper, we use particles to approximate the distribution of the latent state and show how this can be extended to a more complex form, i.e., the Encoder-Decoder mechanism. With the proposed continuous differentiable scheme, our model is capable of adaptively extracting valuable information and updating the latent state according to the Bayes rule. Our empirical studies demonstrate the effectiveness of our method in prediction tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 336,960
2407.03548 | HiDiff: Hybrid Diffusion Framework for Medical Image Segmentation | Medical image segmentation has been significantly advanced with the rapid development of deep learning (DL) techniques. Existing DL-based segmentation models are typically discriminative; i.e., they aim to learn a mapping from the input image to segmentation masks. However, these discriminative methods neglect the underlying data distribution and intrinsic class characteristics, suffering from unstable feature space. In this work, we propose to complement discriminative segmentation methods with the knowledge of underlying data distribution from generative models. To that end, we propose a novel hybrid diffusion framework for medical image segmentation, termed HiDiff, which can synergize the strengths of existing discriminative segmentation models and new generative diffusion models. HiDiff comprises two key components: discriminative segmentor and diffusion refiner. First, we utilize any conventional trained segmentation models as discriminative segmentor, which can provide a segmentation mask prior for diffusion refiner. Second, we propose a novel binary Bernoulli diffusion model (BBDM) as the diffusion refiner, which can effectively, efficiently, and interactively refine the segmentation mask by modeling the underlying data distribution. Third, we train the segmentor and BBDM in an alternate-collaborative manner to mutually boost each other. Extensive experimental results on abdomen organ, brain tumor, polyps, and retinal vessels segmentation datasets, covering four widely-used modalities, demonstrate the superior performance of HiDiff over existing medical segmentation algorithms, including the state-of-the-art transformer- and diffusion-based ones. In addition, HiDiff excels at segmenting small objects and generalizing to new datasets. Source codes are made available at https://github.com/takimailto/HiDiff. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 470,189
2306.09481 | Leveraging Residue Number System for Designing High-Precision Analog
Deep Neural Network Accelerators | Achieving high accuracy, while maintaining good energy efficiency, in analog DNN accelerators is challenging as high-precision data converters are expensive. In this paper, we overcome this challenge by using the residue number system (RNS) to compose high-precision operations from multiple low-precision operations. This enables us to eliminate the information loss caused by the limited precision of the ADCs. Our study shows that RNS can achieve 99% FP32 accuracy for state-of-the-art DNN inference using data converters with only $6$-bit precision. We propose using redundant RNS to achieve a fault-tolerant analog accelerator. In addition, we show that RNS can reduce the energy consumption of the data converters within an analog accelerator by several orders of magnitude compared to a regular fixed-point approach. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 373,848 |
1110.2755 | Efficient Tracking of Large Classes of Experts | In the framework of prediction of individual sequences, sequential prediction methods are to be constructed that perform nearly as well as the best expert from a given class. We consider prediction strategies that compete with the class of switching strategies that can segment a given sequence into several blocks, and follow the advice of a different "base" expert in each block. As usual, the performance of the algorithm is measured by the regret defined as the excess loss relative to the best switching strategy selected in hindsight for the particular sequence to be predicted. In this paper we construct prediction strategies of low computational cost for the case where the set of base experts is large. In particular we provide a method that can transform any prediction algorithm $\mathcal{A}$ that is designed for the base class into a tracking algorithm. The resulting tracking algorithm can take advantage of the prediction performance and potential computational efficiency of $\mathcal{A}$ in the sense that it can be implemented with time and space complexity only $O(n^{\gamma} \ln n)$ times larger than that of $\mathcal{A}$, where $n$ is the time horizon and $\gamma \ge 0$ is a parameter of the algorithm. With $\mathcal{A}$ properly chosen, our algorithm achieves a regret bound of optimal order for $\gamma>0$, and only $O(\ln n)$ times larger than the optimal order for $\gamma=0$ for all typical regret bound types we examined. For example, for predicting binary sequences with switching parameters under the logarithmic loss, our method achieves the optimal $O(\ln n)$ regret rate with time complexity $O(n^{1+\gamma}\ln n)$ for any $\gamma\in (0,1)$. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 12,622
2107.10390 | Reinforcement Learning Agent Training with Goals for Real World Tasks | Reinforcement Learning (RL) is a promising approach for solving various control, optimization, and sequential decision making tasks. However, designing reward functions for complex tasks (e.g., with multiple objectives and safety constraints) can be challenging for most users and usually requires multiple expensive trials (reward function hacking). In this paper we propose a specification language (Inkling Goal Specification) for complex control and optimization tasks, which is very close to natural language and allows a practitioner to focus on problem specification instead of reward function hacking. The core elements of our framework are: (i) mapping the high level language to a predicate temporal logic tailored to control and optimization tasks, (ii) a novel automaton-guided dense reward generation that can be used to drive RL algorithms, and (iii) a set of performance metrics to assess the behavior of the system. We include a set of experiments showing that the proposed method provides great ease of use to specify a wide range of real world tasks; and that the reward generated is able to drive the policy training to achieve the specified goal. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 247,280 |
2010.00990 | An alternative proof of the vulnerability of retrieval in high intrinsic
dimensionality neighborhood | This paper investigates the vulnerability of nearest neighbor search, which is a pivotal tool in data analysis and machine learning. The vulnerability is gauged as the relative amount of perturbation that an attacker needs to add to a dataset point in order to modify its neighbor rank w.r.t. a query. The statistical distribution of this quantity is derived from simple assumptions. Experiments on six large-scale datasets validate this model up to some outliers, which are explained in terms of violations of the assumptions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 198,468
2103.13944 | Preserve, Promote, or Attack? GNN Explanation via Topology Perturbation | Prior works on formalizing explanations of a graph neural network (GNN) focus on a single use case - to preserve the prediction results through identifying important edges and nodes. In this paper, we develop a multi-purpose interpretation framework by acquiring a mask that indicates topology perturbations of the input graphs. We pack the framework into an interactive visualization system (GNNViz) which can fulfill multiple purposes: Preserve, Promote, or Attack GNN's predictions. We illustrate our approach's novelty and effectiveness with three case studies: First, GNNViz can assist non-expert users to easily explore the relationship between graph topology and GNN's decision (Preserve), or to manipulate the prediction (Promote or Attack) for an image classification task on MS-COCO; Second, on the Pokec social network dataset, our framework can uncover unfairness and demographic biases; Lastly, it compares with a state-of-the-art GNN explainer baseline on a synthetic dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 226,672
2406.19102 | Statements: Universal Information Extraction from Tables with Large
Language Models for ESG KPIs | Environment, Social, and Governance (ESG) KPIs assess an organization's performance on issues such as climate change, greenhouse gas emissions, water consumption, waste management, human rights, diversity, and policies. ESG reports convey this valuable quantitative information through tables. Unfortunately, extracting this information is difficult due to high variability in table structure as well as content. We propose Statements, a novel domain-agnostic data structure for extracting quantitative facts and related information. We propose translating tables to statements as a new supervised deep-learning universal information extraction task. We introduce SemTabNet - a dataset of over 100K annotated tables. Investigating a family of T5-based Statement Extraction Models, we find that our best model generates statements which are 82% similar to the ground truth (compared to a baseline of 21%). We demonstrate the advantages of statements by applying our model to over 2700 tables from ESG reports. The homogeneous nature of statements permits exploratory data analysis on the expansive information found in large collections of ESG reports. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 468,302
2005.04817 | River environmental restoration based on random observations of a
non-smooth stochastic dynamical system | Earth and soils are indispensable elements of the river environment. Dam-downstream environments and ecosystems have been severely affected by reduced or even halted sediment supply from upstream. Replenishing earth and soils from outside the river has been considered an effective way to mitigate this issue. However, its cost-effective implementation has not been considered from a theoretical standpoint. This paper presents a tractable new stochastic control model to deal with this issue. The sediment dynamics in the river environment follow non-smooth, continuous-time piecewise deterministic dynamics. The model assumes that observation of the sediment dynamics is carried out only randomly and discretely, and that the sediment can be replenished, at a cost, at each observation time. This partial observation assumption is consistent with the fact that continuously obtaining environmental information is difficult in applications. The performance index penalizing sediment depletion has a non-smooth term as well. We demonstrate that these non-smoothness factors harmonize with a dynamic programming principle, and derive the optimality equation in a degenerate elliptic form governing the most cost-efficient sediment replenishment policy. We analytically derive and verify an exact solution under a simplified condition for a discounted case, an ergodic case, and a complete information case. A more realistic case is handled using a high-resolution finite difference scheme. We then provide the optimal sediment replenishment policy numerically. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 176,569
1804.09461 | Structured Pruning for Efficient ConvNets via Incremental Regularization | Parameter pruning is a promising approach for CNN compression and acceleration that eliminates redundant model parameters with tolerable performance degradation. Despite its effectiveness, existing regularization-based parameter pruning methods usually drive weights towards zero with large and constant regularization factors, which neglects the fragility of the expressiveness of CNNs and thus calls for a more gentle regularization scheme so that the networks can adapt during pruning. To achieve this, we propose a novel regularization-based pruning method, named IncReg, to incrementally assign different regularization factors to different weights based on their relative importance. Empirical analysis on the CIFAR-10 dataset verifies the merits of IncReg. Further extensive experiments with popular CNNs on the CIFAR-10 and ImageNet datasets show that IncReg achieves results comparable to or even better than the state of the art. Our source code and trained models are available here: https://github.com/mingsun-tse/caffe_increg. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 95,975
2306.16956 | MEMD-ABSA: A Multi-Element Multi-Domain Dataset for Aspect-Based
Sentiment Analysis | Aspect-based sentiment analysis is a long-standing research interest in the field of opinion mining, and in recent years, researchers have gradually shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA tasks. However, the datasets currently used in the research are limited to individual elements of specific tasks, usually focusing on in-domain settings, ignoring implicit aspects and opinions, and with a small data scale. To address these issues, we propose a large-scale Multi-Element Multi-Domain dataset (MEMD) that covers the four elements across five domains, including nearly 20,000 review sentences and 30,000 quadruples annotated with explicit and implicit aspects and opinions for ABSA research. Meanwhile, we evaluate generative and non-generative baselines on multiple ABSA subtasks under the open domain setting, and the results show that open domain ABSA as well as mining implicit aspects and opinions remain ongoing challenges to be addressed. The datasets are publicly released at \url{https://github.com/NUSTM/MEMD-ABSA}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 376,547 |
2003.07642 | Scalable Traffic Models for Scheduling of Linear Periodic
Event-Triggered Controllers | This paper addresses the problem of modeling and scheduling the transmissions generated by multiple event-triggered control (ETC) loops sharing a network. We present a method to build a symbolic traffic model of periodic ETC (PETC), which by construction provides an exact simulation of such traffic. The model is made in such a way as to avoid the combinatorial explosion that is typical of symbolic models in many applications. It is augmented with early triggering actions that can be used by a scheduler to mitigate communication conflicts. The complete networked control system is then modeled as a network of timed game automata, for which existing tools can generate a strategy that avoids communication conflicts, while keeping early triggers to a minimum. By construction, our proposed symbolic model is a quotient model of the PETC. It is relatively fast to build, and it generates few to no spurious transitions. We finally demonstrate modeling and scheduling for a numerical example. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 168,497 |
2106.03849 | SIMONe: View-Invariant, Temporally-Abstracted Object Representations via
Unsupervised Video Decomposition | To help agents reason about scenes in terms of their building blocks, we wish to extract the compositional structure of any given scene (in particular, the configuration and characteristics of objects comprising the scene). This problem is especially difficult when scene structure needs to be inferred while also estimating the agent's location/viewpoint, as the two variables jointly give rise to the agent's observations. We present an unsupervised variational approach to this problem. Leveraging the shared structure that exists across different scenes, our model learns to infer two sets of latent representations from RGB video input alone: a set of "object" latents, corresponding to the time-invariant, object-level contents of the scene, as well as a set of "frame" latents, corresponding to global time-varying elements such as viewpoint. This factorization of latents allows our model, SIMONe, to represent object attributes in an allocentric manner which does not depend on viewpoint. Moreover, it allows us to disentangle object dynamics and summarize their trajectories as time-abstracted, view-invariant, per-object properties. We demonstrate these capabilities, as well as the model's performance in terms of view synthesis and instance segmentation, across three procedurally generated video datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 239,486 |
2201.02297 | Time Series Forecasting Using Fuzzy Cognitive Maps: A Survey | Among various soft computing approaches for time series forecasting, Fuzzy Cognitive Maps (FCM) have shown remarkable results as a tool to model and analyze the dynamics of complex systems. FCM have similarities to recurrent neural networks and can be classified as a neuro-fuzzy method. In other words, FCMs are a mixture of fuzzy logic, neural network, and expert system aspects, which act as a powerful tool for simulating and studying the dynamic behavior of complex systems. The most interesting features are knowledge interpretability, dynamic characteristics and learning capability. The goal of this survey paper is mainly to present an overview on the most relevant and recent FCM-based time series forecasting models proposed in the literature. In addition, this article considers an introduction on the fundamentals of FCM model and learning methodologies. Also, this survey provides some ideas for future research to enhance the capabilities of FCM in order to cover some challenges in the real-world experiments such as handling non-stationary data and scalability issues. Moreover, equipping FCMs with fast learning algorithms is one of the major concerns in this area. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 274,498 |
2405.07472 | GaussianVTON: 3D Human Virtual Try-ON via Multi-Stage Gaussian Splatting
Editing with Image Prompting | The increasing prominence of e-commerce has underscored the importance of Virtual Try-On (VTON). However, previous studies predominantly focus on the 2D realm and rely heavily on extensive data for training. Research on 3D VTON primarily centers on garment-body shape compatibility, a topic extensively covered in 2D VTON. Thanks to advances in 3D scene editing, a 2D diffusion model has now been adapted for 3D editing via multi-viewpoint editing. In this work, we propose GaussianVTON, an innovative 3D VTON pipeline integrating Gaussian Splatting (GS) editing with 2D VTON. To facilitate a seamless transition from 2D to 3D VTON, we propose, for the first time, the use of only images as editing prompts for 3D editing. To further address issues, e.g., face blurring, garment inaccuracy, and degraded viewpoint quality during editing, we devise a three-stage refinement strategy to gradually mitigate potential issues. Furthermore, we introduce a new editing strategy termed Edit Recall Reconstruction (ERR) to tackle the limitations of previous editing strategies in leading to complex geometric changes. Our comprehensive experiments demonstrate the superiority of GaussianVTON, offering a novel perspective on 3D VTON while also establishing a novel starting point for image-prompting 3D scene editing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,726 |
2010.06177 | COVID-19 Imaging Data Privacy by Federated Learning Design: A
Theoretical Framework | To address COVID-19 healthcare challenges, we need frequent sharing of health data, knowledge and resources at a global scale. However, in this digital age, data privacy is a big concern that requires the secure embedding of privacy assurance into the design of all technological solutions that use health data. In this paper, we introduce the differential privacy by design (dPbD) framework and discuss its embedding into the federated machine learning system. To limit the scope of our paper, we focus on the problem scenario of COVID-19 imaging data privacy for disease diagnosis by computer vision and deep learning approaches. We discuss the evaluation of the proposed design of federated machine learning systems and discuss how the dPbD framework can enhance data privacy in federated learning systems with scalability and robustness. We argue that scalable differentially private federated learning design is a promising solution for building a secure, private and collaborative machine learning model, such as is required to combat the COVID-19 challenge. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 200,391
2502.07511 | Quantitative evaluation of unsupervised clustering algorithms for
dynamic total-body PET image analysis | Background. Recently, dynamic total-body positron emission tomography (PET) imaging has become possible due to new scanner devices. While clustering algorithms have already been proposed for PET analysis, there is still little research systematically evaluating these algorithms for the processing of dynamic total-body PET images. Materials and methods. Here, we compare the performance of 15 unsupervised clustering methods, including K-means either by itself or after principal component analysis (PCA) or independent component analysis (ICA), Gaussian mixture model (GMM), fuzzy c-means (FCM), agglomerative clustering, spectral clustering, and several newer clustering algorithms, for classifying time activity curves (TACs) in dynamic PET images. We use dynamic total-body $^{15}$O-water PET images collected from 30 patients with suspected or confirmed coronary artery disease. To evaluate the clustering algorithms in a quantitative way, we use them to classify 5000 TACs from each image based on whether the curve is taken from brain, right heart ventricle, right kidney, lower right lung lobe, or urinary bladder. Results. According to our results, the best methods are GMM, FCM, and ICA combined with mini batch K-means, which classified the TACs with median accuracies of 89\%, 83\%, and 81\%, respectively, in a processing time of half a second or less on average for each image. Conclusion. GMM, FCM, and ICA with mini batch K-means show promise for dynamic total-body PET analysis. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 532,641
2407.03856 | Q-Adapter: Customizing Pre-trained LLMs to New Preferences with
Forgetting Mitigation | Large Language Models (LLMs), trained on large corpora, have demonstrated remarkable abilities. However, it may not be sufficient to directly apply open-source LLMs like Llama to certain real-world scenarios, since most of them are trained for \emph{general} purposes. Thus, the demands for customizing publicly available LLMs emerge, but are currently under-studied. In this work, we consider customizing pre-trained LLMs with new human preferences. Specifically, the LLM should not only meet the new preference but also preserve its original capabilities after customization. Drawing inspiration from the observation that human preference can be expressed as a reward model, we propose to cast LLM customization as optimizing the sum of two reward functions, one of which (denoted as $r_1$) was used to pre-train the LLM while the other (denoted as $r_2$) characterizes the new human preference. The obstacle here is that both reward functions are unknown, making the application of modern reinforcement learning methods infeasible. Thanks to the residual Q-learning framework, we can restore the customized LLM with the pre-trained LLM and the \emph{residual Q-function} without the reward function $r_1$. Moreover, we find that for a fixed pre-trained LLM, the reward function $r_2$ can be derived from the residual Q-function, enabling us to directly learn the residual Q-function from the new human preference data upon the Bradley-Terry model. We name our method Q-Adapter as it introduces an adapter module to approximate the residual Q-function for customizing the pre-trained LLM towards the new preference. Experiments based on the Llama-3.1 model on the DSP dataset and HH-RLHF dataset illustrate the superior effectiveness of Q-Adapter on both retaining existing knowledge and learning new preferences. Code is available at \url{https://github.com/mansicer/Q-Adapter}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 470,316
2011.07679 | Critical data analysis of COVID-19 spreading in Indonesia to measure the
readiness of new-normal policy | The COVID-19 pandemic has become a global issue. Various efforts have been made to break the chain of the spread of COVID-19. Indonesia's government issued a large-scale social restrictions policy to prevent the spread of COVID-19. However, the large-scale social restrictions policy impacted the Indonesian economy. After several considerations, the Indonesian government implemented a new-normal policy, which regulates activities outside the home with strict health protocols. This study's objective is to measure Indonesia's readiness level in the transition from the large-scale social restrictions period to the new-normal period. To quantify the readiness level, measurement parameters are required in the form of statistical analysis and forecasting modeling. Based on the results of statistical analysis and forecasting, over the past month, new confirmed cases more than doubled. Moreover, the growth rate of new confirmed cases increased dramatically compared to the predictions. Therefore, the government must review the new-normal policy again, taking into account not only economic factors but also health factors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 206,635
2008.10570 | Example-Based Named Entity Recognition | We present a novel approach to named entity recognition (NER) in the presence of scarce data that we call example-based NER. Our train-free few-shot learning approach takes inspiration from question-answering to identify entity spans in a new and unseen domain. In comparison with the current state-of-the-art, the proposed method performs significantly better, especially when using a low number of support examples. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 193,040 |
2407.02678 | Reasoning in Large Language Models: A Geometric Perspective | The advancement of large language models (LLMs) for real-world applications hinges critically on enhancing their reasoning capabilities. In this work, we explore the reasoning abilities of large language models (LLMs) through their geometrical understanding. We establish a connection between the expressive power of LLMs and the density of their self-attention graphs. Our analysis demonstrates that the density of these graphs defines the intrinsic dimension of the inputs to the MLP blocks. We demonstrate through theoretical analysis and toy examples that a higher intrinsic dimension implies a greater expressive capacity of the LLM. We further provide empirical evidence linking this geometric framework to recent advancements in methods aimed at enhancing the reasoning capabilities of LLMs. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 469,834 |
2206.03900 | Unsupervised Deformable Image Registration with Absent Correspondences
in Pre-operative and Post-Recurrence Brain Tumor MRI Scans | Registration of pre-operative and post-recurrence brain images is often needed to evaluate the effectiveness of brain glioma treatment. While recent deep learning-based deformable registration methods have achieved remarkable success with healthy brain images, most of them would be unable to accurately align images with pathologies due to the absent correspondences in the reference image. In this paper, we propose a deep learning-based deformable registration method that jointly estimates regions with absent correspondences and bidirectional deformation fields. A forward-backward consistency constraint is used to aid in the localization of the resection and recurrence region from voxels with absent correspondences in the two images. Results on 3D clinical data from the BraTS-Reg challenge demonstrate our method can improve image alignment compared to traditional and deep learning-based registration approaches with or without a cost function masking strategy. The source code is available at https://github.com/cwmok/DIRAC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 301,446
1905.03433 | MAP Inference via L2-Sphere Linear Program Reformulation | Maximum a posteriori (MAP) inference is an important task for graphical models. Due to complex dependencies among variables in realistic models, finding an exact solution for MAP inference is often intractable. Thus, many approximation methods have been developed, among which the linear programming (LP) relaxation based methods show promising performance. However, one major drawback of LP relaxation is that it may give fractional solutions. Instead of presenting a tighter relaxation, in this work we propose a continuous but equivalent reformulation of the original MAP inference problem, called LS-LP. We add the L2-sphere constraint onto the original LP relaxation, leading to an intersected space with the local marginal polytope that is equivalent to the space of all valid integer label configurations. Thus, LS-LP is equivalent to the original MAP inference problem. We propose a perturbed alternating direction method of multipliers (ADMM) algorithm to optimize the LS-LP problem, by adding a sufficiently small perturbation epsilon onto the objective function and constraints. We prove that the perturbed ADMM algorithm globally converges to the epsilon-Karush-Kuhn-Tucker (epsilon-KKT) point of the LS-LP problem. The convergence rate is also analyzed. Experiments on several benchmark datasets from the Probabilistic Inference Challenge (PIC 2011) and OpenGM 2 show competitive performance of our proposed method against state-of-the-art MAP inference methods. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 130,202
2402.03631 | CAT-SAM: Conditional Tuning for Few-Shot Adaptation of Segment Anything
Model | The recent Segment Anything Model (SAM) has demonstrated remarkable zero-shot capability and flexible geometric prompting in general image segmentation. However, SAM often struggles when handling various unconventional images, such as aerial, medical, and non-RGB images. This paper presents CAT-SAM, a ConditionAl Tuning network that adapts SAM toward various unconventional target tasks with just a few target samples. CAT-SAM freezes the entire SAM and adapts its mask decoder and image encoder simultaneously with a small number of learnable parameters. The core design is a prompt bridge structure that enables decoder-conditioned joint tuning of the heavyweight image encoder and the lightweight mask decoder. The bridging maps the prompt token of the mask decoder to the image encoder, fostering synergic adaptation of the encoder and the decoder with mutual benefits. We develop two representative tuning strategies for the image encoder, which lead to two CAT-SAM variants: one injecting learnable prompt tokens in the input space and the other inserting lightweight adapter networks. Extensive experiments over 11 unconventional tasks show that both CAT-SAM variants achieve superior target segmentation performance consistently, even under the very challenging one-shot adaptation setup. Project page: https://xiaoaoran.github.io/projects/CAT-SAM | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 427,105
1902.05128 | A nonlinear hyperelasticity model for single layer blue phosphorus based
on ab-initio calculations | A new hyperelastic membrane material model is proposed for single layer blue phosphorus ($\beta\text{-P}$), also known as blue phosphorene. The model is fully nonlinear and captures the anisotropy of $\beta\text{-P}$ at large strains. The material model is calibrated from density functional theory (DFT) calculations considering a set of elementary deformation states. Those are pure dilatation and uniaxial stretching along the armchair and zigzag directions. The material model is compared and validated with additional DFT results and existing DFT results from the literature, and the comparison shows good agreement. The new material model can be directly used within computational shell formulations that are for example based on rotation-free isogeometric finite elements. This is demonstrated by simulations of the indentation and vibration of single layer blue phosphorus sheets at micrometer scales. The elasticity constants at small deformations are also reported. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 121,482 |
1604.05225 | Annotation Order Matters: Recurrent Image Annotator for Arbitrary Length
Image Tagging | Automatic image annotation has been an important research topic in facilitating large-scale image management and retrieval. Existing methods focus on learning image-tag correlation or correlation between tags to improve annotation accuracy. However, most of these methods evaluate their performance using top-k retrieval performance, where k is fixed. Although such a setting is convenient for comparing different methods, it is not the natural way that humans annotate images. The number of annotated tags should depend on image contents. Inspired by the recent progress in machine translation and image captioning, we propose a novel Recurrent Image Annotator (RIA) model that formulates the image annotation task as a sequence generation problem, so that RIA can natively predict the proper number of tags according to image contents. We evaluate the proposed model on various image annotation datasets. In addition to comparing our model with existing methods using the conventional top-k evaluation measures, we also provide our model as a high-quality baseline for the arbitrary length image tagging task. Moreover, the results of our experiments show that the order of tags in the training phase has a great impact on the final annotation performance. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 54,781
2301.05938 | Deep Learning Provides Rapid Screen for Breast Cancer Metastasis with
Sentinel Lymph Nodes | Deep learning has been shown to be useful to detect breast cancer metastases by analyzing whole slide images of sentinel lymph nodes. However, it requires extensive scanning and analysis of all the lymph node slides for each case. Our deep learning study focuses on breast cancer screening with only a small set of image patches from any sentinel lymph node, positive or negative for metastasis, to detect changes in the tumor environment and not in the tumor itself. We design a convolutional neural network in the Python language to build a diagnostic model for this purpose. The excellent results from this preliminary study provided a proof of concept for incorporating automated metastatic screening into the digital pathology workflow to augment the pathologists' productivity. Our approach is unique since it provides a very rapid screen rather than an exhaustive search for tumor in all fields of all sentinel lymph nodes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 340,495
2303.06155 | Digital Twin-Assisted Knowledge Distillation Framework for Heterogeneous
Federated Learning | In this paper, to deal with the heterogeneity in federated learning (FL) systems, a knowledge distillation (KD) driven training framework for FL is proposed, where each user can select its neural network model on demand and distill knowledge from a big teacher model using its own private dataset. To overcome the challenge of training the big teacher model on resource-limited user devices, a digital twin (DT) is exploited so that the teacher model can be trained at the DT located in the server with sufficient computing resources. Then, during model distillation, each user can update the parameters of its model at either the physical entity or the digital agent. The joint problem of model selection, training offloading, and resource allocation for users is formulated as a mixed integer programming (MIP) problem. To solve the problem, Q-learning and optimization are jointly used, where Q-learning selects models for users and determines whether to train locally or on the server, and optimization allocates resources for users based on the output of Q-learning. Simulation results show that the proposed DT-assisted KD framework and joint optimization method can significantly improve the average accuracy of users while reducing the total delay. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | true | 350,714
2311.08396 | Zero-shot audio captioning with audio-language model guidance and audio
context keywords | Zero-shot audio captioning aims at automatically generating descriptive textual captions for audio content without prior training for this task. Different from speech recognition which translates audio content that contains spoken language into text, audio captioning is commonly concerned with ambient sounds, or sounds produced by a human performing an action. Inspired by zero-shot image captioning methods, we propose ZerAuCap, a novel framework for summarising such general audio signals in a text caption without requiring task-specific training. In particular, our framework exploits a pre-trained large language model (LLM) for generating the text which is guided by a pre-trained audio-language model to produce captions that describe the audio content. Additionally, we use audio context keywords that prompt the language model to generate text that is broadly relevant to sounds. Our proposed framework achieves state-of-the-art results in zero-shot audio captioning on the AudioCaps and Clotho datasets. Our code is available at https://github.com/ExplainableML/ZerAuCap. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 407,711 |
2001.06765 | Information Foraging for Enhancing Implicit Feedback in Content-based
Image Recommendation | User implicit feedback plays an important role in recommender systems. However, finding implicit features is a tedious task. This paper aims to identify users' preferences through implicit behavioural signals for image recommendation, based on the Information Scent Model of Information Foraging Theory. In the first part, we hypothesise that the users' perception is improved with visual cues in the images as behavioural signals that provide users' information scent during information seeking. We designed a content-based image recommendation system to explore which image attributes (i.e., visual cues or bookmarks) help users find their desired image. We found that users prefer recommendations predicated on visual cues and therefore consider the visual cues as good information scent for their information seeking. In the second part, we investigated whether visual cues in the images together with the images themselves can be better perceived by the users than each of them on its own. We evaluated the information scent artifacts in image recommendation on the Pinterest image collection and the WikiArt dataset. We find that our proposed image recommendation system supports the implicit signals through the Information Foraging explanation of the information scent model. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 160,875
1909.12567 | Cell-Free Massive MIMO for Wireless Federated Learning | This paper proposes a novel scheme for cell-free massive multiple-input multiple-output (CFmMIMO) networks to support any federated learning (FL) framework. This scheme allows each instead of all the iterations of the FL framework to happen in a large-scale coherence time to guarantee a stable operation of an FL process. To show how to optimize the FL performance using this proposed scheme, we consider an existing FL framework as an example and target FL training time minimization for this framework. An optimization problem is then formulated to jointly optimize the local accuracy, transmit power, data rate, and users' processing frequency. This mixed-timescale stochastic nonconvex problem captures the complex interactions among the training time, and transmission and computation of training updates of one FL process. By employing the online successive convex approximation approach, we develop a new algorithm to solve the formulated problem with proven convergence to the neighbourhood of its stationary points. Our numerical results confirm that the presented joint design reduces the training time by up to $55\%$ over baseline approaches. They also show that CFmMIMO here requires the lowest training time for FL processes compared with cell-free time-division multiple access massive MIMO and collocated massive MIMO. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 147,167 |
2401.11713 | Medical Image Debiasing by Learning Adaptive Agreement from a Biased
Council | Deep learning can be prone to learning shortcuts raised by dataset bias, resulting in inaccurate, unreliable, and unfair models, which impedes its adoption in real-world clinical applications. Despite its significance, there is a dearth of research in the medical image classification domain to address dataset bias. Furthermore, the bias labels are often agnostic, as identifying biases can be laborious and depend on post-hoc interpretation. This paper proposes learning Adaptive Agreement from a Biased Council (Ada-ABC), a debiasing framework that does not rely on explicit bias labels to tackle dataset bias in medical images. Ada-ABC develops a biased council consisting of multiple classifiers optimized with generalized cross entropy loss to learn the dataset bias. A debiasing model is then simultaneously trained under the guidance of the biased council. Specifically, the debiasing model is required to learn adaptive agreement with the biased council by agreeing on the samples the biased council predicts correctly and disagreeing on the samples it predicts wrongly. In this way, the debiasing model can learn the target attribute on the samples without spurious correlations while also avoiding ignoring the rich information in samples with spurious correlations. We theoretically demonstrated that the debiasing model can learn the target features when the biased model successfully captures dataset bias. Moreover, to the best of our knowledge, we constructed the first medical debiasing benchmark from four datasets containing seven different bias scenarios. Our extensive experiments empirically showed that our proposed Ada-ABC outperformed competitive approaches, verifying its effectiveness in mitigating dataset bias for medical image classification. The codes and organized benchmark datasets will be made publicly available. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 423,117
1910.11378 | Underwater Cooperative MIMO Communications using Hybrid Acoustic and Magnetic Induction Technique | Future smart ocean applications require long distance and reliable communications to connect underwater sensors/robots with remote surface base stations. It is challenging to achieve such goal due to the harsh and dynamic underwater acoustic channel. While Multiple-Input and Multiple-Output (MIMO) technique can enhance reliability and transmission range, it is difficult to place multiple acoustic transducers on one single underwater device due to the large wavelength. Although the cooperative MIMO technique that let multiple underwater devices form a virtual MIMO system could solve the issue, it was impossible to synchronize the distributed underwater devices due to the extremely large and dynamic propagation delay of acoustic waves. To this end, this paper proposes an underwater cooperative MIMO communication mechanism, which is based on a hybrid acoustic and Magnetic Induction (MI) technique. The inter-node synchronization problem can be perfectly solved by using the MI technique so that the distributed acoustic transducers can cooperatively form narrow beams for long distance underwater communications. The synchronization time and errors are significantly reduced since MI has negligible signal propagation delays. To quantitatively analyze the improvement, the closed-form expressions of the synchronization error, signal-to-noise ratio (SNR), effective communication time, and throughput of the proposed system are rigorously derived. The proposed hybrid system is implemented in a software-defined testbed under the beamforming and space-time coding scheme. Through both numerical analysis and real-world experiments, the paper shows that the proposed hybrid cooperative MIMO mechanism achieves much lower bit error rate and synchronization error than the conventional acoustic systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 150,759
2405.17346 | Prompt Optimization with Human Feedback | Large language models (LLMs) have demonstrated remarkable performance in various tasks. However, the performance of LLMs heavily depends on the input prompt, which has given rise to a number of recent works on prompt optimization. Previous works often require the availability of a numeric score to assess the quality of every prompt. Unfortunately, when a human user interacts with a black-box LLM, attaining such a score is often infeasible and unreliable. Instead, it is usually significantly easier and more reliable to obtain preference feedback from a human user, i.e., showing the user the responses generated from a pair of prompts and asking the user which one is preferred. Therefore, in this paper, we study the problem of prompt optimization with human feedback (POHF), in which we aim to optimize the prompt for a black-box LLM using only human preference feedback. Drawing inspiration from dueling bandits, we design a theoretically principled strategy to select a pair of prompts to query for preference feedback in every iteration, and hence introduce our algorithm named automated POHF (APOHF). We apply our APOHF algorithm to various tasks, including optimizing user instructions, prompt optimization for text-to-image generative models, and response optimization with human feedback (i.e., further refining the response using a variant of our APOHF). The results demonstrate that our APOHF can efficiently find a good prompt using a small number of preference feedback instances. Our code can be found at \url{https://github.com/xqlin98/APOHF}. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 457,865
2311.03615 | CAFE: Carbon-Aware Federated Learning in Geographically Distributed Data Centers | Training large-scale artificial intelligence (AI) models demands significant computational power and energy, leading to increased carbon footprint with potential environmental repercussions. This paper delves into the challenges of training AI models across geographically distributed (geo-distributed) data centers, emphasizing the balance between learning performance and carbon footprint. We consider Federated Learning (FL) as a solution, which prioritizes model parameter exchange over raw data, ensuring data privacy and compliance with local regulations. Given the variability in carbon intensity across regions, we propose a new framework called CAFE (short for Carbon-Aware Federated Learning) to optimize training within a fixed carbon footprint budget. Our approach incorporates coreset selection to assess learning performance, employs the Lyapunov drift-plus-penalty framework to address the unpredictability of future carbon intensity, and devises an efficient algorithm to address the combinatorial complexity of the data center selection. Through extensive simulations using real-world carbon intensity data, we demonstrate the efficacy of our algorithm, highlighting its superiority over existing methods in optimizing learning performance while minimizing environmental impact. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 405,913
2110.05610 | TSK Fuzzy System Towards Few Labeled Incomplete Multi-View Data Classification | Data collected by multiple methods or from multiple sources is called multi-view data. To make full use of the multi-view data, multi-view learning plays an increasingly important role. Traditional multi-view learning methods rely on a large number of labeled and completed multi-view data. However, it is expensive and time-consuming to obtain a large number of labeled multi-view data in real-world applications. Moreover, multi-view data is often incomplete because of data collection failures, self-deficiency, or other reasons. Therefore, we may have to face the problem of fewer labeled and incomplete multi-view data in real application scenarios. In this paper, a transductive semi-supervised incomplete multi-view TSK fuzzy system modeling method (SSIMV_TSK) is proposed to address these challenges. First, in order to alleviate the dependency on labeled data and keep the model interpretable, the proposed method integrates missing view imputation, pseudo label learning of unlabeled data, and fuzzy system modeling into a single process to yield a model with interpretable fuzzy rules. Then, two new mechanisms, i.e. the bidirectional structural preservation of instance and label, as well as the adaptive multiple alignment collaborative learning, are proposed to improve the robustness of the model. The proposed method has the following distinctive characteristics: 1) it can deal with the incomplete and few labeled multi-view data simultaneously; 2) it integrates the missing view imputation and model learning as a single process, which is more efficient than the traditional two-step strategy; 3) attributed to the interpretable fuzzy inference rules, this method is more interpretable. Experimental results on real datasets show that the proposed method significantly outperforms the state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,327
2412.10006 | The role of inhibitory control in garden-path sentence processing: A Chinese-English bilingual perspective | In reading garden-path sentences, people must resolve competing interpretations, though initial misinterpretations can linger despite reanalysis. This study examines the role of inhibitory control (IC) in managing these misinterpretations among Chinese-English bilinguals. Using self-paced reading tasks, we investigated how IC influences recovery from garden-path sentences in Chinese (L1) and its interaction with language proficiency during English (L2) processing. Results indicate that IC does not affect garden-path recovery in Chinese, suggesting reliance on semantic context may reduce the need for IC. In contrast, findings for English L2 learners reveal a complex relationship between language proficiency and IC: Participants with low L2 proficiency but high IC showed lingering misinterpretations, while those with high proficiency exhibited none. These results support and extend the Model of Cognitive Control (Ness et al., 2023). Moreover, our comparison of three Stroop task versions identifies L1 colour-word Stroop task as the preferred measure of IC in bilingual research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 516,752
2203.02617 | How to Train Unstable Looped Tensor Network | A rising problem in the compression of Deep Neural Networks is how to reduce the number of parameters in convolutional kernels and the complexity of these layers by low-rank tensor approximation. Canonical polyadic tensor decomposition (CPD) and Tucker tensor decomposition (TKD) are two solutions to this problem and provide promising results. However, CPD often fails due to degeneracy, making the networks unstable and hard to fine-tune. TKD does not provide much compression if the core tensor is big. This motivates using a hybrid model of CPD and TKD, a decomposition with multiple Tucker models with small core tensors, known as block term decomposition (BTD). This paper proposes a more compact model that further compresses the BTD by enforcing the core tensors in BTD to be identical. We establish a link between the BTD with shared parameters and a looped chain tensor network (TC). Unfortunately, such strongly constrained tensor networks (with loops) encounter severe numerical instability, as proved by (Landsberg, 2012) and (Handschuh, 2015a). We study perturbations of chain tensor networks, provide an interpretation of the instability in TC, and demonstrate the problem. We propose novel methods to gain stability of the decomposition results, keep the network robust, and attain better approximation. Experimental results confirm the superiority of the proposed methods in the compression of well-known CNNs, and in TC decomposition under challenging scenarios. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 283,798