| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2309.16141 | Align before Search: Aligning Ads Image to Text for Accurate Cross-Modal Sponsored Search | Cross-Modal sponsored search displays multi-modal advertisements (ads) when consumers look for desired products by natural language queries in search engines. Since multi-modal ads bring complementary details for query-ads matching, the ability to align ads-specific information in both images and texts is crucial for accurate and flexible sponsored search. Conventional research mainly studies from the view of modeling the implicit correlations between images and texts for query-ads matching, ignoring the alignment of detailed product information and resulting in suboptimal search performance. In this work, we propose a simple alignment network for explicitly mapping fine-grained visual parts in ads images to the corresponding text, which leverages the co-occurrence structure consistency between vision and language spaces without requiring expensive labeled training data. Moreover, we propose a novel model for cross-modal sponsored search that effectively conducts the cross-modal alignment and query-ads matching in two separate processes. In this way, the model matches the multi-modal input in the same language space, resulting in superior performance with merely half of the training data. Our model outperforms the state-of-the-art models by 2.57% on a large commercial dataset. Besides sponsored search, our alignment method is applicable for general cross-modal search. We study a typical cross-modal retrieval task on the MSCOCO dataset, which achieves consistent performance improvement and proves the generalization ability of our method. Our code is available at https://github.com/Pter61/AlignCMSS/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 395,233 |
| 2411.05433 | Computing the Low-Weight Codewords of Punctured and Shortened Pre-Transformed Polar Codes | In this paper, we present a deterministic algorithm to count the low-weight codewords of punctured and shortened pure and pre-transformed polar codes. The method first evaluates the weight properties of punctured/shortened polar cosets. Then, a method that discards the cosets that have no impact on the computation of the low-weight codewords is introduced. A key advantage of this method is its applicability, regardless of the frozen bit set, puncturing/shortening pattern, or pre-transformation. Results confirm the method's efficiency while showing reduced computational complexity compared to state-of-the-art algorithms. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 506,664 |
| 2405.18948 | Learning to Recover from Plan Execution Errors during Robot Manipulation: A Neuro-symbolic Approach | Automatically detecting and recovering from failures is an important but challenging problem for autonomous robots. Most of the recent work on learning to plan from demonstrations lacks the ability to detect and recover from errors in the absence of an explicit state representation and/or a (sub-) goal check function. We propose an approach (blending learning with symbolic search) for automated error discovery and recovery, without needing annotated data of failures. Central to our approach is a neuro-symbolic state representation, in the form of a dense scene graph, structured based on the objects present within the environment. This enables efficient learning of the transition function and a discriminator that not only identifies failures but also localizes them, facilitating fast re-planning via computation of a heuristic distance function. We also present an anytime version of our algorithm, where instead of recovering to the last correct state, we search for a sub-goal in the original plan minimizing the total distance to the goal given a re-planning budget. Experiments on a physics simulator with a variety of simulated failures show the effectiveness of our approach compared to existing baselines, both in terms of efficiency as well as accuracy of our recovery mechanism. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 458,682 |
| 1905.12859 | Heterogeneity in demand and optimal price conditioning for local rail transport | This paper describes the results of a research project on optimal pricing for LLC "Perm Local Rail Company". In this study we propose a regression-tree-based approach for estimation of the demand function for local rail tickets, considering the high degree of demand heterogeneity across trip directions and travel purposes. Employing detailed data on ticket sales over 5 years, we estimate the parameters of the demand function and reveal significant variation in the price elasticity of demand. While on average the demand is price-elastic, nearly a quarter of trips are characterized by weakly elastic demand. Lower elasticity of demand is correlated with a lower degree of competition with other transport and inflexible frequency of travel. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,904 |
| 2308.03253 | PaniniQA: Enhancing Patient Education Through Interactive Question Answering | Patient portals allow discharged patients to access their personalized discharge instructions in electronic health records (EHRs). However, many patients have difficulty understanding or memorizing their discharge instructions. In this paper, we present PaniniQA, a patient-centric interactive question answering system designed to help patients understand their discharge instructions. PaniniQA first identifies important clinical content from patients' discharge instructions and then formulates patient-specific educational questions. In addition, PaniniQA is also equipped with answer verification functionality to provide timely feedback to correct patients' misunderstandings. Our comprehensive automatic and human evaluation results demonstrate our PaniniQA is capable of improving patients' mastery of their medical instructions through effective interactions. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 383,956 |
| 1904.03570 | Integration of Nonlinear Disturbance Observer within Proxy-based Sliding Mode Control for Pneumatic Muscle Actuators | This paper presents an integration of nonlinear disturbance observer within proxy-based sliding mode control (IDO-PSMC) approach for Pneumatic Muscle Actuators (PMAs). Due to the nonlinearities, uncertainties, hysteresis, and time-varying characteristics of the PMA, the model parameters are difficult to identify accurately, which results in unmeasurable uncertainties and disturbances of the system. To solve this problem, a novel design of proxy-based sliding mode controller (PSMC) combined with a nonlinear disturbance observer (DO) is used for the tracking control of the PMA. Our approach combines both the merits of the PSMC and the DO so that it is effective in both reducing the "chattering" phenomenon and improving the system robustness. A constrained Firefly Algorithm is used to search for the optimal control parameters. Based on the Lyapunov theorem, the states of the PMA are shown to be globally uniformly ultimately bounded. Extensive experiments were conducted to verify the superior performance of our approach in multiple tracking scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 126,772 |
| 2201.01488 | Exemplar-free Class Incremental Learning via Discriminative and Comparable One-class Classifiers | Exemplar-free class incremental learning requires classification models to learn new class knowledge incrementally without retaining any old samples. Recently, the framework based on parallel one-class classifiers (POC), which trains a one-class classifier (OCC) independently for each category, has attracted extensive attention, since it can naturally avoid catastrophic forgetting. POC, however, suffers from weak discriminability and comparability due to its independent training strategy for different OCCs. To meet this challenge, we propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL). DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAE) instead of other well-established one-class classifiers (e.g. deep SVDD), because a trained VAE can not only identify the probability of an input sample belonging to a class but also generate pseudo samples of the class to assist in learning new tasks. With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples, thus enhancing the comparability. Furthermore, DisCOIL introduces a hinge reconstruction loss to ensure the discriminability. We evaluate our method extensively on MNIST, CIFAR10, and Tiny-ImageNet. The experimental results show that DisCOIL achieves state-of-the-art performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 274,269 |
| 2110.07045 | Investigating Health-Aware Smart-Nudging with Machine Learning to Help People Pursue Healthier Eating-Habits | Food-choices and eating-habits directly contribute to our long-term health. This makes the food recommender system a potential tool to address the global crisis of obesity and malnutrition. Over the past decade, artificial-intelligence and medical researchers became more invested in researching tools that can guide and help people make healthy and thoughtful decisions around food and diet. In many typical Recommender System (RS) domains, smart nudges have been proven effective in shaping users' consumption patterns. In recent years, knowledgeable nudging and incentivizing choices started getting attention in the food domain as well. To develop smart nudging for promoting healthier food choices, we combined Machine Learning and RS technology with food-healthiness guidelines from recognized health organizations, such as the World Health Organization, Food Standards Agency, and the National Health Service United Kingdom. In this paper, we discuss our research on persuasive visualization for making users aware of the healthiness of the recommended recipes. Here, we propose three novel nudging technologies, the WHO-BubbleSlider, the FSA-ColorCoading, and the DRCI-MLCP, that encourage users to choose healthier recipes. We also propose a Topic Modeling based portion-size recommendation algorithm. To evaluate our proposed smart-nudges, we conducted an online user study with 96 participants and 92250 recipes. Results showed that, during the food decision-making process, appropriate healthiness cues make users more likely to click, browse, and choose healthier recipes over less healthy ones. | true | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 260,840 |
| 2402.00263 | Does DetectGPT Fully Utilize Perturbation? Bridging Selective Perturbation to Fine-tuned Contrastive Learning Detector would be Better | The burgeoning generative capabilities of large language models (LLMs) have raised growing concerns about abuse, demanding automatic machine-generated text detectors. DetectGPT, a zero-shot metric-based detector, first introduces perturbation and shows great performance improvement. However, in DetectGPT, the random perturbation strategy could introduce noise, and logit regression depends on the threshold, harming the generalizability and applicability of individual or small-batch inputs. Hence, we propose a novel fine-tuned detector, Pecola, bridging metric-based and fine-tuned methods by contrastive learning on selective perturbation. The selective strategy retains important tokens during perturbation and weights for multi-pair contrastive learning. The experiments show that Pecola outperforms the state-of-the-art (SOTA) by 1.20% in accuracy on average on four public datasets. We further analyze the effectiveness, robustness, and generalization of the method. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 425,539 |
| 1602.07743 | Channel Models for Multi-Level Cell Flash Memories Based on Empirical Error Analysis | We propose binary discrete parametric channel models for multi-level cell (MLC) flash memories that provide accurate ECC performance estimation by modeling the empirically observed error characteristics under program/erase (P/E) cycling stress. Through a detailed empirical error characterization of 1X-nm and 2Y-nm MLC flash memory chips from two different vendors, we observe and characterize the overdispersion phenomenon in the number of bit errors per ECC frame. A well studied channel model such as the binary asymmetric channel (BAC) model is unable to provide accurate ECC performance estimation. Hence we propose a channel model based on the beta-binomial probability distribution (2-BBM channel model) which is a good fit for the overdispersed empirical error characteristics and show through statistical tests and simulation results for BCH, LDPC and polar codes, that the 2-BBM channel model provides accurate ECC performance estimation in MLC flash memories. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 52,553 |
| 1808.07982 | Proximal Policy Optimization and its Dynamic Version for Sequence Generation | In sequence generation tasks, many works use policy gradient for model optimization to tackle the intractable backpropagation issue when maximizing non-differentiable evaluation metrics or fooling the discriminator in adversarial learning. In this paper, we replace policy gradient with proximal policy optimization (PPO), a reinforcement learning algorithm proven to be more efficient, and propose a dynamic approach for PPO (PPO-dynamic). We demonstrate the efficacy of PPO and PPO-dynamic on conditional sequence generation tasks including a synthetic experiment and a chit-chat chatbot. The results show that PPO and PPO-dynamic can beat policy gradient in stability and performance. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 105,845 |
| 1509.00406 | Layer-layer competition in multiplex complex networks | The coexistence of multiple types of interactions within social, technological and biological networks has moved the focus of the physics of complex systems towards a multiplex description of the interactions between their constituents. This novel approach has unveiled that the multiplex nature of complex systems has a strong influence on the emergence of collective states and their critical properties. Here we address an important issue that is intrinsic to the coexistence of multiple means of interactions within a network: their competition. To this aim, we study a two-layer multiplex in which the activity of users can be localized in each of the layers or shared between them, favoring that neighboring nodes within a layer focus their activity on the same layer. This framework mimics the coexistence and competition of multiple communication channels, in a way that the prevalence of a particular communication platform emerges as a result of the localization of users' activity in one single interaction layer. Our results indicate that there is a transition from localization (use of a preferred layer) to delocalization (combined usage of both layers) and that the prevalence of a particular layer (in the localized state) depends on their structural properties. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 46,492 |
| 1803.09375 | Correcting differences in multi-site neuroimaging data using Generative Adversarial Networks | Magnetic Resonance Imaging (MRI) of the brain has been used to investigate a wide range of neurological disorders, but data acquisition can be expensive, time-consuming, and inconvenient. Multi-site studies present a valuable opportunity to advance research by pooling data in order to increase sensitivity and statistical power. However, images derived from MRI are susceptible to both obvious and non-obvious differences between sites which can introduce bias and subject variance, and so reduce statistical power. To rectify these differences, we propose a data driven approach using a deep learning architecture known as generative adversarial networks (GANs). GANs learn to estimate two distributions, and can then be used to transform examples from one distribution into the other distribution. Here we transform T1-weighted brain images collected from two different sites into MR images from the same site. We evaluate whether our model can reduce site-specific differences without loss of information related to gender (male, female) or clinical diagnosis (schizophrenia, bipolar disorder, healthy). When trained appropriately, our model is able to normalise imaging sets to a common scanner set with less information loss compared to current approaches. An important advantage is our method can be treated as a black box that does not require any knowledge of the sources of bias but only needs at least two distinct imaging sets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 93,485 |
| 2405.06671 | Parameter-Efficient Instruction Tuning of Large Language Models For Extreme Financial Numeral Labelling | We study the problem of automatically annotating relevant numerals (GAAP metrics) occurring in financial documents with their corresponding XBRL tags. Different from prior works, we investigate the feasibility of solving this extreme classification problem using a generative paradigm through instruction tuning of Large Language Models (LLMs). To this end, we leverage metric metadata information to frame our target outputs while proposing a parameter efficient solution for the task using LoRA. We perform experiments on two recently released financial numeric labeling datasets. Our proposed model, FLAN-FinXC, achieves new state-of-the-art performances on both datasets, outperforming several strong baselines. We explain the better scores of our proposed model by demonstrating its zero-shot capability as well as its performance on the least frequently occurring tags. Also, even when we fail to predict the XBRL tags correctly, our generated output has substantial overlap with the ground-truth in the majority of cases. | false | true | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 453,387 |
| 2408.05285 | Semi-Supervised One-Shot Imitation Learning | One-shot Imitation Learning (OSIL) aims to imbue AI agents with the ability to learn a new task from a single demonstration. To supervise the learning, OSIL typically requires a prohibitively large number of paired expert demonstrations, i.e. trajectories corresponding to different variations of the same semantic task. To overcome this limitation, we introduce the semi-supervised OSIL problem setting, where the learning agent is presented with a large dataset of trajectories with no task labels (i.e. an unpaired dataset), along with a small dataset of multiple demonstrations per semantic task (i.e. a paired dataset). This presents a more realistic and practical embodiment of few-shot learning and requires the agent to effectively leverage weak supervision from a large dataset of trajectories. Subsequently, we develop an algorithm specifically applicable to this semi-supervised OSIL setting. Our approach first learns an embedding space where different tasks cluster uniquely. We utilize this embedding space and the clustering it supports to self-generate pairings between trajectories in the large unpaired dataset. Through empirical results on simulated control tasks, we demonstrate that OSIL models trained on such self-generated pairings are competitive with OSIL models trained with ground-truth labels, presenting a major advancement in the label-efficiency of OSIL. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 479,721 |
| 2404.16296 | Research on Splicing Image Detection Algorithms Based on Natural Image Statistical Characteristics | With the development and widespread application of digital image processing technology, image splicing has become a common method of image manipulation, raising numerous security and legal issues. This paper introduces a new splicing image detection algorithm based on the statistical characteristics of natural images, aimed at improving the accuracy and efficiency of splicing image detection. By analyzing the limitations of traditional methods, we have developed a detection framework that integrates advanced statistical analysis techniques and machine learning methods. The algorithm has been validated using multiple public datasets, showing high accuracy in detecting spliced edges and locating tampered areas, as well as good robustness. Additionally, we explore the potential applications and challenges faced by the algorithm in real-world scenarios. This research not only provides an effective technological means for the field of image tampering detection but also offers new ideas and methods for future related research. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 449,428 |
| 1401.6543 | Pseudo-random Phase Precoded Spatial Modulation | Spatial modulation (SM) is a transmission scheme that uses multiple transmit antennas but only one transmit RF chain. At each time instant, only one among the transmit antennas will be active and the others remain silent. The index of the active transmit antenna will also convey information bits in addition to the information bits conveyed through modulation symbols (e.g., QAM). Pseudo-random phase precoding (PRPP) is a technique that can achieve high diversity orders even in single antenna systems without the need for channel state information at the transmitter (CSIT) and transmit power control (TPC). In this paper, we exploit the advantages of both SM and PRPP simultaneously. We propose a pseudo-random phase precoded SM (PRPP-SM) scheme, where both the modulation bits and the antenna index bits are precoded by pseudo-random phases. The proposed PRPP-SM system gives significant performance gains over SM system without PRPP and PRPP system without SM. Since maximum likelihood (ML) detection becomes exponentially complex in large dimensions, we propose a low complexity local search based detection (LSD) algorithm suited for PRPP-SM systems with large precoder sizes. Our simulation results show that with 4 transmit antennas, 1 receive antenna, $5\times 20$ pseudo-random phase precoder matrix and BPSK modulation, the performance of PRPP-SM using ML detection is better than SM without PRPP with ML detection by about 9 dB at $10^{-2}$ BER. This performance advantage gets even better for large precoding sizes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 30,371 |
| 2102.08411 | Darknet Traffic Big-Data Analysis and Network Management to Real-Time Automating the Malicious Intent Detection Process by a Weight Agnostic Neural Networks Framework | Attackers are perpetually modifying their tactics to avoid detection and frequently leverage legitimate credentials with trusted tools already deployed in a network environment, making it difficult for organizations to proactively identify critical security risks. Network traffic analysis products have emerged in response to attackers' relentless innovation, offering organizations a realistic path forward for combatting creative attackers. Additionally, thanks to the widespread adoption of cloud computing, Device Operators processes, and the Internet of Things, maintaining effective network visibility has become a highly complex and overwhelming process. What makes network traffic analysis technology particularly meaningful is its ability to combine its core capabilities to deliver malicious intent detection. In this paper, we propose a novel darknet traffic analysis and network management framework for automating the malicious intent detection process in real time, using a weight agnostic neural networks architecture. It is an effective and accurate computational intelligent forensics tool for network traffic analysis, the demystification of malware traffic, and encrypted traffic identification in real-time. Based on the Weight Agnostic Neural Networks methodology, we propose an automated strategy for searching neural net architectures that can perform various tasks such as identifying zero-day attacks. By automating the malicious intent detection process from the darknet, the advanced proposed solution reduces the skills and effort barrier that prevents many organizations from effectively protecting their most critical assets. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 220,438 |
| 2406.17923 | PAFT: A Parallel Training Paradigm for Effective LLM Fine-Tuning | Large language models (LLMs) have shown remarkable abilities in diverse natural language processing (NLP) tasks. The LLMs generally undergo supervised fine-tuning (SFT) followed by preference alignment to be usable in downstream applications. However, this sequential training pipeline leads to alignment tax that degrades the LLM performance. This paper introduces PAFT, a new PArallel training paradigm for effective LLM Fine-Tuning, which independently performs SFT and preference alignment (e.g., DPO and ORPO, etc.) with the same pre-trained model on respective datasets. The model produced by SFT and the model from preference alignment are then merged into a final model by parameter fusing for use in downstream applications. This work reveals important findings that preference alignment like DPO naturally results in a sparse model while SFT leads to a natural dense model which needs to be sparsified for effective model merging. This paper introduces an effective interference resolution which reduces the redundancy by sparsifying the delta parameters. The LLM resulting from the new training paradigm achieved Rank #1 on the HuggingFace Open LLM Leaderboard. Comprehensive evaluation shows the effectiveness of the parallel training paradigm. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 467,794 |
| 2306.17529 | Locking On: Leveraging Dynamic Vehicle-Imposed Motion Constraints to Improve Visual Localization | Most 6-DoF localization and SLAM systems use static landmarks but ignore dynamic objects because they cannot be usefully incorporated into a typical pipeline. Where dynamic objects have been incorporated, typical approaches have attempted relatively sophisticated identification and localization of these objects, limiting their robustness or general utility. In this research, we propose a middle ground, demonstrated in the context of autonomous vehicles, using dynamic vehicles to provide limited pose constraint information in a 6-DoF frame-by-frame PnP-RANSAC localization pipeline. We refine initial pose estimates with a motion model and propose a method for calculating the predicted quality of future pose estimates, triggered based on whether or not the autonomous vehicle's motion is constrained by the relative frame-to-frame location of dynamic vehicles in the environment. Our approach detects and identifies suitable dynamic vehicles to define these pose constraints to modify a pose filter, resulting in improved recall across a range of localization tolerances from $0.25m$ to $5m$, compared to a state-of-the-art baseline single image PnP method and its vanilla pose filtering. Our constraint detection system is active for approximately $35\%$ of the time on the Ford AV dataset and localization is particularly improved when the constraint detection is active. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 376,732 |
| 2111.02272 | Convolutional Motif Kernel Networks | Artificial neural networks show promising performance in detecting correlations within data that are associated with specific outcomes. However, the black-box nature of such models can hinder knowledge advancement in research fields by obscuring the decision process and preventing scientists from fully conceptualizing predicted outcomes. Furthermore, domain experts like healthcare providers need explainable predictions to assess whether a predicted outcome can be trusted in high stakes scenarios and to help them integrate a model into their own routine. Therefore, interpretable models play a crucial role for the incorporation of machine learning into high stakes scenarios like healthcare. In this paper we introduce Convolutional Motif Kernel Networks, a neural network architecture that involves learning a feature representation within a subspace of the reproducing kernel Hilbert space of the position-aware motif kernel function. The resulting model enables one to directly interpret and evaluate prediction outcomes by providing a biologically and medically meaningful explanation without the need for additional post-hoc analysis. We show that our model is able to robustly learn on small datasets and reaches state-of-the-art performance on relevant healthcare prediction tasks. Our proposed method can be utilized on DNA and protein sequences. Furthermore, we show that the proposed method learns biologically meaningful concepts directly from data using an end-to-end learning scheme. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,817 |
| 2004.13465 | Nearly Optimal Regret for Stochastic Linear Bandits with Heavy-Tailed Payoffs | In this paper, we study the problem of stochastic linear bandits with finite action sets. Most existing work assumes the payoffs are bounded or sub-Gaussian, which may be violated in some scenarios such as financial markets. To settle this issue, we analyze the linear bandits with heavy-tailed payoffs, where the payoffs admit finite $1+\epsilon$ moments for some $\epsilon\in(0,1]$. Through median of means and dynamic truncation, we propose two novel algorithms which enjoy a sublinear regret bound of $\widetilde{O}(d^{\frac{1}{2}}T^{\frac{1}{1+\epsilon}})$, where $d$ is the dimension of contextual information and $T$ is the time horizon. Meanwhile, we provide an $\Omega(d^{\frac{\epsilon}{1+\epsilon}}T^{\frac{1}{1+\epsilon}})$ lower bound, which implies our upper bound matches the lower bound up to polylogarithmic factors in the order of $d$ and $T$ when $\epsilon=1$. Finally, we conduct numerical experiments to demonstrate the effectiveness of our algorithms and the empirical results strongly support our theoretical guarantees. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 174,558 |
2309.00916 | BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment
of Continuation Writing | The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text still remains an open problem. Current solutions can be categorized into two strategies. One is a cascaded approach where outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential in modeling alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is very difficult to collect in large quantities. In this paper, we address these issues and propose the BLSP approach that Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of input: a speech segment or its transcript. The training process can be divided into two steps. The first step prompts an LLM to generate texts with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations are used as supervised signals to train the modality adapter in an end-to-end manner. We demonstrate that this straightforward process can extend the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 389,468 |
2406.03000 | Simplification of Risk Averse POMDPs with Performance Guarantees | Risk-averse decision making under uncertainty in partially observable domains is a fundamental problem in AI and essential for reliable autonomous agents. In our case, the problem is modeled using partially observable Markov decision processes (POMDPs), where the value function is the conditional value at risk (CVaR) of the return. Calculating an optimal solution for POMDPs is computationally intractable in general. In this work we develop a simplification framework to speed up the evaluation of the value function, while providing performance guarantees. We consider as simplification a computationally cheaper belief-MDP transition model, which can correspond, e.g., to cheaper observation or transition models. Our contributions include general bounds for CVaR that allow bounding the CVaR of a random variable X using a random variable Y, assuming bounds between their cumulative distributions. We then derive bounds for the CVaR value function in a POMDP setting, and show how to bound the value function using the computationally cheaper belief-MDP transition model, without accessing the computationally expensive model in real time. Then, we provide theoretical performance guarantees for the estimated bounds. Our results apply to a general simplification of a belief-MDP transition model and support simplification of both the observation and state transition models simultaneously. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 461,044
1102.0987 | Propagation on networks: an exact alternative perspective | By generating the specifics of a network structure only when needed (on-the-fly), we derive a simple stochastic process that exactly models the time evolution of susceptible-infectious dynamics on finite-size networks. The small number of dynamical variables of this birth-death Markov process greatly simplifies analytical calculations. We show how a dual analytical description, treating large-scale epidemics with a Gaussian approximation and small outbreaks with a branching process, provides an accurate approximation of the distribution even for rather small networks. The approach also offers important computational advantages and generalizes to a vast class of systems. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 9,036
2212.08781 | Multi-Scale Relational Graph Convolutional Network for Multiple Instance
Learning in Histopathology Images | Graph convolutional neural networks have shown significant potential in natural and histopathology images. However, their use has been studied only at a single magnification or with multi-magnification late fusion. In order to leverage multi-magnification information and early fusion with graph convolutional networks, we handle different embedding spaces at each magnification by introducing the Multi-Scale Relational Graph Convolutional Network (MS-RGCN) as a multiple instance learning method. We model histopathology image patches and their relation with neighboring patches and patches at other scales (i.e., magnifications) as a graph. To pass the information between different magnification embedding spaces, we define separate message-passing neural networks based on the node and edge type. We experiment on prostate cancer histopathology images to predict the grade groups based on the features extracted from patches. We also compare our MS-RGCN with multiple state-of-the-art methods with evaluations on several source and held-out datasets. Our method outperforms the state-of-the-art on all of the datasets and image types consisting of tissue microarrays, whole-mount slide regions, and whole-slide images. Through an ablation study, we test and show the value of the pertinent design features of the MS-RGCN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 336,872
2303.09556 | Efficient Diffusion Training via Min-SNR Weighting Strategy | Denoising diffusion models have been a mainstream approach for image generation; however, training these models often suffers from slow convergence. In this paper, we discover that the slow convergence is partly due to conflicting optimization directions between timesteps. To address this issue, we treat diffusion training as a multi-task learning problem, and introduce a simple yet effective approach referred to as Min-SNR-$\gamma$. This method adapts loss weights of timesteps based on clamped signal-to-noise ratios, which effectively balances the conflicts among timesteps. Our results demonstrate a significant improvement in convergence speed, 3.4$\times$ faster than previous weighting strategies. It is also more effective, achieving a new record FID score of 2.06 on the ImageNet $256\times256$ benchmark using smaller architectures than those employed in previous state-of-the-art methods. The code is available at https://github.com/TiankaiHang/Min-SNR-Diffusion-Training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 352,096
cs/0307037 | Supporting Dynamic Ad hoc Collaboration Capabilities | Modern HENP experiments such as CMS and Atlas involve as many as 2000 collaborators around the world. Collaborations this large will be unable to meet often enough to support working closely together. Many of the tools currently available for collaboration focus on heavy-weight applications such as videoconferencing tools. While these are important, there is a more basic need for tools that support connecting physicists to work together on an ad hoc or continuous basis. Tools that support the day-to-day connectivity and underlying needs of a group of collaborators are important for providing light-weight, non-intrusive, and flexible ways to work collaboratively. Some example tools include messaging, file-sharing, and shared plot viewers. An important component of the environment is a scalable underlying communication framework. In this paper we will describe our current progress on building a dynamic and ad hoc collaboration environment and our vision for its evolution into a HENP collaboration environment. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 537,922 |
2306.09650 | Reconfigurable Intelligent Surface Assisted Semantic Communication
Systems | Semantic communication, which focuses on conveying the meaning of information rather than exact bit reconstruction, has gained considerable attention in recent years. Meanwhile, reconfigurable intelligent surface (RIS) is a promising technology that can achieve high spectral and energy efficiency by dynamically reflecting incident signals through programmable passive components. In this paper, we put forth a semantic communication scheme aided by RIS. Using text transmission as an example, experimental results demonstrate that the RIS-assisted semantic communication system outperforms the point-to-point semantic communication system in terms of bilingual evaluation understudy (BLEU) scores in Rayleigh fading channels, especially at low signal-to-noise ratio (SNR) regimes. In addition, the RIS-assisted semantic communication system exhibits superior robustness against channel estimation errors compared to its point-to-point counterpart. RIS can improve performance as it provides extra line-of-sight (LoS) paths and enhances signal propagation conditions compared to point-to-point systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 373,915 |
2305.06522 | Randomized Smoothing with Masked Inference for Adversarially Robust Text
Classifications | Large-scale pre-trained language models have shown outstanding performance in a variety of NLP tasks. However, they are also known to be significantly brittle against specifically crafted adversarial examples, leading to increasing interest in probing the adversarial robustness of NLP systems. We introduce RSMI, a novel two-stage framework that combines randomized smoothing (RS) with masked inference (MI) to improve the adversarial robustness of NLP systems. RS transforms a classifier into a smoothed classifier to obtain robust representations, whereas MI forces a model to exploit the surrounding context of a masked token in an input sequence. RSMI improves adversarial robustness by 2 to 3 times over existing state-of-the-art methods on benchmark datasets. We also perform in-depth qualitative analysis to validate the effectiveness of the different stages of RSMI and probe the impact of its components through extensive ablations. By empirically proving the stability of RSMI, we put it forward as a practical method to robustly train large-scale NLP models. Our code and datasets are available at https://github.com/Han8931/rsmi_nlp | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 363,560 |
2211.05276 | PhotoFourier: A Photonic Joint Transform Correlator-Based Neural Network
Accelerator | The last few years have seen a lot of work to address the challenge of low-latency and high-throughput convolutional neural network inference. Integrated photonics has the potential to dramatically accelerate neural networks because of its low-latency nature. Combined with the concept of the Joint Transform Correlator (JTC), the computationally expensive convolution functions can be computed instantaneously (time of flight of light) with almost no cost. This 'free' convolution computation provides the theoretical basis of the proposed PhotoFourier JTC-based CNN accelerator. PhotoFourier addresses a myriad of challenges posed by on-chip photonic computing in the Fourier domain, including 1D lenses and high-cost optoelectronic conversions. The proposed PhotoFourier accelerator achieves more than 28X better energy-delay product compared to state-of-the-art photonic neural network accelerators. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 329,496
2106.06770 | What can linearized neural networks actually say about generalization? | For certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization, but for the networks used in practice, the empirical NTK only provides a rough first-order approximation. Still, a growing body of work keeps leveraging this approximation to successfully analyze important deep learning phenomena and design algorithms for new applications. In our work, we provide strong empirical evidence to determine the practical validity of such approximation by conducting a systematic comparison of the behavior of different neural networks and their linear approximations on different tasks. We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks, even when they achieve very different performances. However, in contrast to what was previously reported, we discover that neural networks do not always perform better than their kernel approximations, and reveal that the performance gap heavily depends on architecture, dataset size and training task. We discover that networks overfit to these tasks mostly due to the evolution of their kernel during training, thus, revealing a new type of implicit bias. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 240,614 |
2211.01809 | Manipulation of individual judgments in the quantitative pairwise
comparisons method | Decision-making methods very often use the technique of comparing alternatives in pairs. In this approach, experts are asked to compare different options, and then a quantitative ranking is created from the results obtained. It is commonly believed that experts (decision-makers) are honest in their judgments. In our work, we consider a scenario in which experts are vulnerable to bribery. For this purpose, we define a framework that allows us to determine the intended manipulation and present three algorithms for achieving the intended goal. Analyzing these algorithms may provide clues to help defend against such attacks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 328,365 |
1908.10788 | PyFrac: A planar 3D hydraulic fracture simulator | Fluid driven fractures propagate in the upper earth crust either naturally or in response to engineered fluid injections. The quantitative prediction of their evolution is critical in order to better understand their dynamics as well as to optimize their creation. We present a Python implementation of an open-source hydraulic fracture propagation simulator based on the implicit level set algorithm originally developed by Peirce & Detournay (2008) -- "An implicit level set method for modeling hydraulically driven fractures". Comp. Meth. Appl. Mech. Engng, (33-40):2858--2885. This algorithm couples a finite discretization of the fracture with the use of the near tip asymptotic solutions of a steadily propagating semi-infinite hydraulic fracture. This allows to resolve the multi-scale processes governing hydraulic fracture growth accurately, even with relatively coarse meshes. We present an overview of the mathematical formulation, the numerical scheme and the details of our implementation. A series of problems including a radial hydraulic fracture verification benchmark, the propagation of a height contained hydraulic fracture, the lateral spreading of a magmatic dyke and the handling of fracture closure are presented to demonstrate the capabilities, accuracy and robustness of the implemented algorithm. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 143,212 |
2202.11781 | RadioTransformer: A Cascaded Global-Focal Transformer for Visual
Attention-guided Disease Classification | In this work, we present RadioTransformer, a novel visual attention-driven transformer framework, that leverages radiologists' gaze patterns and models their visuo-cognitive behavior for disease diagnosis on chest radiographs. Domain experts, such as radiologists, rely on visual information for medical image interpretation. On the other hand, deep neural networks have demonstrated significant promise in similar tasks even where visual interpretation is challenging. Eye-gaze tracking has been used to capture the viewing behavior of domain experts, lending insights into the complexity of visual search. However, deep learning frameworks, even those that rely on attention mechanisms, do not leverage this rich domain information. RadioTransformer fills this critical gap by learning from radiologists' visual search patterns, encoded as 'human visual attention regions' in a cascaded global-focal transformer framework. The overall 'global' image characteristics and the more detailed 'local' features are captured by the proposed global and focal modules, respectively. We experimentally validate the efficacy of our student-teacher approach for 8 datasets involving different disease classification tasks where eye-gaze data is not available during the inference phase. Code: https://github.com/bmi-imaginelab/radiotransformer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 281,993 |
2311.16876 | Digital Twin-Enhanced Deep Reinforcement Learning for Resource
Management in Networks Slicing | Network slicing-based communication systems can dynamically and efficiently allocate resources for diversified services. However, due to the limitation of the network interface on channel access and the complexity of resource allocation, it is challenging to achieve an acceptable solution in a practical system without precise prior knowledge of the dynamics probability model of the service requests. Existing work attempts to solve this problem using deep reinforcement learning (DRL); however, such methods usually require a lot of interaction with the real environment in order to achieve good results. In this paper, a framework consisting of a digital twin and reinforcement learning agents is presented to handle the issue. Specifically, we propose to use historical data and neural networks to build a digital twin model to simulate the state variation law of the real environment. Then, we use the data generated by the network slicing environment to calibrate the digital twin so that it is in sync with the real environment. Finally, DRL for slice optimization optimizes its own performance in this virtual pre-verification environment. We conducted an exhaustive verification of the proposed digital twin framework to confirm its scalability. Specifically, we propose to use loss landscapes to visualize the generalization of DRL solutions. We explore a distillation-based optimization scheme for lightweight slicing strategies. In addition, we extend the framework to offline reinforcement learning, where solutions can be used to obtain intelligent decisions based solely on historical data. Numerical simulation experiments show that the proposed digital twin can significantly improve the performance of the slice optimization strategy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 411,078
2109.03230 | Self-supervised Tumor Segmentation through Layer Decomposition | In this paper, we target self-supervised representation learning for zero-shot tumor segmentation. We make the following contributions: First, we advocate a zero-shot setting, where models from pre-training should be directly applicable for the downstream task, without using any manual annotations. Second, we take inspiration from "layer-decomposition", and innovate on the training regime with simulated tumor data. Third, we conduct extensive ablation studies to analyse the critical components in data simulation, and validate the necessity of different proxy tasks. We demonstrate that, with sufficient texture randomization in simulation, a model trained on synthetic data can effortlessly generalise to segment real tumor data. Fourth, our approach achieves superior results for zero-shot tumor segmentation on different downstream datasets, BraTS2018 for brain tumor segmentation and LiTS2017 for liver tumor segmentation. While evaluating the model transferability for tumor segmentation under a low-annotation regime, the proposed approach also outperforms all existing self-supervised approaches, opening up the usage of self-supervised learning in practical scenarios. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 254,001
2203.12881 | Can Unsupervised Knowledge Transfer from Social Discussions Help
Argument Mining? | Identifying argument components from unstructured texts and predicting the relationships expressed among them are two primary steps of argument mining. The intrinsic complexity of these tasks demands powerful learning models. While pretrained Transformer-based Language Models (LMs) have been shown to provide state-of-the-art results over different NLP tasks, the scarcity of manually annotated data and the highly domain-dependent nature of argumentation restrict the capabilities of such models. In this work, we propose a novel transfer learning strategy to overcome these challenges. We utilize argumentation-rich social discussions from the ChangeMyView subreddit as a source of unsupervised, argumentative discourse-aware knowledge by finetuning pretrained LMs on a selectively masked language modeling task. Furthermore, we introduce a novel prompt-based strategy for inter-component relation prediction that complements our proposed finetuning method while leveraging the discourse context. Exhaustive experiments show the generalization capability of our method on these two tasks over within-domain as well as out-of-domain datasets, outperforming several strong existing baselines. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 287,432
2404.09586 | Mitigating the Curse of Dimensionality for Certified Robustness via Dual
Randomized Smoothing | Randomized Smoothing (RS) has been proven a promising method for endowing an arbitrary image classifier with certified robustness. However, the substantial uncertainty inherent in the high-dimensional isotropic Gaussian noise imposes the curse of dimensionality on RS. Specifically, the upper bound of ${\ell_2}$ certified robustness radius provided by RS exhibits a diminishing trend with the expansion of the input dimension $d$, proportionally decreasing at a rate of $1/\sqrt{d}$. This paper explores the feasibility of providing ${\ell_2}$ certified robustness for high-dimensional input through the utilization of dual smoothing in the lower-dimensional space. The proposed Dual Randomized Smoothing (DRS) down-samples the input image into two sub-images and smooths the two sub-images in lower dimensions. Theoretically, we prove that DRS guarantees a tight ${\ell_2}$ certified robustness radius for the original input and reveal that DRS attains a superior upper bound on the ${\ell_2}$ robustness radius, which decreases proportionally at a rate of $(1/\sqrt m + 1/\sqrt n )$ with $m+n=d$. Extensive experiments demonstrate the generalizability and effectiveness of DRS, which exhibits a notable capability to integrate with established methodologies, yielding substantial improvements in both accuracy and ${\ell_2}$ certified robustness baselines of RS on the CIFAR-10 and ImageNet datasets. Code is available at https://github.com/xiasong0501/DRS. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,749 |
2305.15114 | Thinking Twice: Clinical-Inspired Thyroid Ultrasound Lesion Detection
Based on Feature Feedback | Accurate detection of thyroid lesions is a critical aspect of computer-aided diagnosis. However, most existing detection methods perform only one feature extraction process and then fuse multi-scale features, which can be affected by noise and blurred features in ultrasound images. In this study, we propose a novel detection network based on a feature feedback mechanism inspired by clinical diagnosis. The mechanism involves first roughly observing the overall picture and then focusing on the details of interest. It comprises two parts: a feedback feature selection module and a feature feedback pyramid. The feedback feature selection module efficiently selects the features extracted in the first phase in both space and channel dimensions to generate high semantic prior knowledge, which is similar to coarse observation. The feature feedback pyramid then uses this high semantic prior knowledge to enhance feature extraction in the second phase and adaptively fuses the two features, similar to fine observation. Additionally, since radiologists often focus on the shape and size of lesions for diagnosis, we propose an adaptive detection head strategy to aggregate multi-scale features. Our proposed method achieves an AP of 70.3% and AP50 of 99.0% on the thyroid ultrasound dataset and meets the real-time requirement. The code is available at https://github.com/HIT-wanglingtao/Thinking-Twice. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 367,462 |
1603.08336 | Distributed Fusion of Labeled Multi-Object Densities Via Label Spaces
Matching | In this paper, we address the problem of distributed multi-target tracking with labeled set filters in the framework of Generalized Covariance Intersection (GCI). Our analyses show that the label space mismatching (LS-DM) phenomenon, which means that the same realization drawn from the label spaces of different sensors does not have the same implication, is quite common in practical scenarios and may cause serious problems. Our contributions are two-fold. Firstly, we provide a principled mathematical definition of "label spaces matching (LS-M)" based on information divergence, which is also referred to as the LS-M criterion. Then, to handle the LS-DM, we propose a novel two-step distributed fusion algorithm, named GCI fusion via label spaces matching (GCI-LSM). The first step is to match the label spaces from different sensors. To this end, we build a ranked assignment problem and design a cost function consistent with the LS-M criterion to seek the optimal matching correspondence between the label spaces of different sensors. The second step is to perform the GCI fusion on the matched label space. We also derive the GCI fusion with generic labeled multi-object (LMO) densities based on LS-M, which is the foundation of labeled distributed fusion algorithms. Simulation results for a Gaussian mixture implementation highlight the performance of the proposed GCI-LSM algorithm in two different tracking scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 53,772
2409.10021 | LithoHoD: A Litho Simulator-Powered Framework for IC Layout Hotspot
Detection | Recent advances in VLSI fabrication technology have led to die shrinkage and increased layout density, creating an urgent demand for advanced hotspot detection techniques. However, by taking an object detection network as the backbone, recent learning-based hotspot detectors learn to recognize only the problematic layout patterns in the training data. This fact makes these hotspot detectors difficult to generalize to real-world scenarios. We propose a novel lithography simulator-powered hotspot detection framework to overcome this difficulty. Our framework integrates a lithography simulator with an object detection backbone, merging the extracted latent features from both the simulator and the object detector via well-designed cross-attention blocks. Consequently, the proposed framework can be used to detect potential hotspot regions based on i) the variation of possible circuit shape deformation estimated by the lithography simulator, and ii) the problematic layout patterns already known. To this end, we utilize RetinaNet with a feature pyramid network as the object detection backbone and leverage LithoNet as the lithography simulator. Extensive experiments demonstrate that our proposed simulator-guided hotspot detection framework outperforms previous state-of-the-art methods on real-world data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 488,579
2305.13944 | Acquiring Frame Element Knowledge with Deep Metric Learning for Semantic
Frame Induction | The semantic frame induction tasks are defined as a clustering of words into the frames that they evoke, and a clustering of their arguments according to the frame element roles that they should fill. In this paper, we address the latter task of argument clustering, which aims to acquire frame element knowledge, and propose a method that applies deep metric learning. In this method, a pre-trained language model is fine-tuned to be suitable for distinguishing frame element roles through the use of frame-annotated data, and argument clustering is performed with embeddings obtained from the fine-tuned model. Experimental results on FrameNet demonstrate that our method achieves substantially better performance than existing methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 366,778 |
2205.08325 | GraphMapper: Efficient Visual Navigation by Scene Graph Generation | Understanding the geometric relationships between objects in a scene is a core capability in enabling both humans and autonomous agents to navigate in new environments. A sparse, unified representation of the scene topology will allow agents to act efficiently to move through their environment, communicate the environment state with others, and utilize the representation for diverse downstream tasks. To this end, we propose a method to train an autonomous agent to learn to accumulate a 3D scene graph representation of its environment by simultaneously learning to navigate through said environment. We demonstrate that our approach, GraphMapper, enables the learning of effective navigation policies through fewer interactions with the environment than vision-based systems alone. Further, we show that GraphMapper can act as a modular scene encoder to operate alongside existing learning-based solutions to not only increase navigational efficiency but also generate intermediate scene representations that are useful for other future tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 296,892
1812.02425 | MEAL: Multi-Model Ensemble via Adversarial Learning | Often the best performing deep neural models are ensembles of multiple base-level networks. Unfortunately, the space required to store these many networks, and the time required to execute them at test-time, prohibits their use in applications where test sets are large (e.g., ImageNet). In this paper, we present a method for compressing large, complex trained ensembles into a single network, where knowledge from a variety of trained deep neural networks (DNNs) is distilled and transferred to a single DNN. In order to distill diverse knowledge from different trained (teacher) models, we propose to use adversarial-based learning strategy where we define a block-wise training loss to guide and optimize the predefined student network to recover the knowledge in teacher models, and to promote the discriminator network to distinguish teacher vs. student features simultaneously. The proposed ensemble method (MEAL) of transferring distilled knowledge with adversarial learning exhibits three important advantages: (1) the student network that learns the distilled knowledge with discriminators is optimized better than the original model; (2) fast inference is realized by a single forward pass, while the performance is even better than traditional ensembles from multi-original models; (3) the student network can learn the distilled knowledge from a teacher model that has arbitrary structures. Extensive experiments on CIFAR-10/100, SVHN and ImageNet datasets demonstrate the effectiveness of our MEAL method. On ImageNet, our ResNet-50 based MEAL achieves top-1/5 21.79%/5.99% val error, which outperforms the original model by 2.06%/1.14%. Code and models are available at: https://github.com/AaronHeee/MEAL | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 115,756 |
2410.00643 | Cross-Camera Data Association via GNN for Supervised Graph Clustering | Cross-camera data association is one of the cornerstones of the multi-camera computer vision field. Although often integrated into detection and tracking tasks through architecture design and loss definition, it is also recognized as an independent challenge. The ultimate goal is to connect appearances of one item from all cameras, wherever it is visible. Therefore, one possible perspective on this task involves supervised clustering of the affinity graph, where nodes are instances captured by all cameras. They are represented by appropriate visual features and positional attributes. We leverage the advantages of GNN (Graph Neural Network) architecture to examine nodes' relations and generate representative edge embeddings. These embeddings are then classified to determine the existence or non-existence of connections in node pairs. Therefore, the core of this approach is graph connectivity prediction. Experimental validation was conducted on multicamera pedestrian datasets across diverse environments such as the laboratory, basketball court, and terrace. Our proposed method, named SGC-CCA, outperformed the state-of-the-art method named GNN-CCA across all clustering metrics, offering an end-to-end clustering solution without the need for graph post-processing. The code is available at https://github.com/djordjened92/cca-gnnclust. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 493,449 |
2009.13659 | Network Analysis of the 2016 Presidential Campaign Tweets | We applied complex network analysis to ~27,000 tweets posted by the 2016 presidential election's principal participants in the USA. We identified the stages of the election campaigns and the recurring topics addressed by the candidates. Finally, we revealed the leader-follower relationships between the candidates. We conclude that Secretary Hillary Clinton's Twitter performance was subordinate to that of Donald Trump, which may have been one factor that led to her electoral defeat. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 197,798 |
2402.02358 | Repairing Reed-Solomon Codes over Prime Fields via Exponential Sums | This paper presents two repair schemes for low-rate Reed-Solomon (RS) codes over prime fields that can repair any node by downloading a constant number of bits from each surviving node. The total bandwidth resulting from these schemes is greater than that incurred during trivial repair; however, this is particularly relevant in the context of leakage-resilient secret sharing. In that framework, our results provide attacks showing that $k$-out-of-$n$ Shamir's Secret Sharing over prime fields for small $k$ is not leakage-resilient, even when the parties leak only a constant number of bits. To the best of our knowledge, these are the first such attacks. Our results are derived from a novel connection between exponential sums and the repair of RS codes. Specifically, we establish that non-trivial bounds on certain exponential sums imply the existence of explicit nonlinear repair schemes for RS codes over prime fields. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 426,518 |
2006.03236 | Funnel-Transformer: Filtering out Sequential Redundancy for Efficient
Language Processing | With the success of language pretraining, it is highly desirable to develop more efficient architectures of good scalability that can exploit the abundant unlabeled data at a lower cost. To improve the efficiency, we examine the much-overlooked redundancy in maintaining a full-length token-level presentation, especially for tasks that only require a single-vector presentation of the sequence. With this intuition, we propose Funnel-Transformer which gradually compresses the sequence of hidden states to a shorter one and hence reduces the computation cost. More importantly, by re-investing the saved FLOPs from length reduction in constructing a deeper or wider model, we further improve the model capacity. In addition, to perform token-level predictions as required by common pretraining objectives, Funnel-Transformer is able to recover a deep representation for each token from the reduced hidden sequence via a decoder. Empirically, with comparable or fewer FLOPs, Funnel-Transformer outperforms the standard Transformer on a wide variety of sequence-level prediction tasks, including text classification, language understanding, and reading comprehension. The code and pretrained checkpoints are available at https://github.com/laiguokun/Funnel-Transformer. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 180,259 |
2405.08039 | Vehicles Swarm Intelligence: Cooperation in both Longitudinal and
Lateral Dimensions | Longitudinal-only platooning methods are facing great challenges on running mobility, since they may be impeded by slow-moving vehicles from time to time. To address this issue, this paper proposes a vehicles swarming method coupled both longitudinal and lateral cooperation. The proposed method bears the following contributions: i) enhancing driving mobility by swarming like a bee colony; ii) ensuring the success rate of overtaking; iii) cruising as a string of platoon to preserve sustainability. Evaluations indicate that the proposed method is capable of maneuvering a vehicle swarm to overtake slow-moving vehicles safely and successfully. The proposed method is confirmed to improve running mobility by 12.04%. Swarming safety is ensured by a safe following distance. The proposed method's influence on traffic is limited within five upstream vehicles. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 453,969 |
2006.01415 | A Layered Learning Approach to Scaling in Learning Classifier Systems
for Boolean Problems | Learning classifier systems (LCSs) originated from cognitive-science research but migrated such that LCS became powerful classification techniques. Modern LCSs can be used to extract building blocks of knowledge to solve more difficult problems in the same or a related domain. Recent works on LCSs showed that the knowledge reuse through the adoption of Code Fragments, GP-like tree-based programs, into LCSs could provide advances in scaling. However, since solving hard problems often requires constructing high-level building blocks, which also results in an intractable search space, a limit of scaling will eventually be reached. Inspired by human problem-solving abilities, XCSCF* can reuse learned knowledge and learned functionality to scale to complex problems by transferring them from simpler problems using layered learning. However, this method was unrefined and suited to only the Multiplexer problem domain. In this paper, we propose improvements to XCSCF* to enable it to be robust across multiple problem domains. This is demonstrated on the benchmarks Multiplexer, Carry-one, Majority-on, and Even-parity domains. The required base axioms necessary for learning are proposed, methods for transfer learning in LCSs developed and learning recast as a decomposition into a series of subordinate problems. Results show that from a conventional tabula rasa, with only a vague notion of what subordinate problems might be relevant, it is possible to capture the general logic behind the tested domains, so the advanced system is capable of solving any individual n-bit Multiplexer, n-bit Carry-one, n-bit Majority-on, or n-bit Even-parity problem. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 179,768 |
1511.01259 | Transforming Wikipedia into an Ontology-based Information Retrieval
Search Engine for Local Experts using a Third-Party Taxonomy | Wikipedia is widely used for finding general information about a wide variety of topics. Its vocation is not to provide local information. For example, it provides plot, cast, and production information about a given movie, but not showing times in your local movie theatre. Here we describe how we can connect local information to Wikipedia, without altering its content. The case study we present involves finding local scientific experts. Using a third-party taxonomy, independent from Wikipedia's category hierarchy, we index information connected to our local experts, present in their activity reports, and we re-index Wikipedia content using the same taxonomy. The connections between Wikipedia pages and local expert reports are stored in a relational database, accessible through as public SPARQL endpoint. A Wikipedia gadget (or plugin) activated by the interested user, accesses the endpoint as each Wikipedia page is accessed. An additional tab on the Wikipedia page allows the user to open up a list of teams of local experts associated with the subject matter in the Wikipedia page. The technique, though presented here as a way to identify local experts, is generic, in that any third party taxonomy, can be used in this to connect Wikipedia to any non-Wikipedia data source. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 48,485 |
1707.06791 | Learning Task Priorities from Demonstrations | Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning from demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in: two different tasks with humanoids requiring the learning of priorities and a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 77,486 |
2408.06158 | OmniCLIP: Adapting CLIP for Video Recognition with Spatial-Temporal
Omni-Scale Feature Learning | Recent Vision-Language Models (VLMs) \textit{e.g.} CLIP have made great progress in video recognition. Despite the improvement brought by the strong visual backbone in extracting spatial features, CLIP still falls short in capturing and integrating spatial-temporal features which is essential for video recognition. In this paper, we propose OmniCLIP, a framework that adapts CLIP for video recognition by focusing on learning comprehensive features encompassing spatial, temporal, and dynamic spatial-temporal scales, which we refer to as omni-scale features. This is achieved through the design of spatial-temporal blocks that include parallel temporal adapters (PTA), enabling efficient temporal modeling. Additionally, we introduce a self-prompt generator (SPG) module to capture dynamic object spatial features. The synergy between PTA and SPG allows OmniCLIP to discern varying spatial information across frames and assess object scales over time. We have conducted extensive experiments in supervised video recognition, few-shot video recognition, and zero-shot recognition tasks. The results demonstrate the effectiveness of our method, especially with OmniCLIP achieving a top-1 accuracy of 74.30\% on HMDB51 in a 16-shot setting, surpassing the recent MotionPrompt approach even with full training data. The code is available at \url{https://github.com/XiaoBuL/OmniCLIP}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 480,101 |
2202.03775 | Addressing Data Scarcity in Multimodal User State Recognition by
Combining Semi-Supervised and Supervised Learning | Detecting mental states of human users is crucial for the development of cooperative and intelligent robots, as it enables the robot to understand the user's intentions and desires. Despite their importance, it is difficult to obtain a large amount of high quality data for training automatic recognition algorithms as the time and effort required to collect and label such data is prohibitively high. In this paper we present a multimodal machine learning approach for detecting dis-/agreement and confusion states in a human-robot interaction environment, using just a small amount of manually annotated data. We collect a data set by conducting a human-robot interaction study and develop a novel preprocessing pipeline for our machine learning approach. By combining semi-supervised and supervised architectures, we are able to achieve an average F1-score of 81.1\% for dis-/agreement detection with a small amount of labeled data and a large unlabeled data set, while simultaneously increasing the robustness of the model compared to the supervised approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 279,339 |
2201.03186 | MyoPS: A Benchmark of Myocardial Pathology Segmentation Combining
Three-Sequence Cardiac Magnetic Resonance Images | Assessment of myocardial viability is essential in diagnosis and treatment management of patients suffering from myocardial infarction, and classification of pathology on myocardium is the key to this assessment. This work defines a new task of medical image analysis, i.e., to perform myocardial pathology segmentation (MyoPS) combining three-sequence cardiac magnetic resonance (CMR) images, which was first proposed in the MyoPS challenge, in conjunction with MICCAI 2020. The challenge provided 45 paired and pre-aligned CMR images, allowing algorithms to combine the complementary information from the three CMR sequences for pathology segmentation. In this article, we provide details of the challenge, survey the works from fifteen participants and interpret their methods according to five aspects, i.e., preprocessing, data augmentation, learning strategy, model architecture and post-processing. In addition, we analyze the results with respect to different factors, in order to examine the key obstacles and explore potential of solutions, as well as to provide a benchmark for future research. We conclude that while promising results have been reported, the research is still in the early stage, and more in-depth exploration is needed before a successful application to the clinics. Note that MyoPS data and evaluation tool continue to be publicly available upon registration via its homepage (www.sdspeople.fudan.edu.cn/zhuangxiahai/0/myops20/). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 274,783 |
2109.14657 | Understanding Egocentric Hand-Object Interactions from Hand Pose
Estimation | In this paper, we address the problem of estimating the hand pose from the egocentric view when the hand is interacting with objects. Specifically, we propose a method to label a dataset Ego-Siam which contains the egocentric images pair-wisely. We also use the collected pairwise data to train our encoder-decoder style network which has been proven efficient in. This could bring extra training efficiency and testing accuracy. Our network is lightweight and can be performed with over 30 FPS with an outdated GPU. We demonstrate that our method outperforms Mueller et al. which is the state of the art work dealing with egocentric hand-object interaction problems on the GANerated dataset. To show the ability to preserve the semantic information of our method, we also report the performance of grasp type classification on GUN-71 dataset and outperforms the benchmark by only using the predicted 3-d hand pose. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 258,016 |
2406.14753 | A General Control-Theoretic Approach for Reinforcement Learning: Theory
and Algorithms | We devise a control-theoretic reinforcement learning approach to support direct learning of the optimal policy. We establish various theoretical properties of our approach, such as convergence and optimality of our analog of the Bellman operator and Q-learning, a new control-policy-variable gradient theorem, and a specific gradient ascent algorithm based on this theorem within the context of a specific control-theoretic framework. We empirically evaluate the performance of our control theoretic approach on several classical reinforcement learning tasks, demonstrating significant improvements in solution quality, sample complexity, and running time of our approach over state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 466,446 |
2102.02956 | DetectorGuard: Provably Securing Object Detectors against Localized
Patch Hiding Attacks | State-of-the-art object detectors are vulnerable to localized patch hiding attacks, where an adversary introduces a small adversarial patch to make detectors miss the detection of salient objects. The patch attacker can carry out a physical-world attack by printing and attaching an adversarial patch to the victim object. In this paper, we propose DetectorGuard as the first general framework for building provably robust object detectors against localized patch hiding attacks. DetectorGuard is inspired by recent advancements in robust image classification research; we ask: can we adapt robust image classifiers for robust object detection? Unfortunately, due to their task difference, an object detector naively adapted from a robust image classifier 1) may not necessarily be robust in the adversarial setting or 2) even maintain decent performance in the clean setting. To build a high-performance robust object detector, we propose an objectness explaining strategy: we adapt a robust image classifier to predict objectness for every image location and then explain each objectness using the bounding boxes predicted by a conventional object detector. If all objectness is well explained, we output the predictions made by the conventional object detector; otherwise, we issue an attack alert. Notably, 1) in the adversarial setting, we formally prove the end-to-end robustness of DetectorGuard on certified objects, i.e., it either detects the object or triggers an alert, against any patch hiding attacker within our threat model; 2) in the clean setting, we have almost the same performance as state-of-the-art object detectors. Our evaluation on the PASCAL VOC, MS COCO, and KITTI datasets further demonstrates that DetectorGuard achieves the first provable robustness against localized patch hiding attacks at a negligible cost (<1%) of clean performance. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 218,577 |
2207.05322 | Using Interpretable Machine Learning to Predict Maternal and Fetal
Outcomes | Most pregnancies and births result in a good outcome, but complications are not uncommon and when they do occur, they can be associated with serious implications for mothers and babies. Predictive modeling has the potential to improve outcomes through better understanding of risk factors, heightened surveillance, and more timely and appropriate interventions, thereby helping obstetricians deliver better care. For three types of complications we identify and study the most important risk factors using Explainable Boosting Machine (EBM), a glass box model, in order to gain intelligibility: (i) Severe Maternal Morbidity (SMM), (ii) shoulder dystocia, and (iii) preterm preeclampsia. While using the interpretability of EBM's to reveal surprising insights into the features contributing to risk, our experiments show EBMs match the accuracy of other black-box ML methods such as deep neural nets and random forests. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 307,500 |
2302.06133 | Deep Transfer Tensor Factorization for Multi-View Learning | This paper studies the data sparsity problem in multi-view learning. To solve data sparsity problem in multiview ratings, we propose a generic architecture of deep transfer tensor factorization (DTTF) by integrating deep learning and cross-domain tensor factorization, where the side information is embedded to provide effective compensation for the tensor sparsity. Then we exhibit instantiation of our architecture by combining stacked denoising autoencoder (SDAE) and CANDECOMP/ PARAFAC (CP) tensor factorization in both source and target domains, where the side information of both users and items is tightly coupled with the sparse multi-view ratings and the latent factors are learned based on the joint optimization. We tightly couple the multi-view ratings and the side information to improve cross-domain tensor factorization based recommendations. Experimental results on real-world datasets demonstrate that our DTTF schemes outperform state-of-the-art methods on multi-view rating predictions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 345,302 |
2410.09355 | On Divergence Measures for Training GFlowNets | Generative Flow Networks (GFlowNets) are amortized inference models designed to sample from unnormalized distributions over composable objects, with applications in generative modeling for tasks in fields such as causal discovery, NLP, and drug discovery. Traditionally, the training procedure for GFlowNets seeks to minimize the expected log-squared difference between a proposal (forward policy) and a target (backward policy) distribution, which enforces certain flow-matching conditions. While this training procedure is closely related to variational inference (VI), directly attempting standard Kullback-Leibler (KL) divergence minimization can lead to proven biased and potentially high-variance estimators. Therefore, we first review four divergence measures, namely, Renyi-$\alpha$'s, Tsallis-$\alpha$'s, reverse and forward KL's, and design statistically efficient estimators for their stochastic gradients in the context of training GFlowNets. Then, we verify that properly minimizing these divergences yields a provably correct and empirically effective training scheme, often leading to significantly faster convergence than previously proposed optimization. To achieve this, we design control variates based on the REINFORCE leave-one-out and score-matching estimators to reduce the variance of the learning objectives' gradients. Our work contributes by narrowing the gap between GFlowNets training and generalized variational approximations, paving the way for algorithmic ideas informed by the divergence minimization viewpoint. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 497,555 |
2209.09391 | QuestSim: Human Motion Tracking from Sparse Sensors with Simulated
Avatars | Real-time tracking of human body motion is crucial for interactive and immersive experiences in AR/VR. However, very limited sensor data about the body is available from standalone wearable devices such as HMDs (Head Mounted Devices) or AR glasses. In this work, we present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers, and simulates plausible and physically valid full body motions. Using high quality full body motion as dense supervision during training, a simple policy network can learn to output appropriate torques for the character to balance, walk, and jog, while closely following the input signals. Our results demonstrate surprisingly similar leg motions to ground truth without any observations of the lower body, even when the input is only the 6D transformations of the HMD. We also show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 318,484 |
2003.11948 | Bag of biterms modeling for short texts | Analyzing texts from social media encounters many challenges due to their unique characteristics of shortness, massiveness, and dynamic. Short texts do not provide enough context information, causing the failure of the traditional statistical models. Furthermore, many applications often face with massive and dynamic short texts, causing various computational challenges to the current batch learning algorithms. This paper presents a novel framework, namely Bag of Biterms Modeling (BBM), for modeling massive, dynamic, and short text collections. BBM comprises of two main ingredients: (1) the concept of Bag of Biterms (BoB) for representing documents, and (2) a simple way to help statistical models to include BoB. Our framework can be easily deployed for a large class of probabilistic models, and we demonstrate its usefulness with two well-known models: Latent Dirichlet Allocation (LDA) and Hierarchical Dirichlet Process (HDP). By exploiting both terms (words) and biterms (pairs of words), the major advantages of BBM are: (1) it enhances the length of the documents and makes the context more coherent by emphasizing the word connotation and co-occurrence via Bag of Biterms, (2) it inherits inference and learning algorithms from the primitive to make it straightforward to design online and streaming algorithms for short texts. Extensive experiments suggest that BBM outperforms several state-of-the-art models. We also point out that the BoB representation performs better than the traditional representations (e.g, Bag of Words, tf-idf) even for normal texts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 169,765 |
2407.07046 | CorMulT: A Semi-supervised Modality Correlation-aware Multimodal
Transformer for Sentiment Analysis | Multimodal sentiment analysis is an active research area that combines multiple data modalities, e.g., text, image and audio, to analyze human emotions and benefits a variety of applications. Existing multimodal sentiment analysis methods can be classified as modality interaction-based methods, modality transformation-based methods and modality similarity-based methods. However, most of these methods highly rely on the strong correlations between modalities, and cannot fully uncover and utilize the correlations between modalities to enhance sentiment analysis. Therefore, these methods usually achieve bad performance for identifying the sentiment of multimodal data with weak correlations. To address this issue, we proposed a two-stage semi-supervised model termed Correlation-aware Multimodal Transformer (CorMulT) which consists pre-training stage and prediction stage. At the pre-training stage, a modality correlation contrastive learning module is designed to efficiently learn modality correlation coefficients between different modalities. At the prediction stage, the learned correlation coefficients are fused with modality representations to make the sentiment prediction. According to the experiments on the popular multimodal dataset CMU-MOSEI, CorMulT obviously surpasses state-of-the-art multimodal sentiment analysis methods. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 471,622 |
2310.14297 | APP-RUSS: Automated Path Planning for Robotic Ultrasound System | Autonomous robotic ultrasound System (RUSS) has been extensively studied. However, fully automated ultrasound image acquisition is still challenging, partly due to the lack of study in combining two phases of path planning: guiding the ultrasound probe to the scan target and covering the scan surface or volume. This paper presents a system of Automated Path Planning for RUSS (APP-RUSS). Our focus is on the first phase of automation, which emphasizes directing the ultrasound probe's path toward the target over extended distances. Specifically, our APP-RUSS system consists of a RealSense D405 RGB-D camera that is employed for visual guidance of the UR5e robotic arm and a cubic Bezier curve path planning model that is customized for delivering the probe to the recognized target. APP-RUSS can contribute to understanding the integration of the two phases of path planning in robotic ultrasound imaging, paving the way for its clinical adoption. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 401,794 |
2003.02575 | DANTE: A framework for mining and monitoring darknet traffic | Trillions of network packets are sent over the Internet to destinations which do not exist. This 'darknet' traffic captures the activity of botnets and other malicious campaigns aiming to discover and compromise devices around the world. In order to mine threat intelligence from this data, one must be able to handle large streams of logs and represent the traffic patterns in a meaningful way. However, by observing how network ports (services) are used, it is possible to capture the intent of each transmission. In this paper, we present DANTE: a framework and algorithm for mining darknet traffic. DANTE learns the meaning of targeted network ports by applying Word2Vec to observed port sequences. Then, when a host sends a new sequence, DANTE represents the transmission as the average embedding of the ports found that sequence. Finally, DANTE uses a novel and incremental time-series cluster tracking algorithm on observed sequences to detect recurring behaviors and new emerging threats. To evaluate the system, we ran DANTE on a full year of darknet traffic (over three Tera-Bytes) collected by the largest telecommunications provider in Europe, Deutsche Telekom and analyzed the results. DANTE discovered 1,177 new emerging threats and was able to track malicious campaigns over time. We also compared DANTE to the current best approach and found DANTE to be more practical and effective at detecting darknet traffic patterns. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 166,972 |
1805.08622 | Transitions, Losses, and Re-parameterizations: Elements of Prediction
Games | This thesis presents some geometric insights into three different types of two player prediction games -- namely general learning task, prediction with expert advice, and online convex optimization. These games differ in the nature of the opponent (stochastic, adversarial, or intermediate), the order of the players' move, and the utility function. The insights shed some light on the understanding of the intrinsic barriers of the prediction problems and the design of computationally efficient learning algorithms with strong theoretical guarantees (such as generalizability, statistical consistency, and constant regret etc.). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 98,190 |
2207.01426 | Dynamic Contrastive Distillation for Image-Text Retrieval | Although the vision-and-language pretraining (VLP) equipped cross-modal image-text retrieval (ITR) has achieved remarkable progress in the past two years, it suffers from a major drawback: the ever-increasing size of VLP models restricts its deployment to real-world search scenarios (where the high latency is unacceptable). To alleviate this problem, we present a novel plug-in dynamic contrastive distillation (DCD) framework to compress the large VLP models for the ITR task. Technically, we face the following two challenges: 1) the typical uni-modal metric learning approach is difficult to directly apply to cross-modal tasks, due to the limited GPU memory for optimizing too many negative samples while handling cross-modal fusion features. 2) it is inefficient to statically optimize the student network from different hard samples, which have different effects on distillation learning and student network optimization. We try to overcome these challenges from two points. First, to achieve multi-modal contrastive learning, and balance the training costs and effects, we propose to use a teacher network to estimate the difficult samples for students, making the students absorb the powerful knowledge from pre-trained teachers, and master the knowledge from hard samples. Second, to dynamically learn from hard sample pairs, we propose dynamic distillation to dynamically learn samples of different difficulties, from the perspective of better balancing the difficulty of knowledge and students' self-learning ability. We successfully apply our proposed DCD strategy to two state-of-the-art vision-language pretrained models, i.e. ViLT and METER. Extensive experiments on MS-COCO and Flickr30K benchmarks show the effectiveness and efficiency of our DCD framework. Encouragingly, we can speed up the inference by at least 129$\times$ compared to the existing ITR models. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | true | 306,173
2404.01701 | On the Role of Summary Content Units in Text Summarization Evaluation | At the heart of the Pyramid evaluation method for text summarization lie human written summary content units (SCUs). These SCUs are concise sentences that decompose a summary into small facts. Such SCUs can be used to judge the quality of a candidate summary, possibly partially automated via natural language inference (NLI) systems. Interestingly, with the aim to fully automate the Pyramid evaluation, Zhang and Bansal (2021) show that SCUs can be approximated by automatically generated semantic role triplets (STUs). However, several questions currently lack answers, in particular: i) Are there other ways of approximating SCUs that can offer advantages? ii) Under which conditions are SCUs (or their approximations) offering the most value? In this work, we examine two novel strategies to approximate SCUs: generating SCU approximations from AMR meaning representations (SMUs) and from large language models (SGUs), respectively. We find that while STUs and SMUs are competitive, the best approximation quality is achieved by SGUs. We also show through a simple sentence-decomposition baseline (SSUs) that SCUs (and their approximations) offer the most value when ranking short summaries, but may not help as much when ranking systems or longer summaries. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 443,538 |
2402.11347 | PhaseEvo: Towards Unified In-Context Prompt Optimization for Large
Language Models | Crafting an ideal prompt for Large Language Models (LLMs) is a challenging task that demands significant resources and expert human input. Existing work treats the optimization of prompt instruction and in-context learning examples as distinct problems, leading to sub-optimal prompt performance. This research addresses this limitation by establishing a unified in-context prompt optimization framework, which aims to achieve joint optimization of the prompt instruction and examples. However, formulating such optimization in the discrete and high-dimensional natural language space introduces challenges in terms of convergence and computational efficiency. To overcome these issues, we present PhaseEvo, an efficient automatic prompt optimization framework that combines the generative capability of LLMs with the global search proficiency of evolution algorithms. Our framework features a multi-phase design incorporating innovative LLM-based mutation operators to enhance search efficiency and accelerate convergence. We conduct an extensive evaluation of our approach across 35 benchmark tasks. The results demonstrate that PhaseEvo significantly outperforms the state-of-the-art baseline methods by a large margin whilst maintaining good efficiency. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 430,346 |
2403.18056 | Self-Clustering Hierarchical Multi-Agent Reinforcement Learning with
Extensible Cooperation Graph | Multi-Agent Reinforcement Learning (MARL) has been successful in solving many cooperative challenges. However, classic non-hierarchical MARL algorithms still cannot address various complex multi-agent problems that require hierarchical cooperative behaviors. The cooperative knowledge and policies learned in non-hierarchical algorithms are implicit and not interpretable, thereby restricting the integration of existing knowledge. This paper proposes a novel hierarchical MARL model called Hierarchical Cooperation Graph Learning (HCGL) for solving general multi-agent problems. HCGL has three components: a dynamic Extensible Cooperation Graph (ECG) for achieving self-clustering cooperation; a group of graph operators for adjusting the topology of ECG; and an MARL optimizer for training these graph operators. HCGL's key distinction from other MARL models is that the behaviors of agents are guided by the topology of ECG instead of policy neural networks. ECG is a three-layer graph consisting of an agent node layer, a cluster node layer, and a target node layer. To manipulate the ECG topology in response to changing environmental conditions, four graph operators are trained to adjust the edge connections of ECG dynamically. The hierarchical feature of ECG provides a unique approach to merge primitive actions (actions executed by the agents) and cooperative actions (actions executed by the clusters) into a unified action space, allowing us to integrate fundamental cooperative knowledge into an extensible interface. In our experiments, the HCGL model has shown outstanding performance in multi-agent benchmarks with sparse rewards. We also verify that HCGL can easily be transferred to large-scale scenarios with high zero-shot transfer success rates. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 441,740 |
2501.11525 | Technical Report for the Forgotten-by-Design Project: Targeted
Obfuscation for Machine Learning | The right to privacy, enshrined in various human rights declarations, faces new challenges in the age of artificial intelligence (AI). This paper explores the concept of the Right to be Forgotten (RTBF) within AI systems, contrasting it with traditional data erasure methods. We introduce Forgotten by Design, a proactive approach to privacy preservation that integrates instance-specific obfuscation techniques during the AI model training process. Unlike machine unlearning, which modifies models post-training, our method prevents sensitive data from being embedded in the first place. Using the LIRA membership inference attack, we identify vulnerable data points and propose defenses that combine additive gradient noise and weighting schemes. Our experiments on the CIFAR-10 dataset demonstrate that our techniques reduce privacy risks by at least an order of magnitude while maintaining model accuracy (at 95% significance). Additionally, we present visualization methods for the privacy-utility trade-off, providing a clear framework for balancing privacy risk and model accuracy. This work contributes to the development of privacy-preserving AI systems that align with human cognitive processes of motivated forgetting, offering a robust framework for safeguarding sensitive information and ensuring compliance with privacy regulations. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 525,955 |
1911.07817 | Skin Lesion Classification Using Deep Neural Network | This paper reports the methods and techniques we have developed for classifying the dermoscopic images (task 1) of the ISIC 2019 challenge dataset for skin lesion classification. Our approach uses an ensemble of deep neural networks together with techniques for handling unbalanced data sets, the main difficulty of this challenge, in order to increase the performance of the CNN models. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 153,978
1908.00681 | FlowSense: A Natural Language Interface for Visual Data Exploration
within a Dataflow System | Dataflow visualization systems enable flexible visual data exploration by allowing the user to construct a dataflow diagram that composes query and visualization modules to specify system functionality. However, learning dataflow diagram usage presents overhead that often discourages the user. In this work, we design FlowSense, a natural language interface for dataflow visualization systems that utilizes state-of-the-art natural language processing techniques to assist dataflow diagram construction. FlowSense employs a semantic parser with special utterance tagging and special utterance placeholders to generalize to different datasets and dataflow diagrams. It explicitly presents recognized dataset and diagram special utterances to the user for dataflow context awareness. With FlowSense the user can expand and adjust dataflow diagrams more conveniently via plain English. We apply FlowSense to the VisFlow subset-flow visualization system to enhance its usability. We evaluate FlowSense with a case study with domain experts on a real-world data analysis problem and a formal user study. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,561
2210.15395 | Querying Incomplete Numerical Data: Between Certain and Possible Answers | Queries with aggregation and arithmetic operations, as well as incomplete data, are common in real-world databases, but we lack a good understanding of how they should interact. On the one hand, systems based on SQL provide ad-hoc rules for numerical nulls; on the other, theoretical research largely concentrates on the standard notions of certain and possible answers. In the presence of numerical attributes and aggregates, however, these answers are often meaningless, returning either too little or too much. Our goal is to define a principled framework for databases with numerical nulls and answering queries with arithmetic and aggregations over them. Towards this goal, we assume that missing values in numerical attributes are given by probability distributions associated with marked nulls. This yields a model of probabilistic bag databases in which tuples are not necessarily independent, since nulls can repeat. We provide a general compositional framework for query answering, and then concentrate on queries that resemble standard SQL with arithmetic and aggregation. We show that these queries are measurable, and that their outputs have a finite representation. Moreover, since the classical forms of answers provide little information in the numerical setting, we look at the probability that numerical values in output tuples belong to specific intervals. Even though their exact computation is intractable, we show efficient approximation algorithms to compute such probabilities. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 326,932
2012.13044 | Union-net: A deep neural network model adapted to small data sets | In real applications, generally only small data sets can be obtained. At present, most practical applications of machine learning use classic models based on big data to solve the problem of small data sets. However, the deep neural network model has a complex structure and huge model parameters, and training requires more advanced equipment, which brings certain difficulties to the application. Therefore, this paper proposes the concept of union convolution, designing a light deep network model, Union-net, with a shallow network structure adapted to small data sets. This model combines convolutional network units with different combinations of the same input to form a union module. Each union module is equivalent to a convolutional layer. The serial input and output between the 3 modules constitute a "3-layer" neural network. The output of each union module is fused and added as the input of the last convolutional layer to form a complex network with a 4-layer network structure. This solves the problem that the deep network model is too deep and the transmission path is too long, which causes the loss of underlying information during transmission. Because the model has fewer parameters and fewer channels, it can better adapt to small data sets and avoids the overfitting that deep network models are prone to when trained on small data sets. We use the public data sets CIFAR-10 and 17flowers to conduct multi-classification experiments. Experiments show that the Union-net model performs well in classification of both large and small data sets. It has high practical value in daily application scenarios. The model code is published at https://github.com/yeaso/union-net | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 213,091
2207.09143 | What Matters for 3D Scene Flow Network | 3D scene flow estimation from point clouds is a low-level 3D motion perception task in computer vision. Flow embedding is a commonly used technique in scene flow estimation, and it encodes the point motion between two consecutive frames. Thus, it is critical for the flow embeddings to capture the correct overall direction of the motion. However, previous works only search locally to determine a soft correspondence, ignoring the distant points that turn out to be the actual matching ones. In addition, the estimated correspondence is usually from the forward direction of the adjacent point clouds, and may not be consistent with the estimated correspondence acquired from the backward direction. To tackle these problems, we propose a novel all-to-all flow embedding layer with backward reliability validation during the initial scene flow estimation. Besides, we investigate and compare several design choices in key components of the 3D scene flow network, including the point similarity calculation, input elements of predictor, and predictor & refinement level design. After carefully choosing the most effective designs, we are able to present a model that achieves the state-of-the-art performance on FlyingThings3D and KITTI Scene Flow datasets. Our proposed model surpasses all existing methods by at least 38.2% on FlyingThings3D dataset and 24.7% on KITTI Scene Flow dataset for EPE3D metric. We release our codes at https://github.com/IRMVLab/3DFlow. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 308,813 |
1612.07878 | Markov-Nash Equilibria in Mean-Field Games with Discounted Cost | In this paper, we consider discrete-time dynamic games of the mean-field type with a finite number $N$ of agents subject to an infinite-horizon discounted-cost optimality criterion. The state space of each agent is a locally compact Polish space. At each time, the agents are coupled through the empirical distribution of their states, which affects both the agents' individual costs and their state transition probabilities. We introduce a new solution concept of the Markov-Nash equilibrium, under which a policy is player-by-player optimal in the class of all Markov policies. Under mild assumptions, we demonstrate the existence of a mean-field equilibrium in the infinite-population limit $N \to \infty$, and then show that the policy obtained from the mean-field equilibrium is approximately Markov-Nash when the number of agents $N$ is sufficiently large. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 65,998 |
1208.3845 | Adaptive Graph via Multiple Kernel Learning for Nonnegative Matrix
Factorization | Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas like pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank non-negative matrices that define a parts-based, linear representation of nonnegative data. Recently, Graph regularized NMF (GrNMF) was proposed to find a compact representation, which uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea which engages a Multiple Kernel Learning approach to refine the graph structure that reflects the factorization of the matrix and the new data space. The GrNMF is improved by utilizing the graph refined by the kernel learning, and then a novel kernel learning method is introduced under the GrNMF framework. Our approach shows encouraging results for the proposed algorithm in comparison to state-of-the-art clustering algorithms like NMF, GrNMF, SVD, etc. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 18,147
2110.03171 | Assemblies of neurons learn to classify well-separated distributions | An assembly is a large population of neurons whose synchronous firing is hypothesized to represent a memory, concept, word, and other cognitive categories. Assemblies are believed to provide a bridge between high-level cognitive phenomena and low-level neural activity. Recently, a computational system called the Assembly Calculus (AC), with a repertoire of biologically plausible operations on assemblies, has been shown capable of simulating arbitrary space-bounded computation, but also of simulating complex cognitive phenomena such as language, reasoning, and planning. However, the mechanism whereby assemblies can mediate learning has not been known. Here we present such a mechanism, and prove rigorously that, for simple classification problems defined on distributions of labeled assemblies, a new assembly representing each class can be reliably formed in response to a few stimuli from the class; this assembly is henceforth reliably recalled in response to new stimuli from the same class. Furthermore, such class assemblies will be distinguishable as long as the respective classes are reasonably separated -- for example, when they are clusters of similar assemblies. To prove these results, we draw on random graph theory with dynamic edge weights to estimate sequences of activated vertices, yielding strong generalizations of previous calculations and theorems in this field over the past five years. These theorems are backed up by experiments demonstrating the successful formation of assemblies which represent concept classes on synthetic data drawn from such distributions, and also on MNIST, which lends itself to classification through one assembly per digit. Seen as a learning algorithm, this mechanism is entirely online, generalizes from very few samples, and requires only mild supervision -- all key attributes of learning in a model of the brain. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 259,394
1810.12281 | Three Mechanisms of Weight Decay Regularization | Weight decay is one of the standard tricks in the neural network toolbox, but the reasons for its regularization effect are poorly understood, and recent results have cast doubt on the traditional interpretation in terms of $L_2$ regularization. Literal weight decay has been shown to outperform $L_2$ regularization for optimizers for which they differ. We empirically investigate weight decay for three optimization algorithms (SGD, Adam, and K-FAC) and a variety of network architectures. We identify three distinct mechanisms by which weight decay exerts a regularization effect, depending on the particular optimization algorithm and architecture: (1) increasing the effective learning rate, (2) approximately regularizing the input-output Jacobian norm, and (3) reducing the effective damping coefficient for second-order optimization. Our results provide insight into how to improve the regularization of neural networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 111,731 |
1910.04930 | Random Quadratic Forms with Dependence: Applications to Restricted
Isometry and Beyond | Several important families of computational and statistical results in machine learning and randomized algorithms rely on uniform bounds on quadratic forms of random vectors or matrices. Such results include the Johnson-Lindenstrauss (J-L) Lemma, the Restricted Isometry Property (RIP), randomized sketching algorithms, and approximate linear algebra. The existing results critically depend on statistical independence, e.g., independent entries for random vectors, independent rows for random matrices, etc., which prevent their usage in dependent or adaptive modeling settings. In this paper, we show that such independence is in fact not needed for such results which continue to hold under fairly general dependence structures. In particular, we present uniform bounds on random quadratic forms of stochastic processes which are conditionally independent and sub-Gaussian given another (latent) process. Our setup allows general dependencies of the stochastic process on the history of the latent process and the latent process to be influenced by realizations of the stochastic process. The results are thus applicable to adaptive modeling settings and also allows for sequential design of random vectors and matrices. We also discuss stochastic process based forms of J-L, RIP, and sketching, to illustrate the generality of the results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 148,910 |
2307.06206 | SepVAE: a contrastive VAE to separate pathological patterns from healthy
ones | Contrastive Analysis VAEs (CA-VAEs) are a family of Variational Auto-Encoders (VAEs) that aim at separating the common factors of variation between a background dataset (BG) (i.e., healthy subjects) and a target dataset (TG) (i.e., patients) from the ones that only exist in the target dataset. To do so, these methods separate the latent space into a set of salient features (i.e., proper to the target dataset) and a set of common features (i.e., existing in both datasets). Currently, all models fail to effectively prevent the sharing of information between latent spaces and to capture all salient factors of variation. To this end, we introduce two crucial regularization losses: a disentangling term between common and salient representations and a classification term between background and target samples in the salient space. We show better performance than previous CA-VAE methods on three medical applications and a natural images dataset (CelebA). Code and datasets are available on GitHub https://github.com/neurospin-projects/2023_rlouiset_sepvae. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 378,998
1709.06832 | Atomic Norm Denoising-Based Joint Channel Estimation and Faulty Antenna
Detection for Massive MIMO | We consider joint channel estimation and faulty antenna detection for massive multiple-input multiple-output (MIMO) systems operating in time-division duplexing (TDD) mode. For systems with faulty antennas, we show that the impact of faulty antennas on uplink (UL) data transmission does not vanish even with unlimited number of antennas. However, the signal detection performance can be improved with a priori knowledge on the indices of faulty antennas. This motivates us to propose the approach for simultaneous channel estimation and faulty antenna detection. By exploiting the fact that the degrees of freedom of the physical channel matrix are smaller than the number of free parameters, the channel estimation and faulty antenna detection can be formulated as an extended atomic norm denoising problem and solved efficiently via the alternating direction method of multipliers (ADMM). Furthermore, we improve the computational efficiency by proposing a fast algorithm and show that it is a good approximation of the corresponding extended atomic norm minimization method. Numerical simulations are provided to compare the performances of the proposed algorithms with several existing approaches and demonstrate the performance gains of detecting the indices of faulty antennas. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 81,180 |
2401.03298 | ENSTRECT: A Stage-based Approach to 2.5D Structural Damage Detection | To effectively assess structural damage, it is essential to localize the instances of damage in the physical world of a civil structure. ENSTRECT is a stage-based approach designed to accomplish 2.5D structural damage detection. The method requires an image collection, the relative orientation, and a point cloud. Using these inputs, surface damages are segmented at the image level and then mapped into the point cloud space, resulting in a segmented point cloud. To enable further quantitative analyses, the segmented point cloud is transformed into measurable damage instances: cracks are extracted by contracting the clustered point cloud into a corresponding medial axis. For areal damages, such as spalling and corrosion, a procedure is proposed to compute the bounding polygon based on PCA and alpha shapes. With a localization tolerance of 4cm, ENSTRECT can achieve IoUs of over 90% for cracks, 82% for corrosion, and 41% for spalling. Detection at the instance level yields an AP50 of about 45% (cracks, spalling) and 56% (corrosion). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 420,047 |
1901.00040 | Deep Information Theoretic Registration | This paper establishes an information theoretic framework for deep metric based image registration techniques. We show an exact equivalence between maximum profile likelihood and minimization of joint entropy, an important early information theoretic registration method. We further derive deep classifier-based metrics that can be used with iterated maximum likelihood to achieve Deep Information Theoretic Registration on patches rather than pixels. This alleviates a major shortcoming of previous information theoretic registration approaches, namely the implicit pixel-wise independence assumptions. Our proposed approach does not require well-registered training data; this brings previous fully supervised deep metric registration approaches to the realm of weak supervision. We evaluate our approach on several image registration tasks and show significantly better performance compared to mutual information, specifically when images have substantially different contrasts. This work enables general-purpose registration in applications where current methods are not successful. | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | 117,673 |
2103.04215 | Hierarchical Causal Bandit | Causal bandit is a nascent learning model where an agent sequentially experiments in a causal network of variables, in order to identify the reward-maximizing intervention. Despite the model's wide applicability, existing analytical results are largely restricted to a parallel bandit version where all variables are mutually independent. We introduce in this work the hierarchical causal bandit model as a viable path towards understanding general causal bandits with dependent variables. The core idea is to incorporate a contextual variable that captures the interaction among all variables with direct effects. Using this hierarchical framework, we derive sharp insights on algorithmic design in causal bandits with dependent arms and obtain nearly matching regret bounds in the case of a binary context. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 223,557 |
2302.09491 | X-Adv: Physical Adversarial Object Attacks against X-ray Prohibited Item
Detection | Adversarial attacks are valuable for evaluating the robustness of deep learning models. Existing attacks are primarily conducted on the visible light spectrum (e.g., pixel-wise texture perturbation). However, attacks targeting texture-free X-ray images remain underexplored, despite the widespread application of X-ray imaging in safety-critical scenarios such as the X-ray detection of prohibited items. In this paper, we take the first step toward the study of adversarial attacks targeted at X-ray prohibited item detection, and reveal the serious threats posed by such attacks in this safety-critical scenario. Specifically, we posit that successful physical adversarial attacks in this scenario should be specially designed to circumvent the challenges posed by color/texture fading and complex overlapping. To this end, we propose X-adv to generate physically printable metals that act as an adversarial agent capable of deceiving X-ray detectors when placed in luggage. To resolve the issues associated with color/texture fading, we develop a differentiable converter that facilitates the generation of 3D-printable objects with adversarial shapes, using the gradients of a surrogate model rather than directly generating adversarial textures. To place the printed 3D adversarial objects in luggage with complex overlapped instances, we design a policy-based reinforcement learning strategy to find locations eliciting strong attack performance in worst-case scenarios whereby the prohibited items are heavily occluded by other items. To verify the effectiveness of the proposed X-Adv, we conduct extensive experiments in both the digital and the physical world (employing a commercial X-ray security inspection system for the latter case). Furthermore, we present the physical-world X-ray adversarial attack dataset XAD. | false | false | false | false | true | false | false | false | false | false | false | true | true | false | false | false | false | false | 346,449 |
1103.5639 | Partially Linear Estimation with Application to Sparse Signal Recovery
From Measurement Pairs | We address the problem of estimating a random vector X from two sets of measurements Y and Z, such that the estimator is linear in Y. We show that the partially linear minimum mean squared error (PLMMSE) estimator does not require knowing the joint distribution of X and Y in full, but rather only its second-order moments. This renders it of potential interest in various applications. We further show that the PLMMSE method is minimax-optimal among all estimators that solely depend on the second-order statistics of X and Y. We demonstrate our approach in the context of recovering a signal, which is sparse in a unitary dictionary, from noisy observations of it and of a filtered version of it. We show that in this setting PLMMSE estimation has a clear computational advantage, while its performance is comparable to state-of-the-art algorithms. We apply our approach both in static and dynamic estimation applications. In the former category, we treat the problem of image enhancement from blurred/noisy image pairs, where we show that PLMMSE estimation performs only slightly worse than state-of-the-art algorithms, while running an order of magnitude faster. In the dynamic setting, we provide a recursive implementation of the estimator and demonstrate its utility in the context of tracking maneuvering targets from position and acceleration measurements. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,803
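The property this abstract highlights, estimation from second-order moments alone, can be illustrated with a scalar sketch. The code below computes only the plain LMMSE estimate; the partially linear estimator additionally conditions nonlinearly on the second measurement set Z, which is not reproduced here, and all numbers are made up.

```python
# Scalar sketch of estimation from second-order statistics only.
# This is plain LMMSE, the linear-in-Y part of the PLMMSE idea; the
# nonlinear dependence on Z in the paper is not reproduced here.

def lmmse(y, mu_x, mu_y, cov_xy, var_y):
    """Linear MMSE estimate of x from y: mu_x + (cov_xy / var_y) * (y - mu_y)."""
    return mu_x + (cov_xy / var_y) * (y - mu_y)

# Toy numbers: x and y positively correlated, y observed above its mean,
# so the estimate of x is pulled above its mean as well.
x_hat = lmmse(y=2.0, mu_x=0.0, mu_y=1.0, cov_xy=0.5, var_y=1.0)
```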
2501.17628 | Dual Invariance Self-training for Reliable Semi-supervised Surgical
Phase Recognition | Accurate surgical phase recognition is crucial for advancing computer-assisted interventions, yet the scarcity of labeled data hinders training reliable deep learning models. Semi-supervised learning (SSL), particularly with pseudo-labeling, shows promise over fully supervised methods but often lacks reliable pseudo-label assessment mechanisms. To address this gap, we propose a novel SSL framework, Dual Invariance Self-Training (DIST), that incorporates both Temporal and Transformation Invariance to enhance surgical phase recognition. Our two-step self-training process dynamically selects reliable pseudo-labels, ensuring robust pseudo-supervision. Our approach mitigates the risk of noisy pseudo-labels, steering decision boundaries toward true data distribution and improving generalization to unseen data. Evaluations on Cataract and Cholec80 datasets show our method outperforms state-of-the-art SSL approaches, consistently surpassing both supervised and SSL baselines across various network architectures. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 528,396 |
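The abstract does not spell out the selection criteria, so the sketch below is a guess at what dual-invariance pseudo-label filtering could look like: a pseudo-label is kept only if the prediction agrees with an augmented view (transformation invariance) and with at least one neighbouring frame (temporal invariance). The function name and the toy phase labels are ours, not the authors'.

```python
# Hypothetical dual-invariance pseudo-label selection, inferred from the
# abstract; not the authors' exact DIST criteria.

def select_pseudo_labels(preds, preds_augmented):
    """Keep a frame's pseudo-label only if it survives two checks:
    - transformation invariance: same label on the augmented view, and
    - temporal invariance: same label as at least one neighbouring frame.
    Returns {frame_index: label} for the frames deemed reliable."""
    reliable = {}
    n = len(preds)
    for i, (p, p_aug) in enumerate(zip(preds, preds_augmented)):
        if p != p_aug:                      # fails transformation invariance
            continue
        neighbours = [preds[j] for j in (i - 1, i + 1) if 0 <= j < n]
        if p in neighbours:                 # passes temporal invariance
            reliable[i] = p
    return reliable

# Frame 2 disagrees with its augmented view; frame 4 has no matching neighbour.
preds     = ["incision", "incision", "rhexis", "rhexis", "phaco", "rhexis"]
augmented = ["incision", "incision", "phaco",  "rhexis", "phaco", "rhexis"]
kept = select_pseudo_labels(preds, augmented)
```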
2212.14687 | Multi-step-ahead Stock Price Prediction Using Recurrent Fuzzy Neural
Network and Variational Mode Decomposition | Financial time series prediction, a growing research topic, has attracted considerable interest from scholars, and several approaches have been developed. Among them, decomposition-based methods have achieved promising results. Most decomposition-based methods approximate a single function, which is insufficient for obtaining accurate results. Moreover, most existing research has concentrated on one-step-ahead forecasting, which prevents stock market investors from arriving at the best decisions for the future. This study proposes two novel methods for multi-step-ahead stock price prediction based on the issues outlined. DCT-MFRFNN, a method based on discrete cosine transform (DCT) and multi-functional recurrent fuzzy neural network (MFRFNN), uses DCT to reduce fluctuations in the time series and simplify its structure and MFRFNN to predict the stock price. VMD-MFRFNN, an approach based on variational mode decomposition (VMD) and MFRFNN, brings together their advantages. VMD-MFRFNN consists of two phases. The input signal is decomposed into several intrinsic mode functions (IMFs) using VMD in the decomposition phase. In the prediction and reconstruction phase, each of the IMFs is given to a separate MFRFNN for prediction, and predicted signals are summed to reconstruct the output. Three financial time series, including Hang Seng Index (HSI), Shanghai Stock Exchange (SSE), and Standard & Poor's 500 Index (SPX), are used for the evaluation of the proposed methods. Experimental results indicate that VMD-MFRFNN surpasses other state-of-the-art methods. VMD-MFRFNN, on average, shows 35.93%, 24.88%, and 34.59% decreases in RMSE from the second-best model for HSI, SSE, and SPX, respectively. Also, DCT-MFRFNN outperforms MFRFNN in all experiments. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,701
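The two-phase decompose-predict-reconstruct pipeline can be sketched with stand-ins: a moving-average trend/residual split in place of VMD, and a naive linear extrapolation in place of MFRFNN. Everything below is illustrative, not the authors' implementation.

```python
# Toy decompose-predict-reconstruct pipeline in the spirit of VMD-MFRFNN.
# A moving-average split stands in for VMD; linear extrapolation stands in
# for the MFRFNN predictors. All names and numbers are ours.

def decompose(series, window=3):
    """Split a series into a moving-average trend and a residual component."""
    trend = []
    for i in range(len(series)):
        lo = max(0, i - window + 1)
        seg = series[lo:i + 1]
        trend.append(sum(seg) / len(seg))
    residual = [x - t for x, t in zip(series, trend)]
    return trend, residual

def predict_next(component):
    """Naive one-step forecast: linear extrapolation of the last two points."""
    return 2 * component[-1] - component[-2]

def forecast(series, window=3):
    """Predict each component separately, then sum to reconstruct the output."""
    trend, residual = decompose(series, window)
    return predict_next(trend) + predict_next(residual)

prices = [10.0, 11.0, 12.0, 13.0, 14.0]
next_price = forecast(prices)
```

For a perfectly linear toy series the components recombine to the exact continuation, which is what the sum-of-component-forecasts design is meant to achieve.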
2403.01740 | DEMOS: Dynamic Environment Motion Synthesis in 3D Scenes via Local
Spherical-BEV Perception | Motion synthesis in real-world 3D scenes has recently attracted much attention. However, the static environment assumption made by most current methods usually cannot be satisfied, especially for real-time motion synthesis in scanned point cloud scenes where multiple dynamic objects exist, e.g., moving persons or vehicles. To handle this problem, we propose the first Dynamic Environment MOtion Synthesis framework (DEMOS) to predict future motion instantly according to the current scene, and use it to dynamically update the latent motion for final motion synthesis. Concretely, we propose a Spherical-BEV perception method to extract local scene features that are specifically designed for instant scene-aware motion prediction. Then, we design a time-variant motion blending scheme to fuse the newly predicted motions into the latent motion, and the final motion is derived from the updated latent motions, benefitting both from motion-prior and iterative methods. We unify the data format of two prevailing datasets, PROX and GTA-IM, and use them for motion synthesis evaluation in 3D scenes. We also assess the effectiveness of the proposed method in dynamic environments from GTA-IM and Semantic3D to verify its responsiveness. The results show our method outperforms previous works significantly and handles dynamic environments well. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 434,551
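A minimal sketch of the time-variant blending step the abstract describes, assuming a simple weight schedule (the paper's actual schedule is not given in the abstract, and the vectors here are toy latents, not real motion representations):

```python
# Hypothetical time-variant motion blending: each new prediction is fused
# into the latent motion with a weight that varies over time. The schedule
# (weight growing with step index) is our assumption, not the paper's.

def blend_latent(latent, predicted, step, total_steps):
    """Fuse a new prediction into the latent motion with a time-variant weight."""
    alpha = (step + 1) / total_steps          # later predictions weigh more
    return [(1 - alpha) * l + alpha * p for l, p in zip(latent, predicted)]

latent = [0.0, 0.0]
for t, pred in enumerate([[1.0, 1.0], [2.0, 2.0]]):
    latent = blend_latent(latent, pred, t, total_steps=2)
```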
2104.13236 | UAV-Assisted Underwater Sensor Networks using RF and Optical Wireless
Links | Underwater sensor networks (UWSNs) are of interest to gather data from underwater sensor nodes (SNs) and deliver information to a terrestrial access point (AP) in the uplink transmission, and transfer data from the AP to the SNs in the downlink transmission. In this paper, we investigate a triple-hop UWSN in which autonomous underwater vehicle (AUV) and unmanned aerial vehicle (UAV) relays enable end-to-end communications between the SNs and the AP. It is assumed that the SN--AUV, AUV--UAV, and UAV--AP links are deployed by underwater optical communication (UWOC), free-space optical (FSO), and radio frequency (RF) technologies, respectively. Two scenarios are proposed for the FSO uplink and downlink transmissions between the AUV and the UAV, subject to water-to-air and air-to-water interface impacts; direct transmission scenario (DTS) and retro-reflection scenario (RRS). After providing the channel models and their statistics, the UWSN's outage probability and average bit error rate (BER) are computed. Besides, a tracking procedure is proposed to set up reliable and stable AUV--UAV FSO communications. Through numerical results, it is concluded that the RRS scheme outperforms the DTS one with about 200% (32%) and 80% (17%) better outage probability (average BER) in the uplink and downlink, respectively. It is also shown that the tracking procedure provides on average 480% and 170% improvements in the network's outage probability and average BER, respectively, compared to poorly aligned FSO conditions. The results are verified by applying Monte-Carlo simulations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 232,446
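The abstract's closed-form results are verified by Monte-Carlo simulation; that verification pattern looks roughly like the sketch below, using a single Rayleigh-faded link (exponentially distributed SNR) rather than the paper's triple-hop UWOC/FSO/RF channel. All parameters are made up.

```python
import math
import random

# Monte-Carlo check of an outage probability against its closed form,
# mirroring how analytic results are typically verified by simulation.
# Single exponential-SNR link only; not the paper's composite channel.

def outage_monte_carlo(mean_snr, threshold, trials=200_000, seed=1):
    """Fraction of trials in which the instantaneous SNR falls below threshold."""
    rng = random.Random(seed)
    fails = sum(rng.expovariate(1.0 / mean_snr) < threshold
                for _ in range(trials))
    return fails / trials

def outage_analytic(mean_snr, threshold):
    """Closed form for an exponential SNR: P(SNR < t) = 1 - exp(-t / mean)."""
    return 1.0 - math.exp(-threshold / mean_snr)

sim = outage_monte_carlo(mean_snr=10.0, threshold=2.0)
ana = outage_analytic(mean_snr=10.0, threshold=2.0)
```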
2407.14622 | BOND: Aligning LLMs with Best-of-N Distillation | Reinforcement learning from human feedback (RLHF) is a key driver of quality and safety in state-of-the-art large language models. Yet, a surprisingly simple and strong inference-time strategy is Best-of-N sampling that selects the best generation among N candidates. In this paper, we propose Best-of-N Distillation (BOND), a novel RLHF algorithm that seeks to emulate Best-of-N but without its significant computational overhead at inference time. Specifically, BOND is a distribution matching algorithm that forces the distribution of generations from the policy to get closer to the Best-of-N distribution. We use the Jeffreys divergence (a linear combination of forward and backward KL) to balance between mode-covering and mode-seeking behavior, and derive an iterative formulation that utilizes a moving anchor for efficiency. We demonstrate the effectiveness of our approach and several design choices through experiments on abstractive summarization and Gemma models. Aligning Gemma policies with BOND outperforms other RLHF algorithms by improving results on several benchmarks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 474,845 |
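The two ingredients named in the abstract, Best-of-N selection and the Jeffreys divergence (a linear combination of forward and backward KL), can be sketched over toy discrete distributions. The dict distributions and the integer "candidates" below are made-up stand-ins for LLM generations, not anything from the paper.

```python
import math
import random

# Sketch of Best-of-N sampling and the Jeffreys divergence matching
# objective, over toy discrete distributions rather than language models.

def best_of_n(candidates, reward, n):
    """Return the highest-reward generation among n sampled candidates."""
    sample = random.sample(candidates, n)
    return max(sample, key=reward)

def kl(p, q):
    """KL(p || q) for discrete distributions given as {outcome: prob} dicts."""
    return sum(p[x] * math.log(p[x] / q[x]) for x in p if p[x] > 0)

def jeffreys(p, q, beta=0.5):
    """Linear combination of forward and backward KL, as in the abstract."""
    return beta * kl(p, q) + (1 - beta) * kl(q, p)

policy    = {"a": 0.5, "b": 0.3, "c": 0.2}
best_of_4 = {"a": 0.2, "b": 0.3, "c": 0.5}  # mass shifted to high-reward outputs
gap = jeffreys(policy, best_of_4)           # what BOND would drive toward zero
```

With beta = 0.5 the divergence is symmetric; beta trades off the mode-covering (forward KL) and mode-seeking (backward KL) behaviors the abstract mentions.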
1804.08361 | DenseFuse: A Fusion Approach to Infrared and Visible Images | In this paper, we present a novel deep learning architecture for the infrared and visible image fusion problem. In contrast to conventional convolutional networks, our encoding network combines convolutional layers, a fusion layer, and a dense block in which the output of each layer is connected to every other layer. We use this architecture to extract more useful features from source images during the encoding process. Two fusion layers (fusion strategies) are designed to fuse these features. Finally, the fused image is reconstructed by the decoder. Compared with existing fusion methods, the proposed fusion method achieves state-of-the-art performance in objective and subjective assessment. Code and pre-trained models are available at https://github.com/hli1221/imagefusion_densefuse | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 95,751
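The dense-block connectivity the abstract describes, where each layer's input includes the outputs of all earlier layers, can be sketched without convolutions. The toy "layers" below operate on flat feature lists; this is the wiring pattern only, not the DenseFuse network.

```python
# Sketch of dense connectivity: layer k receives the concatenation of the
# input and every preceding layer's output. Toy functions stand in for
# convolutional layers.

def dense_block(features, layers):
    """Run `layers` so each one sees all earlier outputs; return the final
    concatenated feature list."""
    collected = list(features)
    for layer in layers:
        collected = collected + layer(collected)
    return collected

# Two toy layers: one emits the running sum, the other the running max.
out = dense_block([1.0, 2.0], [lambda xs: [sum(xs)], lambda xs: [max(xs)]])
```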
2204.09350 | Placement and Resource Allocation of Wireless-Powered Multiantenna UAV
for Energy-Efficient Multiuser NOMA | This paper investigates a new downlink nonorthogonal multiple access (NOMA) system, where a multiantenna unmanned aerial vehicle (UAV) is powered by wireless power transfer (WPT) and serves as the base station for multiple pairs of ground users (GUs) running NOMA in each pair. An energy efficiency (EE) maximization problem is formulated to jointly optimize the WPT time and the placement for the UAV, and the allocation of the UAV's transmit power between different NOMA user pairs and within each pair. To efficiently solve this nonconvex problem, we decompose the problem into three subproblems using block coordinate descent. For the subproblem of intra-pair power allocation within each NOMA user pair, we construct a supermodular game with confirmed convergence to a Nash equilibrium. Given the intra-pair power allocation, successive convex approximation is applied to convexify and solve the subproblem of WPT time allocation and inter-pair power allocation between the user pairs. Finally, we solve the subproblem of UAV placement by using the Lagrange multiplier method. Simulations show that our approach can substantially outperform its alternatives that do not use NOMA and WPT techniques or that do not optimize the UAV location. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 292,408 |
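The intra-pair power split being optimized can be illustrated with a standard two-user downlink NOMA rate computation. The channel gains, power budget, and noise below are made-up numbers, and the paper's supermodular-game solver is not reproduced.

```python
import math

# Toy two-user downlink NOMA pair: the weak (far) user decodes its signal
# treating the strong user's as interference; the strong user applies
# successive interference cancellation (SIC) first. Numbers are illustrative.

def noma_pair_rates(p_total, alpha_weak, g_weak, g_strong, noise=1e-3):
    """Achievable rates (bits/s/Hz) for a NOMA pair.
    alpha_weak is the fraction of p_total allocated to the weak user."""
    p_weak = alpha_weak * p_total
    p_strong = (1 - alpha_weak) * p_total
    r_weak = math.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    r_strong = math.log2(1 + p_strong * g_strong / noise)
    return r_weak, r_strong

# Giving most of the power to the weaker channel is the standard NOMA choice.
r_weak, r_strong = noma_pair_rates(p_total=1.0, alpha_weak=0.8,
                                   g_weak=0.01, g_strong=0.1)
```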
2403.04206 | GRAWA: Gradient-based Weighted Averaging for Distributed Training of
Deep Learning Models | We study distributed training of deep learning models in time-constrained environments. We propose a new algorithm that periodically pulls workers towards the center variable computed as a weighted average of workers, where the weights are inversely proportional to the gradient norms of the workers such that recovering the flat regions in the optimization landscape is prioritized. We develop two asynchronous variants of the proposed algorithm that we call Model-level and Layer-level Gradient-based Weighted Averaging (resp. MGRAWA and LGRAWA), which differ in terms of the weighting scheme that is either done with respect to the entire model or is applied layer-wise. On the theoretical front, we prove the convergence guarantee for the proposed approach in both convex and non-convex settings. We then experimentally demonstrate that our algorithms outperform the competitor methods by achieving faster convergence and recovering better quality and flatter local optima. We also carry out an ablation study to analyze the scalability of the proposed algorithms in more crowded distributed training environments. Finally, we report that our approach requires less frequent communication and fewer distributed updates compared to the state-of-the-art baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 435,509 |
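The center variable described in the abstract is a weighted average of worker parameters with weights inversely proportional to gradient norms, so workers in flat regions (small gradients) dominate. A scalar-toy sketch, assuming a simple normalized weighting:

```python
# Minimal sketch of a GRAWA-style center variable over toy parameter vectors.
# The eps guard and normalization are our choices; real workers would hold
# full model tensors, not short lists.

def grawa_center(worker_params, grad_norms, eps=1e-12):
    """Center variable: sum_i w_i * theta_i with w_i proportional to 1/||g_i||."""
    inv = [1.0 / (g + eps) for g in grad_norms]
    total = sum(inv)
    weights = [w / total for w in inv]
    dim = len(worker_params[0])
    return [sum(w * theta[d] for w, theta in zip(weights, worker_params))
            for d in range(dim)]

# Worker 0 sits in a flat region (tiny gradient) and so pulls the center to it.
center = grawa_center([[1.0, 1.0], [9.0, 9.0]], grad_norms=[0.1, 0.9])
```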
2004.04704 | Heuristics for Link Prediction in Multiplex Networks | Link prediction, or the inference of future or missing connections between entities, is a well-studied problem in network analysis. A multitude of heuristics exist for link prediction in ordinary networks with a single type of connection. However, link prediction in multiplex networks, or networks with multiple types of connections, is not a well understood problem. We propose a novel general framework and three families of heuristics for multiplex network link prediction that are simple, interpretable, and take advantage of the rich connection type correlation structure that exists in many real world networks. We further derive a theoretical threshold for determining when to use a different connection type based on the number of links that overlap with an Erdos-Renyi random graph. Through experiments with simulated and real world scientific collaboration, transportation and global trade networks, we demonstrate that the proposed heuristics show increased performance with the richness of connection type correlation structure and significantly outperform their baseline heuristics for ordinary networks with a single connection type. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 171,967 |
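A minimal sketch of a multiplex common-neighbours heuristic in the spirit of the abstract: score a candidate pair in the target layer, then add down-weighted evidence from the other layers. The cross-layer weighting here is our illustration, not one of the paper's three heuristic families.

```python
# Toy multiplex link prediction: common-neighbour counts aggregated across
# connection types. Adjacency is {node: set_of_neighbours} per layer.

def common_neighbours(adj, u, v):
    """Number of nodes adjacent to both u and v in one layer."""
    return len(adj.get(u, set()) & adj.get(v, set()))

def multiplex_score(layers, target, u, v, cross_weight=0.5):
    """Target-layer common neighbours plus down-weighted counts from every
    other layer of the multiplex network."""
    score = common_neighbours(layers[target], u, v)
    for name, adj in layers.items():
        if name != target:
            score += cross_weight * common_neighbours(adj, u, v)
    return score

layers = {
    "collaboration": {"a": {"b", "c"}, "b": {"a", "c"},
                      "c": {"a", "b", "d"}, "d": {"c"}},
    "trade":         {"a": {"c", "d"}, "d": {"a", "c"}, "c": {"a", "d"}},
}
s = multiplex_score(layers, "collaboration", "a", "d")
```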
2407.02292 | Strategic Demand-Planning in Wireless Networks: Can Generative-AI Save
Spectrum and Energy? | Generative-AI (GenAI), a novel technology capable of producing various types of outputs, including text, images, and videos, offers significant potential for wireless communications. This article introduces the concept of strategic demand-planning through demand-labeling, demand-shaping, and demand-rescheduling. Accordingly, GenAI is proposed as a powerful tool to facilitate demand-shaping in wireless networks. More specifically, GenAI is used to compress and convert content of various types (e.g., from a higher bandwidth mode to a lower one, such as from video to text), which subsequently enhances the performance of wireless networks in various usage scenarios, such as cell-switching, user association and load balancing, interference management, as well as disasters and unusual gatherings. Therefore, GenAI can help save energy and spectrum in wireless networks. With recent advancements in AI, including sophisticated algorithms like large language models and the development of more powerful hardware built exclusively for AI tasks, such as AI accelerators, the concept of demand-planning, particularly demand-shaping through GenAI, becomes increasingly relevant. Furthermore, recent efforts to make GenAI accessible on devices, such as user terminals, make the implementation of this concept even more straightforward and feasible. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 469,668