Dataset schema (one record per row):
id: string (length 9–16)
title: string (length 4–278)
abstract: string (length 3–4.08k)
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
__index_level_0__: int64 (0–541k)
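Each record carries one boolean flag per category column, in the column order listed above. A minimal sketch of decoding those flags into a label list (plain Python dicts; the example values are taken from the first record below and are illustrative only):

```python
# Column order of the boolean category flags, matching the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def categories(record: dict) -> list[str]:
    """Return the category labels whose flag is true, in schema column order."""
    return [col for col in CATEGORY_COLUMNS if record.get(col)]

# Illustrative record: arXiv 2410.14617 (first row below) has
# cs.HC, cs.SI and cs.CY set to true and every other flag false.
example = {col: False for col in CATEGORY_COLUMNS}
example.update({"id": "2410.14617", "cs.HC": True, "cs.SI": True, "cs.CY": True})
print(categories(example))  # → ['cs.HC', 'cs.SI', 'cs.CY']
```

The same decoding is applied to each record below, replacing the raw true/false runs with an explicit `categories:` field.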
2410.14617
On the Use of Proxies in Political Ad Targeting
Detailed targeting of advertisements has long been one of the core offerings of online platforms. Unfortunately, malicious advertisers have frequently abused such targeting features, with results that range from violating civil rights laws to driving division, polarization, and even social unrest. Platforms have often attempted to mitigate this behavior by removing targeting attributes deemed problematic, such as inferred political leaning, religion, or ethnicity. In this work, we examine the effectiveness of these mitigations by collecting data from political ads placed on Facebook in the lead up to the 2022 U.S. midterm elections. We show that major political advertisers circumvented these mitigations by targeting proxy attributes: seemingly innocuous targeting criteria that closely correspond to political and racial divides in American society. We introduce novel methods for directly measuring the skew of various targeting criteria to quantify their effectiveness as proxies, and then examine the scale at which those attributes are used. Our findings have crucial implications for the ongoing discussion on the regulation of political advertising and emphasize the urgency for increased transparency.
categories: cs.HC, cs.SI, cs.CY
__index_level_0__: 500,104

2406.02856
Xmodel-LM Technical Report
We introduce Xmodel-LM, a compact and efficient 1.1B language model pre-trained on around 2 trillion tokens. Trained on our self-built dataset (Xdata), which balances Chinese and English corpora based on downstream task optimization, Xmodel-LM exhibits remarkable performance despite its smaller size. It notably surpasses existing open-source language models of similar scale. Our model checkpoints and code are publicly accessible on GitHub at https://github.com/XiaoduoAILab/XmodelLM.
categories: cs.AI, cs.CL
__index_level_0__: 460,966

2212.02493
Canonical Fields: Self-Supervised Learning of Pose-Canonicalized Neural Fields
Coordinate-based implicit neural networks, or neural fields, have emerged as useful representations of shape and appearance in 3D computer vision. Despite advances, however, it remains challenging to build neural fields for categories of objects without datasets like ShapeNet that provide "canonicalized" object instances that are consistently aligned for their 3D position and orientation (pose). We present Canonical Field Network (CaFi-Net), a self-supervised method to canonicalize the 3D pose of instances from an object category represented as neural fields, specifically neural radiance fields (NeRFs). CaFi-Net directly learns from continuous and noisy radiance fields using a Siamese network architecture that is designed to extract equivariant field features for category-level canonicalization. During inference, our method takes pre-trained neural radiance fields of novel object instances at arbitrary 3D pose and estimates a canonical field with consistent 3D pose across the entire category. Extensive experiments on a new dataset of 1300 NeRF models across 13 object categories show that our method matches or exceeds the performance of 3D point cloud-based methods.
categories: cs.CV
__index_level_0__: 334,801

2201.12649
Transfer Learning for Estimation of Pendubot Angular Position Using Deep Neural Networks
In this paper, a machine learning based approach is introduced to estimate pendubot angular position from its captured images. Initially, a baseline algorithm is introduced to estimate the angle using conventional image processing techniques. The baseline algorithm performs well for the cases that the pendubot is not moving fast. However, when moving quickly due to a free fall, the pendubot appears as a blurred object in the captured image in a way that the baseline algorithm fails to estimate the angle. Consequently, a Deep Neural Network (DNN) based algorithm is introduced to cope with this challenge. The approach relies on the concept of transfer learning to allow the training of the DNN on a very small fine-tuning dataset. The base algorithm is used to create the ground truth labels of the fine-tuning dataset. Experimental results on the held-out evaluation set show that the proposed approach achieves a median absolute error of 0.02 and 0.06 degrees for the sharp and blurry images respectively.
categories: cs.SY, cs.CV
__index_level_0__: 277,722

2409.16849
Exposing Assumptions in AI Benchmarks through Cognitive Modelling
Cultural AI benchmarks often rely on implicit assumptions about measured constructs, leading to vague formulations with poor validity and unclear interrelations. We propose exposing these assumptions using explicit cognitive models formulated as Structural Equation Models. Using cross-lingual alignment transfer as an example, we show how this approach can answer key research questions and identify missing datasets. This framework grounds benchmark construction theoretically and guides dataset development to improve construct measurement. By embracing transparency, we move towards more rigorous, cumulative AI evaluation science, challenging researchers to critically examine their assessment foundations.
categories: cs.AI, cs.CL
__index_level_0__: 491,543

2308.06894
When Provenance Aids and Complicates Reproducibility Judgments
It is well-established that the provenance of a scientific result is important, sometimes more important than the actual result. For computational analyses that involve visualization, this provenance information may contain the steps involved in generating visualizations from raw data. Specifically, data provenance tracks the lineage of data and process provenance tracks the steps executed. In this paper, we argue that the utility of computational provenance may not be as clear-cut as we might like. One common use case for provenance is that the information can be used to reproduce the original result. However, in visualization, the goal is often to communicate results to a user or viewer, and thus the insights obtained are ultimately most important. Viewers can miss important changes or react to unimportant ones. Here, interaction provenance, which tracks a user's actions with a visualization, or insight provenance, which tracks the decision-making process, can help capture what happened but don't remove the issues. In this paper, we present scenarios where provenance impacts reproducibility in different ways. We also explore how provenance and visualizations can be better related.
categories: cs.HC, cs.DB
__index_level_0__: 385,310

2406.17518
Enhancing Explainability of Knowledge Learning Paths: Causal Knowledge Networks
A reliable knowledge structure is a prerequisite for building effective adaptive learning systems and intelligent tutoring systems. Pursuing an explainable and trustworthy knowledge structure, we propose a method for constructing causal knowledge networks. This approach leverages Bayesian networks as a foundation and incorporates causal relationship analysis to derive a causal network. Additionally, we introduce a dependable knowledge-learning path recommendation technique built upon this framework, improving teaching and learning quality while maintaining transparency in the decision-making process.
categories: cs.SI, cs.AI
__index_level_0__: 467,602

2406.16967
Remaining useful life prediction of rolling bearings based on refined composite multi-scale attention entropy and dispersion entropy
Remaining useful life (RUL) prediction based on vibration signals is crucial for ensuring the safe operation and effective health management of rotating machinery. Existing studies often extract health indicators (HI) from time domain and frequency domain features to analyze complex vibration signals, but these features may not accurately capture the degradation process. In this study, we propose a degradation feature extraction method called Fusion of Multi-Modal Multi-Scale Entropy (FMME), which utilizes multi-modal Refined Composite Multi-scale Attention Entropy (RCMATE) and Fluctuation Dispersion Entropy (RCMFDE), to solve the problem that the existing degradation features cannot accurately reflect the degradation process. Firstly, the Empirical Mode Decomposition (EMD) is employed to decompose the dual-channel vibration signals of bearings into multiple modals. The main modals are then selected for further analysis. The subsequent step involves the extraction of RCMATE and RCMFDE from each modal, followed by wavelet denoising. Next, a novel metric is proposed to evaluate the quality of degradation features. The attention entropy and dispersion entropy of the optimal scales under different modals are fused using Laplacian Eigenmap (LE) to obtain the health indicators. Finally, RUL prediction is performed through the similarity of health indicators between fault samples and bearings to be predicted. Experimental results demonstrate that the proposed method yields favorable outcomes across diverse operating conditions.
categories: cs.SY
__index_level_0__: 467,369

2007.09996
Social Learning in Non-Stationary Environments
Potential buyers of a product or service, before making their decisions, tend to read reviews written by previous consumers. We consider Bayesian consumers with heterogeneous preferences, who sequentially decide whether to buy an item of unknown quality, based on previous buyers' reviews. The quality is multi-dimensional and may occasionally vary over time; the reviews are also multi-dimensional. In the simple uni-dimensional and static setting, beliefs about the quality are known to converge to its true value. Our paper extends this result in several ways. First, a multi-dimensional quality is considered, second, rates of convergence are provided, third, a dynamical Markovian model with varying quality is studied. In this dynamical setting the cost of learning is shown to be small.
categories: cs.LG
__index_level_0__: 188,132

2106.01345
Decision Transformer: Reinforcement Learning via Sequence Modeling
We introduce a framework that abstracts Reinforcement Learning (RL) as a sequence modeling problem. This allows us to draw upon the simplicity and scalability of the Transformer architecture, and associated advances in language modeling such as GPT-x and BERT. In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling. Unlike prior approaches to RL that fit value functions or compute policy gradients, Decision Transformer simply outputs the optimal actions by leveraging a causally masked Transformer. By conditioning an autoregressive model on the desired return (reward), past states, and actions, our Decision Transformer model can generate future actions that achieve the desired return. Despite its simplicity, Decision Transformer matches or exceeds the performance of state-of-the-art model-free offline RL baselines on Atari, OpenAI Gym, and Key-to-Door tasks.
categories: cs.AI, cs.LG
__index_level_0__: 238,463

2303.11436
Mind meets machine: Unravelling GPT-4's cognitive psychology
Cognitive psychology delves on understanding perception, attention, memory, language, problem-solving, decision-making, and reasoning. Large language models (LLMs) are emerging as potent tools increasingly capable of performing human-level tasks. The recent development in the form of GPT-4 and its demonstrated success in tasks complex to humans exam and complex problems has led to an increased confidence in the LLMs to become perfect instruments of intelligence. Although GPT-4 report has shown performance on some cognitive psychology tasks, a comprehensive assessment of GPT-4, via the existing well-established datasets is required. In this study, we focus on the evaluation of GPT-4's performance on a set of cognitive psychology datasets such as CommonsenseQA, SuperGLUE, MATH and HANS. In doing so, we understand how GPT-4 processes and integrates cognitive psychology with contextual information, providing insight into the underlying cognitive processes that enable its ability to generate the responses. We show that GPT-4 exhibits a high level of accuracy in cognitive psychology tasks relative to the prior state-of-the-art models. Our results strengthen the already available assessments and confidence on GPT-4's cognitive psychology abilities. It has significant potential to revolutionize the field of AI, by enabling machines to bridge the gap between human and machine reasoning.
categories: cs.AI, cs.CL
__index_level_0__: 352,848

2203.01203
Manipulation of unknown objects via contact configuration regulation
We present an approach to robotic manipulation of unknown objects through regulation of the object's contact configuration: the location, geometry, and mode of all contacts between the object, robot, and environment. A contact configuration constrains the forces and motions that can be applied to the object; however, synthesizing these constraints generally requires knowledge of the object's pose and geometry. We develop an object-agnostic approach for estimation and control that circumvents this need. Our framework directly estimates a set of wrench and motion constraints which it uses to regulate the contact configuration. We use this to reactively manipulate unknown planar objects in the gravity plane. A video describing our work can be found on our project page: http://mcube.mit.edu/research/contactConfig.html.
categories: cs.RO
__index_level_0__: 283,291

1702.01381
Relative Camera Pose Estimation Using Convolutional Neural Networks
This paper presents a convolutional neural network based approach for estimating the relative pose between two cameras. The proposed network takes RGB images from both cameras as input and directly produces the relative rotation and translation as output. The system is trained in an end-to-end manner utilising transfer learning from a large scale classification dataset. The introduced approach is compared with widely used local feature based methods (SURF, ORB) and the results indicate a clear improvement over the baseline. In addition, a variant of the proposed architecture containing a spatial pyramid pooling (SPP) layer is evaluated and shown to further improve the performance.
categories: cs.CV
__index_level_0__: 67,803

2501.11411
Beyond the Hype: Benchmarking LLM-Evolved Heuristics for Bin Packing
Coupling Large Language Models (LLMs) with Evolutionary Algorithms has recently shown significant promise as a technique to design new heuristics that outperform existing methods, particularly in the field of combinatorial optimisation. An escalating arms race is both rapidly producing new heuristics and improving the efficiency of the processes evolving them. However, driven by the desire to quickly demonstrate the superiority of new approaches, evaluation of the new heuristics produced for a specific domain is often cursory: testing on very few datasets in which instances all belong to a specific class from the domain, and on few instances per class. Taking bin-packing as an example, to the best of our knowledge we conduct the first rigorous benchmarking study of new LLM-generated heuristics, comparing them to well-known existing heuristics across a large suite of benchmark instances using three performance metrics. For each heuristic, we then evolve new instances won by the heuristic and perform an instance space analysis to understand where in the feature space each heuristic performs well. We show that most of the LLM heuristics do not generalise well when evaluated across a broad range of benchmarks in contrast to existing simple heuristics, and suggest that any gains from generating very specialist heuristics that only work in small areas of the instance space need to be weighed carefully against the considerable cost of generating these heuristics.
categories: cs.NE
__index_level_0__: 525,913

2308.04553
From Fake to Real: Pretraining on Balanced Synthetic Images to Prevent Spurious Correlations in Image Recognition
Visual recognition models are prone to learning spurious correlations induced by a biased training set where certain conditions $B$ (\eg, Indoors) are over-represented in certain classes $Y$ (\eg, Big Dogs). Synthetic data from off-the-shelf large-scale generative models offers a promising direction to mitigate this issue by augmenting underrepresented subgroups in the real dataset. However, by using a mixed distribution of real and synthetic data, we introduce another source of bias due to distributional differences between synthetic and real data (\eg synthetic artifacts). As we will show, prior work's approach for using synthetic data to resolve the model's bias toward $B$ do not correct the model's bias toward the pair $(B, G)$, where $G$ denotes whether the sample is real or synthetic. Thus, the model could simply learn signals based on the pair $(B, G)$ (\eg, Synthetic Indoors) to make predictions about $Y$ (\eg, Big Dogs). To address this issue, we propose a simple, easy-to-implement, two-step training pipeline that we call From Fake to Real (FFR). The first step of FFR pre-trains a model on balanced synthetic data to learn robust representations across subgroups. In the second step, FFR fine-tunes the model on real data using ERM or common loss-based bias mitigation methods. By training on real and synthetic data separately, FFR does not expose the model to the statistical differences between real and synthetic data and thus avoids the issue of bias toward the pair $(B, G)$. Our experiments show that FFR improves worst group accuracy over the state-of-the-art by up to 20\% over three datasets. Code available: \url{https://github.com/mqraitem/From-Fake-to-Real}
categories: cs.LG, cs.CV
__index_level_0__: 384,455

1709.01643
Learning to Compose Domain-Specific Transformations for Data Augmentation
Data augmentation is a ubiquitous technique for increasing the size of labeled training sets by leveraging task-specific data transformations that preserve class labels. While it is often easy for domain experts to specify individual transformations, constructing and tuning the more sophisticated compositions typically needed to achieve state-of-the-art results is a time-consuming manual task in practice. We propose a method for automating this process by learning a generative sequence model over user-specified transformation functions using a generative adversarial approach. Our method can make use of arbitrary, non-deterministic transformation functions, is robust to misspecified user input, and is trained on unlabeled data. The learned transformation model can then be used to perform data augmentation for any end discriminative model. In our experiments, we show the efficacy of our approach on both image and text datasets, achieving improvements of 4.0 accuracy points on CIFAR-10, 1.4 F1 points on the ACE relation extraction task, and 3.4 accuracy points when using domain-specific transformation operations on a medical imaging dataset as compared to standard heuristic augmentation approaches.
categories: cs.LG, cs.CV
__index_level_0__: 80,122

2006.12819
Distributed Subgraph Enumeration via Backtracking-based Framework
Finding or monitoring subgraph instances that are isomorphic to a given pattern graph in a data graph is a fundamental query operation in many graph analytic applications, such as network motif mining and fraud detection. The state-of-the-art distributed methods are inefficient in communication. They have to shuffle partial matching results during the distributed multiway join. The partial matching results may be much larger than the data graph itself. To overcome the drawback, we develop the Batch-BENU framework (B-BENU) for distributed subgraph enumeration. B-BENU executes a group of local search tasks in parallel. Each task enumerates subgraphs around a vertex in the data graph, guided by a backtracking-based execution plan. B-BENU does not shuffle any partial matching result. Instead, it stores the data graph in a distributed database. Each task queries adjacency sets of the data graph on demand. To support dynamic data graphs, we propose the concept of incremental pattern graphs and turn continuous subgraph enumeration into enumerating incremental pattern graphs at each time step. We develop the Streaming-BENU framework (S-BENU) to enumerate their matches efficiently. We implement B-BENU and S-BENU with the local database cache and the task splitting techniques. The extensive experiments show that B-BENU and S-BENU can scale to big data graphs and complex pattern graphs. They outperform the state-of-the-art methods by up to one and two orders of magnitude, respectively.
categories: cs.DB, Other
__index_level_0__: 183,722

1202.0417
Universal communication part II: channels with memory
Consider communication over a channel whose probabilistic model is completely unknown vector-wise and is not assumed to be stationary. Communication over such channels is challenging because knowing the past does not indicate anything about the future. The existence of reliable feedback and common randomness is assumed. In a previous paper it was shown that the Shannon capacity cannot be attained, in general, if the channel is not known. An alternative notion of "capacity" was defined, as the maximum rate of reliable communication by any block-coding system used over consecutive blocks. This rate was shown to be achievable for the modulo-additive channel with an individual, unknown noise sequence, and not achievable for some channels with memory. In this paper this "capacity" is shown to be achievable for general channel models possibly including memory, as long as this memory fades with time. In other words, there exists a system with feedback and common randomness that, without knowledge of the channel, asymptotically performs as well as any block code, which may be designed knowing the channel. For non-fading memory channels a weaker type of "capacity" is shown to be achievable.
categories: cs.IT
__index_level_0__: 14,079

2211.11412
Resource Allocation for Capacity Optimization in Joint Source-Channel Coding Systems
Benefited from the advances of deep learning (DL) techniques, deep joint source-channel coding (JSCC) has shown its great potential to improve the performance of wireless transmission. However, most of the existing works focus on the DL-based transceiver design of the JSCC model, while ignoring the resource allocation problem in wireless systems. In this paper, we consider a downlink resource allocation problem, where a base station (BS) jointly optimizes the compression ratio (CR) and power allocation as well as resource block (RB) assignment of each user according to the latency and performance constraints to maximize the number of users that successfully receive their requested content with desired quality. To solve this problem, we first decompose it into two subproblems without loss of optimality. The first subproblem is to minimize the required transmission power for each user under given RB allocation. We derive the closed-form expression of the optimal transmit power by searching the maximum feasible compression ratio. The second one aims at maximizing the number of supported users through optimal user-RB pairing, which we solve by utilizing bisection search as well as Karmarka' s algorithm. Simulation results validate the effectiveness of the proposed resource allocation method in terms of the number of satisfied users with given resources.
categories: cs.IT, cs.SY
__index_level_0__: 331,721

2102.04729
A Provably Convergent Information Bottleneck Solution via ADMM
The Information bottleneck (IB) method enables optimizing over the trade-off between compression of data and prediction accuracy of learned representations, and has successfully and robustly been applied to both supervised and unsupervised representation learning problems. However, IB has several limitations. First, the IB problem is hard to optimize. The IB Lagrangian $\mathcal{L}_{IB}:=I(X;Z)-\beta I(Y;Z)$ is non-convex and existing solutions guarantee only local convergence. As a result, the obtained solutions depend on initialization. Second, the evaluation of a solution is also a challenging task. Conventionally, it resorts to characterizing the information plane, that is, plotting $I(Y;Z)$ versus $I(X;Z)$ for all solutions obtained from different initial points. Furthermore, the IB Lagrangian has phase transitions while varying the multiplier $\beta$. At phase transitions, both $I(X;Z)$ and $I(Y;Z)$ increase abruptly and the rate of convergence becomes significantly slow for existing solutions. Recent works with IB adopt variational surrogate bounds to the IB Lagrangian. Although allowing efficient optimization, how close are these surrogates to the IB Lagrangian is not clear. In this work, we solve the IB Lagrangian using augmented Lagrangian methods. With augmented variables, we show that the IB objective can be solved with the alternating direction method of multipliers (ADMM). Different from prior works, we prove that the proposed algorithm is consistently convergent, regardless of the value of $\beta$. Empirically, our gradient-descent-based method results in information plane points that are comparable to those obtained through the conventional Blahut-Arimoto-based solvers and is convergent for a wider range of the penalty coefficient than previous ADMM solvers.
categories: cs.LG, cs.IT
__index_level_0__: 219,209

2409.08831
Breaking reCAPTCHAv2
Our work examines the efficacy of employing advanced machine learning methods to solve captchas from Google's reCAPTCHAv2 system. We evaluate the effectiveness of automated systems in solving captchas by utilizing advanced YOLO models for image segmentation and classification. Our main result is that we can solve 100% of the captchas, while previous work only solved 68-71%. Furthermore, our findings suggest that there is no significant difference in the number of challenges humans and bots must solve to pass the captchas in reCAPTCHAv2. This implies that current AI technologies can exploit advanced image-based captchas. We also look under the hood of reCAPTCHAv2, and find evidence that reCAPTCHAv2 is heavily based on cookie and browser history data when evaluating whether a user is human or not. The code is provided alongside this paper.
categories: cs.CV
__index_level_0__: 488,076

1601.04952
Emergence of Consensus in a Multi-Robot Network: from Abstract Models to Empirical Validation
Consensus dynamics in decentralised multiagent systems are subject to intense studies, and several different models have been proposed and analysed. Among these, the naming game stands out for its simplicity and applicability to a wide range of phenomena and applications, from semiotics to engineering. Despite the wide range of studies available, the implementation of theoretical models in real distributed systems is not always straightforward, as the physical platform imposes several constraints that may have a bearing on the consensus dynamics. In this paper, we investigate the effects of an implementation of the naming game for the kilobot robotic platform, in which we consider concurrent execution of games and physical interferences. Consensus dynamics are analysed in the light of the continuously evolving communication network created by the robots, highlighting how the different regimes crucially depend on the robot density and on their ability to spread widely in the experimental arena. We find that physical interferences reduce the benefits resulting from robot mobility in terms of consensus time, but also result in lower cognitive load for individual agents.
categories: cs.SI, cs.RO, cs.MA
__index_level_0__: 51,081

1509.01886
A Lyapunov Optimization Approach for Green Cellular Networks with Hybrid Energy Supplies
Powering cellular networks with renewable energy sources via energy harvesting (EH) has recently been proposed as a promising solution for green networking. However, with intermittent and random energy arrivals, it is challenging to provide satisfactory quality of service (QoS) in EH networks. To enjoy the greenness brought by EH while overcoming the instability of the renewable energy sources, hybrid energy supply (HES) networks that are powered by both EH and the electric grid have emerged as a new paradigm for green communications. In this paper, we will propose new design methodologies for HES green cellular networks with the help of Lyapunov optimization techniques. The network service cost, which addresses both the grid energy consumption and achievable QoS, is adopted as the performance metric, and it is optimized via base station assignment and power control (BAPC). Our main contribution is a low-complexity online algorithm to minimize the long-term average network service cost, namely, the Lyapunov optimization-based BAPC (LBAPC) algorithm. One main advantage of this algorithm is that the decisions depend only on the instantaneous side information without requiring distribution information of channels and EH processes. To determine the network operation, we only need to solve a deterministic per-time slot problem, for which an efficient inner-outer optimization algorithm is proposed. Moreover, the proposed algorithm is shown to be asymptotically optimal via rigorous analysis. Finally, sample simulation results are presented to verify the theoretical analysis as well as validate the effectiveness of the proposed algorithm.
categories: cs.IT
__index_level_0__: 46,670

2012.11965
Tractable Orders for Direct Access to Ranked Answers of Conjunctive Queries
We study the question of when we can provide direct access to the k-th answer to a Conjunctive Query (CQ) according to a specified order over the answers in time logarithmic in the size of the database, following a preprocessing step that constructs a data structure in time quasilinear in database size. Specifically, we embark on the challenge of identifying the tractable answer orderings, that is, those orders that allow for such complexity guarantees. To better understand the computational challenge at hand, we also investigate the more modest task of providing access to only a single answer (i.e., finding the answer at a given position), a task that we refer to as the selection problem, and ask when it can be performed in quasilinear time. We also explore the question of when selection is indeed easier than ranked direct access. We begin with lexicographic orders. For each of the two problems, we give a decidable characterization (under conventional complexity assumptions) of the class of tractable lexicographic orders for every CQ without self-joins. We then continue to the more general orders by the sum of attribute weights and establish the corresponding decidable characterizations, for each of the two problems, of the tractable CQs without self-joins. Finally, we explore the question of when the satisfaction of Functional Dependencies (FDs) can be utilized for tractability, and establish the corresponding generalizations of our characterizations for every set of unary FDs.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
212,790
1905.04486
Symbolic Monitoring against Specifications Parametric in Time and Data
Monitoring consists in deciding whether a log meets a given specification. In this work, we propose an automata-based formalism to monitor logs in the form of actions associated with time stamps and arbitrary data values over infinite domains. Our formalism uses both timing parameters and data parameters, and is able to output answers symbolic in these parameters and in the log segments where the property is satisfied or violated. We implemented our approach in an ad-hoc prototype, SyMon, and experiments show that its high expressive power still allows for efficient online monitoring.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
130,483
1910.02776
Biologically-Inspired Spatial Neural Networks
We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions. To simulate properties of biological systems we add the costs penalizing long connections and the proximity of neurons in a two-dimensional space. Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task. This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
148,337
2412.18723
MRI Reconstruction with Regularized 3D Diffusion Model (R3DM)
Magnetic Resonance Imaging (MRI) is a powerful imaging technique widely used for visualizing structures within the human body and in other fields such as plant sciences. However, there is a demand to develop fast 3D-MRI reconstruction algorithms to show the fine structure of objects from under-sampled acquisition data, i.e., k-space data. This emphasizes the need for efficient solutions that can handle limited input while maintaining high-quality imaging. In contrast to previous methods using only 2D, we propose a 3D MRI reconstruction method that leverages a regularized 3D diffusion model combined with an optimization method. By incorporating diffusion-based priors, our method improves image quality, reduces noise, and enhances the overall fidelity of 3D MRI reconstructions. We conduct a comprehensive experimental analysis on clinical and plant science MRI datasets. To evaluate the algorithm's effectiveness for under-sampled k-space data, we also demonstrate its reconstruction performance with several undersampling patterns, as well as with in- and out-of-distribution pre-trained data. In experiments, we show that our method improves upon tested competitors.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
520,541
2110.09054
A Stable FDTD Subgridding Scheme with SBP-SAT for Transient Electromagnetic Analysis
We propose a provably stable FDTD subgridding method for accurate and efficient transient electromagnetic analysis. In the proposed method, several field components are properly added to the boundaries of Yee's grid to make sure that the discrete operators meet the summation-by-parts (SBP) property. Then, by incorporating the simultaneous approximation terms (SATs) into the finite-difference time-domain (FDTD) method, the proposed FDTD subgridding method mimics the energy estimate of the continuous Maxwell's equations at the semi-discrete level to guarantee its stability. Further, to couple multiple mesh blocks with different mesh sizes, the interpolation matrices are also derived. The proposed FDTD subgridding method is accurate, efficient, and easy to implement and integrate into existing FDTD codes with only simple modifications. Finally, three numerical examples with fine structures are carried out to validate the effectiveness of the proposed method.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
261,663
2412.12994
Model agnostic signal encoding by leaky integrate and fire, performance and uncertainty
Integrate and fire is a resource efficient time-encoding mechanism that summarizes into a signed spike train those time intervals where a signal's charge exceeds a certain threshold. We analyze the IF encoder in terms of a very general notion of approximate bandwidth, which is shared by most commonly-used signal models. This complements results on exact encoding that may be overly adapted to a particular signal model. We take into account, possibly for the first time, the effect of uncertainty in the exact location of the spikes (as may arise by decimation), uncertainty of integration leakage (as may arise in realistic manufacturing), and boundary effects inherent to finite periods of exposure to the measurement device. The analysis is done by means of a concrete bandwidth-based Ansatz that can also be useful to initialize more sophisticated model specific reconstruction algorithms, and uses the earth mover's (Wasserstein) distance to measure spike discrepancy.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
518,118
1910.03385
Linguistically Informed Relation Extraction and Neural Architectures for Nested Named Entity Recognition in BioNLP-OST 2019
Named Entity Recognition (NER) and Relation Extraction (RE) are essential tools in distilling knowledge from biomedical literature. This paper presents our findings from participating in BioNLP Shared Tasks 2019. We addressed Named Entity Recognition including nested entities extraction, Entity Normalization and Relation Extraction. Our proposed approach for Named Entity Recognition generalizes to different languages, and we have shown its effectiveness for English and Spanish text. We investigated linguistic features, hybrid loss including ranking and Conditional Random Fields (CRF), multi-task objective and token-level ensembling strategy to improve NER. We employed dictionary based fuzzy and semantic search to perform Entity Normalization. Finally, our RE system employed Support Vector Machine (SVM) with linguistic features. Our NER submission (team:MIC-CIS) ranked first in BB-2019 norm+NER task with standard error rate (SER) of 0.7159 and showed competitive performance on PharmaCo NER task with F1-score of 0.8662. Our RE system ranked first in the SeeDev-binary Relation Extraction Task with F1-score of 0.3738.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
148,483
1702.04850
Coded TeraSort
We focus on sorting, which is the building block of many machine learning algorithms, and propose a novel distributed sorting algorithm, named Coded TeraSort, which substantially improves the execution time of the TeraSort benchmark in Hadoop MapReduce. The key idea of Coded TeraSort is to impose structured redundancy in data, in order to enable in-network coding opportunities that overcome the data shuffling bottleneck of TeraSort. We empirically evaluate the performance of the Coded TeraSort algorithm on Amazon EC2 clusters, and demonstrate that it achieves 1.97x - 3.39x speedup, compared with TeraSort, for typical settings of interest.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
68,321
2401.02402
3D Open-Vocabulary Panoptic Segmentation with 2D-3D Vision-Language Distillation
3D panoptic segmentation is a challenging perception task, especially in autonomous driving. It aims to predict both semantic and instance annotations for 3D points in a scene. Although prior 3D panoptic segmentation approaches have achieved great performance on closed-set benchmarks, generalizing these approaches to unseen things and unseen stuff categories remains an open problem. For unseen object categories, 2D open-vocabulary segmentation has achieved promising results that solely rely on frozen CLIP backbones and ensembling multiple classification outputs. However, we find that simply extending these 2D models to 3D does not guarantee good performance due to poor per-mask classification quality, especially for novel stuff categories. In this paper, we propose the first method to tackle 3D open-vocabulary panoptic segmentation. Our model takes advantage of the fusion between learnable LiDAR features and dense frozen vision CLIP features, using a single classification head to make predictions for both base and novel classes. To further improve the classification performance on novel classes and leverage the CLIP model, we propose two novel loss functions: object-level distillation loss and voxel-level distillation loss. Our experiments on the nuScenes and SemanticKITTI datasets show that our method outperforms the strong baseline by a large margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
419,701
1802.07528
Learning Integral Representations of Gaussian Processes
We propose a representation of Gaussian processes (GPs) based on powers of the integral operator defined by a kernel function; we call these stochastic processes integral Gaussian processes (IGPs). Sample paths from IGPs are functions contained within the reproducing kernel Hilbert space (RKHS) defined by the kernel function; in contrast, sample paths from the standard GP are not functions within the RKHS. We develop computationally efficient non-parametric regression models based on IGPs. The main innovation in our regression algorithm is the construction of a low dimensional subspace that captures the information most relevant to explaining variation in the response. We use ideas from supervised dimension reduction to compute this subspace. The result of using the construction we propose involves significant improvements in the computational complexity of estimating kernel hyper-parameters as well as reducing the prediction variance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
90,914
2501.03250
Machine Learning and Deep Learning Techniques used in Cybersecurity and Digital Forensics: a Review
In the fast-paced realms of cybersecurity and digital forensics, machine learning (ML) and deep learning (DL) have emerged as game-changing technologies that introduce methods to identify, stop, and analyze cyber risks. This review presents an overview of the ML and DL approaches used in these fields, showcasing their advantages, drawbacks, and possibilities. It covers a range of AI techniques used in spotting intrusions in systems and classifying malware to prevent cybersecurity attacks, detect anomalies and enhance resilience. This study concludes by highlighting areas where further research is needed and suggesting ways to create transparent and scalable ML and DL solutions that are suited to the evolving landscape of cybersecurity and digital forensics.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
522,808
1911.06500
A Survey of Algorithms for Distributed Charging Control of Electric Vehicles in Smart Grid
Electric vehicles (EVs) are an eco-friendly alternative to vehicles with internal combustion engines. Despite their environmental benefits, the massive electricity demand imposed by the anticipated proliferation of EVs could jeopardize the secure and economic operation of the power grid. Hence, proper strategies for charging coordination will be indispensable to the future power grid. Coordinated EV charging schemes can be implemented as centralized, decentralized, and hierarchical systems, with the last two, referred to as distributed charging control systems. This paper reviews the recent literature of distributed charging control schemes, where the computations are distributed across multiple EVs and/or aggregators. First, we categorize optimization problems for EV charging in terms of operational aspects and cost aspects. Then under each category, we provide a comprehensive discussion on algorithms for distributed EV charge scheduling, considering the perspectives of the grid operator, the aggregator, and the EV user. We also discuss how certain algorithms proposed in the literature cope with various uncertainties inherent to distributed EV charging control problems. Finally, we outline several research directions that require further attention.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
153,557
2407.08981
Joint Load and Capacity Scheduling for Flexible Radio Resource Management of High-Throughput Satellites
This work first explores using flexible beam-user mapping to optimize the beam service range and beam position, in order to adapt the non-uniform traffic demand to the offer in high-throughput satellite (HTS) systems. Second, on this basis, joint flexible bandwidth allocation is adopted to adapt the offer to demand at the same time. This strategy allows both beam capacity and load to be adjusted to cope with the traffic demand. The new information generated during the load transfer process of flexible beam-user mapping can guide the direction of beam optimization. Then, the proposed strategies are tested against joint power-bandwidth allocation and joint optimization of bandwidth and beam-user mapping under different traffic profiles. Numerical results are obtained for various non-uniform traffic distributions to evaluate the performance of the solutions. Results show that flexible joint load and capacity scheduling is superior to other strategies in terms of demand satisfaction with acceptable complexity. Our source code along with results are available at crystal-zwz/HTS_RRM_Joint-Load-and-Capacity-Scheduling (github.com).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
472,394
2111.03941
Time Discretization-Invariant Safe Action Repetition for Policy Gradient Methods
In reinforcement learning, continuous time is often discretized by a time scale $\delta$, to which the resulting performance is known to be highly sensitive. In this work, we seek to find a $\delta$-invariant algorithm for policy gradient (PG) methods, which performs well regardless of the value of $\delta$. We first identify the underlying reasons that cause PG methods to fail as $\delta \to 0$, proving that the variance of the PG estimator can diverge to infinity in stochastic environments under a certain assumption of stochasticity. While durative actions or action repetition can be employed to have $\delta$-invariance, previous action repetition methods cannot immediately react to unexpected situations in stochastic environments. We thus propose a novel $\delta$-invariant method named Safe Action Repetition (SAR) applicable to any existing PG algorithm. SAR can handle the stochasticity of environments by adaptively reacting to changes in states during action repetition. We empirically show that our method is not only $\delta$-invariant but also robust to stochasticity, outperforming previous $\delta$-invariant approaches on eight MuJoCo environments with both deterministic and stochastic settings. Our code is available at https://vision.snu.ac.kr/projects/sar.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
265,324
2402.14527
Federated Learning on Transcriptomic Data: Model Quality and Performance Trade-Offs
Machine learning on large-scale genomic or transcriptomic data is important for many novel health applications. For example, precision medicine tailors medical treatments to patients on the basis of individual biomarkers, cellular and molecular states, etc. However, the data required is sensitive, voluminous, heterogeneous, and typically distributed across locations where dedicated machine learning hardware is not available. Due to privacy and regulatory reasons, it is also problematic to aggregate all data at a trusted third party. Federated learning is a promising solution to this dilemma, because it enables decentralized, collaborative machine learning without exchanging raw data. In this paper, we perform comparative experiments with the federated learning frameworks TensorFlow Federated and Flower. Our test case is the training of disease prognosis and cell type classification models. We train the models with distributed transcriptomic data, considering both data heterogeneity and architectural heterogeneity. We measure model quality, robustness against privacy-enhancing noise, computational performance and resource overhead. Each of the federated learning frameworks has different strengths. However, our experiments confirm that both frameworks can readily build models on transcriptomic data, without transferring personal raw data to a third party with abundant computational resources.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
431,725
2405.14964
Black Start Operation of Grid-Forming Converters Based on Generalized Three-phase Droop Control Under Unbalanced Conditions
This paper focuses on the challenging task of bottom-up restoration in a complete blackout system using Grid-forming (GFM) converters. Challenges arise due to the limited current capability of power converters, resulting in distinct dynamic responses and fault current characteristics compared to synchronous generators. Additionally, GFM control needs to address the presence of unbalanced conditions commonly found in distribution systems. To address these challenges, this paper explores the black start capability of GFM converters with a generalized three-phase GFM droop control. This approach integrates GFM controls individually for each phase, incorporating phase-balancing feedback and enabling current limiting for each phase during unbalanced faults or overloading. The introduction of a phase-balancing gain provides flexibility to trade-off between voltage and power imbalances. The study further investigates bottom-up black start operations using GFM converters, incorporating advanced load relays into breakers for gradual load energization without central coordination. The effectiveness of bottom-up black start operations with GFM converters, utilizing the generalized three-phase GFM droop, is evaluated through electromagnetic transient (EMT) simulations in MATLAB/Simulink. The results confirm the performance and effectiveness of this approach in achieving successful black start operations under unbalanced conditions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
456,680
2010.03505
Learning from demonstration using products of experts: applications to manipulation and task prioritization
Probability distributions are key components of many learning from demonstration (LfD) approaches. While the configuration of a manipulator is defined by its joint angles, poses are often best explained within several task spaces. In many approaches, distributions within relevant task spaces are learned independently and only combined at the control level. This simplification implies several problems that are addressed in this work. We show that the fusion of models in different task spaces can be expressed as a product of experts (PoE), where the probabilities of the models are multiplied and renormalized so that it becomes a proper distribution of joint angles. Multiple experiments are presented to show that learning the different models jointly in the PoE framework significantly improves the quality of the model. The proposed approach particularly stands out when the robot has to learn competitive or hierarchical objectives. Training the model jointly usually relies on contrastive divergence, which requires costly approximations that can affect performance. We propose an alternative strategy using variational inference and mixture model approximations. In particular, we show that the proposed approach can be extended to PoE with a nullspace structure (PoENS), where the model is able to recover tasks that are masked by the resolution of higher-level objectives.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
199,420
1712.06568
ES Is More Than Just a Traditional Finite-Difference Approximator
An evolution strategy (ES) variant based on a simplification of a natural evolution strategy recently attracted attention because it performs surprisingly well in challenging deep reinforcement learning domains. It searches for neural network parameters by generating perturbations to the current set of parameters, checking their performance, and moving in the aggregate direction of higher reward. Because it resembles a traditional finite-difference approximation of the reward gradient, it can naturally be confused with one. However, this ES optimizes for a different gradient than just reward: It optimizes for the average reward of the entire population, thereby seeking parameters that are robust to perturbation. This difference can channel ES into distinct areas of the search space relative to gradient descent, and also consequently to networks with distinct properties. This unique robustness-seeking property, and its consequences for optimization, are demonstrated in several domains. They include humanoid locomotion, where networks from policy gradient-based reinforcement learning are significantly less robust to parameter perturbation than ES-based policies solving the same task. While the implications of such robustness and robustness-seeking remain open to further study, this work's main contribution is to highlight such differences and their potential importance.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
86,908
2310.10477
Gaining Wisdom from Setbacks: Aligning Large Language Models via Mistake Analysis
The rapid development of large language models (LLMs) has not only provided numerous opportunities but also presented significant challenges. This becomes particularly evident when LLMs inadvertently generate harmful or toxic content, either unintentionally or because of intentional inducement. Existing alignment methods usually direct LLMs toward the favorable outcomes by utilizing human-annotated, flawless instruction-response pairs. Conversely, this study proposes a novel alignment technique based on mistake analysis, which deliberately exposes LLMs to erroneous content to learn the reasons for mistakes and how to avoid them. In this case, mistakes are repurposed into valuable data for alignment, effectively helping to avoid the production of erroneous responses. Without external models or human annotations, our method leverages a model's intrinsic ability to discern undesirable mistakes and improves the safety of its generated responses. Experimental results reveal that our method outperforms existing alignment approaches in enhancing model safety while maintaining the overall utility.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
400,238
2411.05111
Location-Based Output Adaptation for Enhanced Actuator Performance using Frequency Sweep Analysis
This paper presents a methodology for enhancing actuator performance in older devices or retrofitting devices with haptic feedback actuators. The approach is versatile, accommodating various actuator and mounting positions. Through a frequency sweep analysis, the system's characteristics are captured, enabling the creation of location-specific transfer functions to accurately transform input signals into command signals for a precise output at the target location. This method offers fast and simple collection of the system properties and generation of location-specific signals.
true
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
506,551
2403.05408
FedFMS: Exploring Federated Foundation Models for Medical Image Segmentation
Medical image segmentation is crucial for clinical diagnosis. The Segment Anything Model (SAM) serves as a powerful foundation model for visual segmentation and can be adapted for medical image segmentation. However, medical imaging data typically contain privacy-sensitive information, making it challenging to train foundation models with centralized storage and sharing. To date, there are few foundation models tailored for medical image deployment within the federated learning framework, and the segmentation performance, as well as the efficiency of communication and training, remain unexplored. In response to these issues, we developed Federated Foundation models for Medical image Segmentation (FedFMS), which includes the Federated SAM (FedSAM) and a communication and training-efficient Federated SAM with Medical SAM Adapter (FedMSA). Comprehensive experiments on diverse datasets are conducted to investigate the performance disparities between centralized training and federated learning across various configurations of FedFMS. The experiments revealed that FedFMS could achieve performance comparable to models trained via centralized training methods while maintaining privacy. Furthermore, FedMSA demonstrated the potential to enhance communication and training efficiency. Our model implementation codes are available at https://github.com/LIU-YUXI/FedFMS.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
435,998
2304.02333
Reactive Task Allocation for Balanced Servicing of Multiple Task Queues
In this article, we propose a reactive task allocation architecture for a multi-agent system for scenarios where the tasks arrive at random times and are grouped into multiple queues. Two stage tasks are considered where every task has a beginning, an intermediate and a final part, typical in pick-and-drop and inspect-and-report scenarios. A centralized auction-based task allocation system is proposed, where an auction system takes into consideration bids submitted by the agents for individual tasks, current length of the queues and the waiting times of the tasks in the queues to decide on a task allocation strategy. The costs associated with these considerations, along with the constraints of having unique mappings between tasks and agents and constraints on the maximum number of agents that can be assigned to a queue, results in a Linear Integer Program (LIP) that is solved using the SCIP solver. For the scenario where the queue lengths are penalized but not the waiting times, we demonstrate that the auction system allocates tasks in a manner that all the queue lengths become constant, which is termed balancing. For the scenarios where both the costs are considered, we qualitatively analyse the effect of the choice of the relative weights on the resulting task allocation and provide guidelines for the choice of the weights. We present simulation results that illustrate the balanced allocation of tasks and validate the analysis for the trade-off between the costs related to queue lengths and task waiting times.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
356,406
2409.11277
Machine Learning and Theory Ladenness -- A Phenomenological Account
In recent years, the dissemination of machine learning (ML) methodologies in scientific research has prompted discussions on theory ladenness. More specifically, the issue of theory ladenness has re-emerged as questions about whether and how ML models (MLMs) and ML modelling strategies are impacted by the domain theory of the scientific field in which ML is used and implemented (e.g., physics, chemistry, biology). On the one hand, some have argued that there is no difference between traditional (pre-ML) and ML-assisted science. In both cases, theory plays an essential and unavoidable role in the analysis of phenomena and the construction and use of models. Others have argued instead that ML methodologies and models are theory independent and, in some cases, even theory free. In this article, we argue that both positions are overly simplistic and do not advance our understanding of the interplay between ML methods and domain theories. Specifically, we provide an analysis of theory ladenness in ML-assisted science. Our analysis reveals that, while the construction of MLMs can be relatively independent of domain theory, the practical implementation and interpretation of these models within a given specific domain still relies on fundamental theoretical assumptions and background knowledge.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
489,088
2409.15734
Trust-Region Sequential Quadratic Programming for Stochastic Optimization with Random Models
In this work, we consider solving optimization problems with a stochastic objective and deterministic equality constraints. We propose a Trust-Region Sequential Quadratic Programming method to find both first- and second-order stationary points. Our method utilizes a random model to represent the objective function, which is constructed from stochastic observations of the objective and is designed to satisfy proper adaptive accuracy conditions with a high but fixed probability. To converge to first-order stationary points, our method computes a gradient step in each iteration defined by minimizing a quadratic approximation of the objective subject to a (relaxed) linear approximation of the problem constraints and a trust-region constraint. To converge to second-order stationary points, our method additionally computes an eigen step to explore the negative curvature of the reduced Hessian matrix, as well as a second-order correction step to address the potential Maratos effect, which arises due to the nonlinearity of the problem constraints. Such an effect may impede the method from moving away from saddle points. Both gradient and eigen step computations leverage a novel parameter-free decomposition of the step and the trust-region radius, accounting for the proportions among the feasibility residual, optimality residual, and negative curvature. We establish global almost sure first- and second-order convergence guarantees for our method, and present computational results on CUTEst problems, regression problems, and saddle-point problems to demonstrate its superiority over existing line-search-based stochastic methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
491,037
2310.16406
Radio Frequency Fingerprinting via Deep Learning: Challenges and Opportunities
Radio Frequency Fingerprinting (RFF) techniques promise to authenticate wireless devices at the physical layer based on inherent hardware imperfections introduced during manufacturing. Such RF transmitter imperfections are reflected into over-the-air signals, allowing receivers to accurately identify the RF transmitting source. Recent advances in Machine Learning, particularly in Deep Learning (DL), have improved the ability of RFF systems to extract and learn complex features that make up the device-specific fingerprint. However, integrating DL techniques with RFF and operating the system in real-world scenarios presents numerous challenges, originating from the embedded systems and the DL research domains. This paper systematically identifies and analyzes the essential considerations and challenges encountered in the creation of DL-based RFF systems across their typical development life-cycle, which include (i) data collection and preprocessing, (ii) training, and finally, (iii) deployment. Our investigation provides a comprehensive overview of the current open problems that prevent real deployment of DL-based RFF systems while also discussing promising research opportunities to enhance the overall accuracy, robustness, and privacy of these systems.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
402,718
2409.06297
User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study
Recommender systems have become integral to our digital experiences, from online shopping to streaming platforms. Still, the rationale behind their suggestions often remains opaque to users. While some systems employ a graph-based approach, offering inherent explainability through paths associating recommended items and seed items, non-experts cannot easily understand these explanations. A popular alternative is to convert graph-based explanations into textual ones using a template and an algorithm, which we denote here as ''template-based'' explanations. Yet, these can sometimes come across as impersonal or uninspiring. A novel method would be to employ large language models (LLMs) for this purpose, which we denote as ''LLM-based''. To assess the effectiveness of LLMs in generating more resonant explanations, we conducted a pilot study with 25 participants. They were presented with three explanations: (1) traditional template-based, (2) LLM-based rephrasing of the template output, and (3) purely LLM-based explanations derived from the graph-based explanations. Although subject to high variance, preliminary findings suggest that LLM-based explanations may provide a richer and more engaging user experience, further aligning with user expectations. This study sheds light on the potential limitations of current explanation methods and offers promising directions for leveraging large language models to improve user satisfaction and trust in recommender systems.
true
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
487,073
1602.05459
Localization of dominant eigenpairs and planted communities by means of Frobenius inner products
We propose a new localization result for the leading eigenvalue and eigenvector of a symmetric matrix $A$. The result exploits the Frobenius inner product between $A$ and a given rank-one landmark matrix $X$. Different choices for $X$ may be used, depending upon the problem under investigation. In particular, we show that the choice where $X$ is the all-ones matrix allows one to estimate the signature of the leading eigenvector of $A$, generalizing previous results on Perron-Frobenius properties of matrices with some negative entries. As another application we consider the problem of community detection in graphs and networks. The problem is solved by means of modularity-based spectral techniques, following the ideas pioneered by Miroslav Fiedler in the mid-1970s. We show that a suitable choice of $X$ can be used to provide new quality guarantees for those techniques, when the network follows a stochastic block model.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
52,258
2406.05279
SuperPos-Prompt: Enhancing Soft Prompt Tuning of Language Models with Superposition of Multi Token Embeddings
Soft prompt tuning techniques have recently gained traction as an effective strategy for the parameter-efficient tuning of pretrained language models, particularly minimizing the required adjustment of model parameters. Despite their growing use, achieving optimal tuning with soft prompts, especially for smaller datasets, remains a substantial challenge. This study makes two contributions in this domain: (i) we introduce SuperPos-Prompt, a new reparameterization technique employing the superposition of multiple pretrained vocabulary embeddings to improve the learning of soft prompts. Our experiments across several GLUE and SuperGLUE benchmarks consistently highlight SuperPos-Prompt's superiority over Residual Prompt tuning, exhibiting an average score increase of $+6.4$ in T5-Small and $+5.0$ in T5-Base along with a faster convergence. Remarkably, SuperPos-Prompt occasionally outperforms even full fine-tuning methods. (ii) Additionally, we demonstrate enhanced performance and rapid convergence by omitting dropouts from the frozen network, yielding consistent improvements across various scenarios and tuning methods.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
462,057
2102.12119
A new upper bound and optimal constructions of equi-difference conflict-avoiding codes on constant weight
Conflict-avoiding codes (CACs) have been used in multiple-access collision channel without feedback. The size of a CAC is the number of potential users that can be supported in the system. A code with maximum size is called optimal. The use of an optimal CAC enables the largest possible number of asynchronous users to transmit information efficiently and reliably. In this paper, a new upper bound on the maximum size of arbitrary equi-difference CAC is presented. Furthermore, three optimal constructions of equi-difference CACs are also given. One is a generalized construction for prime length $L=p$ and the other two are for two-prime length $L=pq$.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
221,622
2106.13755
Reinforcement Learning for Mean Field Games, with Applications to Economics
Mean field games (MFG) and mean field control problems (MFC) are frameworks to study Nash equilibria or social optima in games with a continuum of agents. These problems can be used to approximate competitive or cooperative games with a large finite number of agents and have found a broad range of applications, in particular in economics. In recent years, the question of learning in MFG and MFC has garnered interest, both as a way to compute solutions and as a way to model how large populations of learners converge to an equilibrium. Of particular interest is the setting where the agents do not know the model, which leads to the development of reinforcement learning (RL) methods. After reviewing the literature on this topic, we present a two timescale approach with RL for MFG and MFC, which relies on a unified Q-learning algorithm. The main novelty of this method is to simultaneously update an action-value function and a distribution but with different rates, in a model-free fashion. Depending on the ratio of the two learning rates, the algorithm learns either the MFG or the MFC solution. To illustrate this method, we apply it to a mean field problem of accumulated consumption in finite horizon with HARA utility function, and to a trader's optimal liquidation problem.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
243,174
2307.15621
Shrink-Perturb Improves Architecture Mixing during Population Based Training for Neural Architecture Search
In this work, we show that simultaneously training and mixing neural networks is a promising way to conduct Neural Architecture Search (NAS). For hyperparameter optimization, reusing the partially trained weights allows for efficient search, as was previously demonstrated by the Population Based Training (PBT) algorithm. We propose PBT-NAS, an adaptation of PBT to NAS where architectures are improved during training by replacing poorly-performing networks in a population with the result of mixing well-performing ones and inheriting the weights using the shrink-perturb technique. After PBT-NAS terminates, the created networks can be directly used without retraining. PBT-NAS is highly parallelizable and effective: on challenging tasks (image generation and reinforcement learning) PBT-NAS achieves superior performance compared to baselines (random search and mutation-based PBT).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
382,319
2306.06330
Autonomous Drifting with 3 Minutes of Data via Learned Tire Models
Near the limits of adhesion, the forces generated by a tire are nonlinear and intricately coupled. Efficient and accurate modelling in this region could improve safety, especially in emergency situations where high forces are required. To this end, we propose a novel family of tire force models based on neural ordinary differential equations and a neural-ExpTanh parameterization. These models are designed to satisfy physically insightful assumptions while also having sufficient fidelity to capture higher-order effects directly from vehicle state measurements. They are used as drop-in replacements for an analytical brush tire model in an existing nonlinear model predictive control framework. Experiments with a customized Toyota Supra show that scarce amounts of driving data -- less than three minutes -- are sufficient to achieve high-performance autonomous drifting on various trajectories with speeds up to 45mph. Comparisons with the benchmark model show a $4 \times$ improvement in tracking performance, smoother control inputs, and faster and more consistent computation time.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
372,560
2207.09504
Invariant Feature Learning for Generalized Long-Tailed Classification
Existing long-tailed classification (LT) methods only focus on tackling the class-wise imbalance that head classes have more samples than tail classes, but overlook the attribute-wise imbalance. In fact, even if the class is balanced, samples within each class may still be long-tailed due to the varying attributes. Note that the latter is fundamentally more ubiquitous and challenging than the former because attributes are not just implicit for most datasets, but also combinatorially complex, thus prohibitively expensive to be balanced. Therefore, we introduce a novel research problem: Generalized Long-Tailed classification (GLT), to jointly consider both kinds of imbalances. By "generalized", we mean that a GLT method should naturally solve the traditional LT, but not vice versa. Not surprisingly, we find that most class-wise LT methods degenerate in our proposed two benchmarks: ImageNet-GLT and MSCOCO-GLT. We argue that it is because they over-emphasize the adjustment of class distribution while neglecting to learn attribute-invariant features. To this end, we propose an Invariant Feature Learning (IFL) method as the first strong baseline for GLT. IFL first discovers environments with divergent intra-class distributions from the imperfect predictions and then learns invariant features across them. Promisingly, as an improved feature backbone, IFL boosts all the LT line-up: one/two-stage re-balance, augmentation, and ensemble. Codes and benchmarks are available on Github: https://github.com/KaihuaTang/Generalized-Long-Tailed-Benchmarks.pytorch
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
308,910
2206.03746
Reliable Flight Control: Gravity-Compensation-First Principle
Safety is always the priority in aviation. However, current state-of-the-art passive fault-tolerant control is too conservative to use, and current state-of-the-art active fault-tolerant control requires time to perform fault detection and diagnosis as well as control switching, which may be too late to recover an impaired aircraft. Most designs depend on failures determined a priori and cannot deal with faults that render the original system's state uncontrollable. However, experienced human pilots can save a severely impaired aircraft as far as they can. Motivated by this, this paper develops a principle that tries to explain the human pilot behavior behind such recoveries, coined the gravity-compensation-first principle. This further supports reliable flight control for aircraft such as quadcopters and tail-sitter unmanned aerial vehicles.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
301,398
2409.18351
Tracking Software Security Topics
Software security incidents occur everyday and thousands of software security reports are announced each month. Thus, it is difficult for software security researchers, engineers, and other stakeholders to follow software security topics of their interests in real-time. In this paper, we propose, SOSK, a novel tool for this problem. SOSK allows a user to import a collection of software security reports. It pre-processes and extracts the most important keywords from the textual description of the reports. Based on the similarity of embedding vectors of keywords, SOSK can expand and/or refine a keyword set from a much smaller set of user-provided keywords. Thus, SOSK allows users to define any topic of their interests and retrieve security reports relevant to that topic effectively. Our preliminary evaluation shows that SOSK can expand keywords and retrieve reports relevant to user requests.
false
false
false
false
true
true
false
false
false
false
false
false
true
false
false
false
false
true
492,210
2211.04703
Automated MRI Field of View Prescription from Region of Interest Prediction by Intra-stack Attention Neural Network
Manual prescription of the field of view (FOV) by MRI technologists is variable and prolongs the scanning process. Often, the FOV is too large or crops critical anatomy. We propose a deep-learning framework, trained by radiologists' supervision, for automating FOV prescription. An intra-stack shared feature extraction network and an attention network are used to process a stack of 2D image inputs to generate output scalars defining the location of a rectangular region of interest (ROI). The attention mechanism is used to make the model focus on the small number of informative slices in a stack. Then the smallest FOV that makes the neural network predicted ROI free of aliasing is calculated by an algebraic operation derived from MR sampling theory. We retrospectively collected 595 cases between February 2018 and February 2022. The framework's performance is examined quantitatively with intersection over union (IoU) and pixel error on position, and qualitatively with a reader study. We use the t-test for comparing quantitative results from all models and a radiologist. The proposed model achieves an average IoU of 0.867 and average ROI position error of 9.06 out of 512 pixels on 80 test cases, significantly better (P<0.05) than two baseline models and not significantly different from a radiologist (P>0.12). Finally, the FOV given by the proposed framework achieves an acceptance rate of 92% from an experienced radiologist.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
329,330
2406.03170
StatBot.Swiss: Bilingual Open Data Exploration in Natural Language
The potential for improvements brought by Large Language Models (LLMs) in Text-to-SQL systems is mostly assessed on monolingual English datasets. However, LLMs' performance for other languages remains vastly unexplored. In this work, we release the StatBot.Swiss dataset, the first bilingual benchmark for evaluating Text-to-SQL systems based on real-world applications. The StatBot.Swiss dataset contains 455 natural language/SQL-pairs over 35 big databases with varying levels of complexity for both English and German. We evaluate the performance of state-of-the-art LLMs such as GPT-3.5-Turbo and mixtral-8x7b-instruct for the Text-to-SQL translation task using an in-context learning approach. Our experimental analysis illustrates that current LLMs struggle to generalize well in generating SQL queries on our novel bilingual dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
461,123
2203.04668
Towards Inadequately Pre-trained Models in Transfer Learning
Pre-training has been a popular learning paradigm in the deep learning era, especially in annotation-insufficient scenarios. Previous research has demonstrated, from the perspective of architecture, that better ImageNet pre-trained models have better transferability to downstream tasks. However, in this paper, we found that during the same pre-training process, models at middle epochs, which are inadequately pre-trained, can outperform fully trained models when used as feature extractors (FE), while the fine-tuning (FT) performance still grows with the source performance. This reveals that there is not a solid positive correlation between top-1 accuracy on ImageNet and the transferring result on target data. Based on the contradictory phenomenon between FE and FT that a better feature extractor fails to be fine-tuned better accordingly, we conduct comprehensive analyses on features before the softmax layer to provide insightful explanations. Our discoveries suggest that, during pre-training, models tend to first learn spectral components corresponding to large singular values, and the residual components contribute more when fine-tuning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
284,553
2307.08487
Latent Jailbreak: A Benchmark for Evaluating Text Safety and Output Robustness of Large Language Models
Considerable research efforts have been devoted to ensuring that large language models (LLMs) align with human values and generate safe text. However, an excessive focus on sensitivity to certain topics can compromise the model's robustness in following instructions, thereby impacting its overall performance in completing tasks. Previous benchmarks for jailbreaking LLMs have primarily focused on evaluating the safety of the models without considering their robustness. In this paper, we propose a benchmark that assesses both the safety and robustness of LLMs, emphasizing the need for a balanced approach. To comprehensively study text safety and output robustness, we introduce a latent jailbreak prompt dataset, each involving malicious instruction embedding. Specifically, we instruct the model to complete a regular task, such as translation, with the text to be translated containing malicious instructions. To further analyze safety and robustness, we design a hierarchical annotation framework. We present a systematic analysis of the safety and robustness of LLMs regarding the position of explicit normal instructions, word replacements (verbs in explicit normal instructions, target groups in malicious instructions, cue words for explicit normal instructions), and instruction replacements (different explicit normal instructions). Our results demonstrate that current LLMs not only prioritize certain instruction verbs but also exhibit varying jailbreak rates for different instruction verbs in explicit normal instructions. Code and data are available at https://github.com/qiuhuachuan/latent-jailbreak.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
379,821
2308.14815
Distributionally Robust Statistical Verification with Imprecise Neural Networks
A particularly challenging problem in AI safety is providing guarantees on the behavior of high-dimensional autonomous systems. Verification approaches centered around reachability analysis fail to scale, and purely statistical approaches are constrained by the distributional assumptions about the sampling process. Instead, we pose a distributionally robust version of the statistical verification problem for black-box systems, where our performance guarantees hold over a large family of distributions. This paper proposes a novel approach based on a combination of active learning, uncertainty quantification, and neural network verification. A central piece of our approach is an ensemble technique called Imprecise Neural Networks, which provides the uncertainty to guide active learning. The active learning uses an exhaustive neural-network verification tool Sherlock to collect samples. An evaluation on multiple physical simulators in the openAI gym Mujoco environments with reinforcement-learned controllers demonstrates that our approach can provide useful and scalable guarantees for high-dimensional systems.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
388,457
1904.04468
Private Pliable Index Coding
The Pliable Index CODing (PICOD) problem is a variant of the Index Coding (IC) problem, where the messages desired by the users, who are equipped with message side information, are part of the optimization. This paper studies the PICOD problem where users are subject to a privacy constraint. In particular, the following special class of private PICODs is investigated: 1) the side information structure is circular, and 2) each user can decode one and only one message. The first condition is a special case of the "circular-arc network topology hypergraph" class of PICOD studied in [Liu and D. Tuninetti, "Tight information theoretic converse results for some pliable index coding problems," ITW, 2018], for which an optimal solution was given without the privacy constraint. The second condition was first studied in [S. Sasi and B. S. Rajan, "On pliable index coding," arXiv:1901.05809] and was motivated by the need to keep content privacy in some distribution networks. This paper proposes both converse and achievable bounds. The proposed achievable scheme not only strictly outperforms the one in [S. Sasi and B. S. Rajan, "On pliable index coding," arXiv:1901.05809] for some values of the system parameters, but it is also information theoretically optimal in some settings. For the remaining cases, the proposed linear code is shown to require at most one more transmission than the converse bound derived by restricting the sender to only use linear codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
127,049
2301.00712
On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis
Bilevel optimization reveals the inner structure of otherwise oblique optimization problems, such as hyperparameter tuning, neural architecture search, and meta-learning. A common goal in bilevel optimization is to minimize a hyper-objective that implicitly depends on the solution set of the lower-level function. Although this hyper-objective approach is widely used, its theoretical properties have not been thoroughly investigated in cases where the lower-level functions lack strong convexity. In this work, we first provide hardness results to show that the goal of finding stationary points of the hyper-objective for nonconvex-convex bilevel optimization can be intractable for zero-respecting algorithms. Then we study a class of tractable nonconvex-nonconvex bilevel problems when the lower-level function satisfies the Polyak-{\L}ojasiewicz (PL) condition. We show a simple first-order algorithm can achieve better complexity bounds of $\tilde{\mathcal{O}}(\epsilon^{-2})$, $\tilde{\mathcal{O}}(\epsilon^{-4})$ and $\tilde{\mathcal{O}}(\epsilon^{-6})$ in the deterministic, partially stochastic, and fully stochastic setting respectively. The complexities in the first two cases are optimal up to logarithmic factors.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
338,994
2110.02634
Heterogeneous Attentions for Solving Pickup and Delivery Problem via Deep Reinforcement Learning
Recently, there is an emerging trend to apply deep reinforcement learning to solve the vehicle routing problem (VRP), where a learnt policy governs the selection of next node for visiting. However, existing methods could not handle well the pairing and precedence relationships in the pickup and delivery problem (PDP), which is a representative variant of VRP. To address this challenging issue, we leverage a novel neural network integrated with a heterogeneous attention mechanism to empower the policy in deep reinforcement learning to automatically select the nodes. In particular, the heterogeneous attention mechanism specifically prescribes attentions for each role of the nodes while taking into account the precedence constraint, i.e., the pickup node must precede the pairing delivery node. Further integrated with a masking scheme, the learnt policy is expected to find higher-quality solutions for solving PDP. Extensive experimental results show that our method outperforms the state-of-the-art heuristic and deep learning model, respectively, and generalizes well to different distributions and problem sizes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
259,195
1911.04070
BP-Transformer: Modelling Long-Range Context via Binary Partitioning
The Transformer model is widely successful on many natural language processing tasks. However, the quadratic complexity of self-attention limits its application on long text. In this paper, adopting a fine-to-coarse attention mechanism on multi-scale spans via binary partitioning (BP), we propose BP-Transformer (BPT for short). BPT yields $O(k\cdot n\log (n/k))$ connections where $k$ is a hyperparameter to control the density of attention. BPT has a good balance between computation complexity and model capacity. A series of experiments on text classification, machine translation and language modeling shows BPT has a superior performance for long text than previous self-attention models. Our code, hyperparameters and CUDA kernels for sparse attention are available in PyTorch.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
152,880
2402.15538
AgentLite: A Lightweight Library for Building and Advancing Task-Oriented LLM Agent System
The booming success of LLMs initiates rapid development in LLM agents. Though the foundation of an LLM agent is the generative model, it is critical to devise the optimal reasoning strategies and agent architectures. Accordingly, LLM agent research advances from the simple chain-of-thought prompting to more complex ReAct and Reflection reasoning strategy; agent architecture also evolves from single agent generation to multi-agent conversation, as well as multi-LLM multi-agent group chat. However, with the existing intricate frameworks and libraries, creating and evaluating new reasoning strategies and agent architectures has become a complex challenge, which hinders research investigation into LLM agents. Thus, we open-source a new AI agent library, AgentLite, which simplifies this process by offering a lightweight, user-friendly platform for innovating LLM agent reasoning, architectures, and applications with ease. AgentLite is a task-oriented framework designed to enhance the ability of agents to break down tasks and facilitate the development of multi-agent systems. Furthermore, we introduce multiple practical applications developed with AgentLite to demonstrate its convenience and flexibility. Get started now at: \url{https://github.com/SalesforceAIResearch/AgentLite}.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
432,181
1902.00726
State Estimation over Worst-Case Erasure and Symmetric Channels with Memory
Worst-case models of erasure and symmetric channels are investigated, in which the number of channel errors occurring in each sliding window of a given length is bounded. Upper and lower bounds on their zero-error capacities are derived, with the lower bounds revealing a connection with the topological entropy of the channel dynamics. Necessary and sufficient conditions for linear state estimation with bounded estimation errors via such channels are then obtained, by extending previous results for non-stochastic memoryless channels to those with finite memory. These estimation conditions involve the topological entropies of the linear system and the channel.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
120,488
2406.05575
A Survey on Hybrid Motion Planning Methods for Automated Driving Systems
Motion planning is an essential element of the modular architecture of autonomous vehicles, serving as a bridge between upstream perception modules and downstream low-level control signals. Traditional motion planners were initially designed for specific Automated Driving Functions (ADFs), yet the evolving landscape of highly automated driving systems (ADS) requires motion planning for a wide range of ADFs, including unforeseen ones. This need has motivated the development of the ``hybrid" approach in the literature, seeking to enhance motion planning performance by combining diverse techniques, such as data-driven (learning-based) and logic-driven (analytic) methodologies. Recent research endeavours have significantly contributed to the development of more efficient, accurate, and safe hybrid methods for Tactical Decision Making (TDM) and Trajectory Generation (TG), as well as integrating these algorithms into the motion planning module. Owing to the extensive variety and potential of hybrid methods, a timely and comprehensive review of the current literature is undertaken in this survey article. We classify the hybrid motion planners based on the types of components they incorporate, such as combinations of sampling-based with optimization-based/learning-based motion planners. The comparison of different classes is conducted by evaluating the addressed challenges and limitations, as well as assessing whether they focus on TG and/or TDM. We hope this approach will enable the researchers in this field to gain in-depth insights into the identification of current trends in hybrid motion planning and shed light on promising areas for future research.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
462,192
2206.14358
Using Twitter Data to Understand Public Perceptions of Approved versus Off-label Use for COVID-19-related Medications
Understanding public discourse on emergency use of unproven therapeutics is crucial for monitoring safe use and combating misinformation. We developed a natural language processing-based pipeline to comprehend public perceptions of and stances on coronavirus disease 2019 (COVID-19)-related drugs on Twitter over time. This retrospective study included 609,189 US-based tweets from January 29, 2020, to November 30, 2021, about four drugs that garnered significant public attention during the COVID-19 pandemic: (1) Hydroxychloroquine and Ivermectin, therapies with anecdotal evidence; and (2) Molnupiravir and Remdesivir, FDA-approved treatments for eligible patients. Time-trend analysis was employed to understand popularity trends and related events. Content and demographic analyses were conducted to explore potential rationales behind people's stances on each drug. Time-trend analysis indicated that Hydroxychloroquine and Ivermectin were discussed more than Molnupiravir and Remdesivir, particularly during COVID-19 surges. Hydroxychloroquine and Ivermectin discussions were highly politicized, related to conspiracy theories, hearsay, and celebrity influences. The distribution of stances between the two major US political parties was significantly different (P < .001); Republicans were more likely to support Hydroxychloroquine (55%) and Ivermectin (30%) than Democrats. People with healthcare backgrounds tended to oppose Hydroxychloroquine (7%) more than the general population, while the general population was more likely to support Ivermectin (14%). Our study found that social media users have varying perceptions and stances on off-label versus FDA-authorized drug use at different stages of COVID-19. This indicates that health systems, regulatory agencies, and policymakers should design tailored strategies to monitor and reduce misinformation to promote safe drug use.
false
false
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
305,259
2103.16196
AlphaEvolve: A Learning Framework to Discover Novel Alphas in Quantitative Investment
Alphas are stock prediction models capturing trading signals in a stock market. A set of effective alphas can generate weakly correlated high returns to diversify the risk. Existing alphas can be categorized into two classes: Formulaic alphas are simple algebraic expressions of scalar features, and thus can generalize well and be mined into a weakly correlated set. Machine learning alphas are data-driven models over vector and matrix features. They are more predictive than formulaic alphas, but are too complex to mine into a weakly correlated set. In this paper, we introduce a new class of alphas to model scalar, vector, and matrix features which possess the strengths of these two existing classes. The new alphas predict returns with high accuracy and can be mined into a weakly correlated set. In addition, we propose a novel alpha mining framework based on AutoML, called AlphaEvolve, to generate the new alphas. To this end, we first propose operators for generating the new alphas and selectively injecting relational domain knowledge to model the relations between stocks. We then accelerate the alpha mining by proposing a pruning technique for redundant alphas. Experiments show that AlphaEvolve can evolve initial alphas into the new alphas with high returns and weak correlations.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
227,496
2305.10889
FLIGHT Mode On: A Feather-Light Network for Low-Light Image Enhancement
Low-light image enhancement (LLIE) is an ill-posed inverse problem due to the lack of knowledge of the desired image which is obtained under ideal illumination conditions. Low-light conditions give rise to two main issues: a suppressed image histogram and inconsistent relative color distributions with low signal-to-noise ratio. In order to address these problems, we propose a novel approach named FLIGHT-Net using a sequence of neural architecture blocks. The first block regulates illumination conditions through pixel-wise scene dependent illumination adjustment. The output image is produced in the output of the second block, which includes channel attention and denoising sub-blocks. Our highly efficient neural network architecture delivers state-of-the-art performance with only 25K parameters. The method's code, pretrained models and resulting images will be publicly available.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
365,279
2103.01823
A Structurally Regularized Convolutional Neural Network for Image Classification using Wavelet-based SubBand Decomposition
We propose a convolutional neural network (CNN) architecture for image classification based on subband decomposition of the image using wavelets. The proposed architecture decomposes the input image spectra into multiple critically sampled subbands, extracts features using a single CNN per subband, and finally, performs classification by combining the extracted features using a fully connected layer. Processing each of the subbands by an individual CNN, thereby limiting the learning scope of each CNN to a single subband, imposes a form of structural regularization. This provides better generalization capability as seen by the presented results. The proposed architecture achieves best-in-class performance in terms of total multiply-add-accumulator operations and nearly best-in-class performance in terms of total parameters required, yet it maintains competitive classification performance. We also show the proposed architecture is more robust than the regular full-band CNN to noise caused by weight-and-bias quantization and input quantization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
222,753
1801.05393
Coexistence of 5G mmWave Users with Incumbent Fixed Stations over 70 and 80 GHz
Millimeter wave spectrum access over the 70GHz and 80GHz is central to unlocking gigabit connectivity and meeting the explosive growth of mobile traffic. A pressing question, however, is whether fifth-generation (5G) systems can harmoniously coexist with the incumbents of these bands, which are primarily point-to-point fixed stations (FSs). To this end, we thoroughly analyze the impact of 5G coexistence on FSs. Specifically, we first analyze the geometry of existing FSs' deployment using actual databases of these stations. Then, we present a case study on the interference generated from users towards FSs in two populated areas in Chicago, where we use actual building databases to accurately compute the aggregate interference. The analysis and simulation results reveal that the deployment strategy of FSs and the high attenuation losses at 70/80GHz significantly limit the 5G interference, with the majority of FSs experiencing interference levels well below the noise floor.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
88,447
1807.08465
Multimodal Social Media Analysis for Gang Violence Prevention
Gang violence is a severe issue in major cities across the U.S. and recent studies [Patton et al. 2017] have found evidence of social media communications that can be linked to such violence in communities with high rates of exposure to gang activity. In this paper we partnered computer scientists with social work researchers, who have domain expertise in gang violence, to analyze how public tweets with images posted by youth who mention gang associations on Twitter can be leveraged to automatically detect psychosocial factors and conditions that could potentially assist social workers and violence outreach workers in prevention and early intervention programs. To this end, we developed a rigorous methodology for collecting and annotating tweets. We gathered 1,851 tweets and accompanying annotations related to visual concepts and the psychosocial codes: aggression, loss, and substance use. These codes are relevant to social work interventions, as they represent possible pathways to violence on social media. We compare various methods for classifying tweets into these three classes, using only the text of the tweet, only the image of the tweet, or both modalities as input to the classifier. In particular, we analyze the usefulness of mid-level visual concepts and the role of different modalities for this tweet classification task. Our experiments show that individually, text information dominates classification performance of the loss class, while image information dominates the aggression and substance use classes. Our multimodal approach provides a very promising improvement (18% relative in mean average precision) over the best single modality approach. Finally, we also illustrate the complexity of understanding social media data and elaborate on open challenges.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
103,546
2307.13259
GaitFormer: Revisiting Intrinsic Periodicity for Gait Recognition
Gait recognition aims to distinguish different walking patterns by analyzing video-level human silhouettes, rather than relying on appearance information. Previous research on gait recognition has primarily focused on extracting local or global spatial-temporal representations, while overlooking the intrinsic periodic features of gait sequences, which, when fully utilized, can significantly enhance performance. In this work, we propose a plug-and-play strategy, called Temporal Periodic Alignment (TPA), which leverages the periodic nature and fine-grained temporal dependencies of gait patterns. The TPA strategy comprises two key components. The first component is Adaptive Fourier-transform Position Encoding (AFPE), which adaptively converts features and discrete-time signals into embeddings that are sensitive to periodic walking patterns. The second component is the Temporal Aggregation Module (TAM), which separates embeddings into trend and seasonal components, and extracts meaningful temporal correlations to identify primary components, while filtering out random noise. We present a simple and effective baseline method for gait recognition, based on the TPA strategy. Extensive experiments conducted on three popular public datasets (CASIA-B, OU-MVLP, and GREW) demonstrate that our proposed method achieves state-of-the-art performance on multiple benchmark tests.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
381,529
2207.13895
Generative Hypergraph Models and Spectral Embedding
Many complex systems involve interactions between more than two agents. Hypergraphs capture these higher-order interactions through hyperedges that may link more than two nodes. We consider the problem of embedding a hypergraph into low-dimensional Euclidean space so that most interactions are short-range. This embedding is relevant to many follow-on tasks, such as node reordering, clustering, and visualization. We focus on two spectral embedding algorithms customized to hypergraphs which recover linear and periodic structures respectively. In the periodic case, nodes are positioned on the unit circle. We show that the two spectral hypergraph embedding algorithms are associated with a new class of generative hypergraph models. These models generate hyperedges according to node positions in the embedded space and encourage short-range connections. They allow us to quantify the relative presence of periodic and linear structures in the data through maximum likelihood. They also improve the interpretability of node embedding and provide a metric for hyperedge prediction. We demonstrate the hypergraph embedding and follow-on tasks -- including structure quantification, clustering and hyperedge prediction -- on synthetic and real-world hypergraphs. We find that the hypergraph approach can outperform clustering algorithms that use only dyadic edges. We also compare several triadic edge prediction methods on high school contact data where our algorithm improves upon benchmark methods when the amount of training data is limited.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
310,439
2002.03312
FSD-10: A Dataset for Competitive Sports Content Analysis
Action recognition is an important and challenging problem in video analysis. Although the past decade has witnessed progress in action recognition with the development of deep learning, such progress has been slow in competitive sports content analysis. To promote the research on action recognition from competitive sports video clips, we introduce a Figure Skating Dataset (FSD-10) for fine-grained sports content analysis. To this end, we collect 1484 clips from the worldwide figure skating championships in 2017-2018, which consist of 10 different actions in men/ladies programs. Each clip is at a rate of 30 frames per second with resolution 1080 $\times$ 720. These clips are then annotated by experts in type, grade of execution, skater info, etc. To build a baseline for action recognition in figure skating, we evaluate state-of-the-art action recognition methods on FSD-10. Motivated by the idea that domain knowledge is of great concern in the sports field, we propose a keyframe based temporal segment network (KTSN) for classification and achieve remarkable performance. Experimental results demonstrate that FSD-10 is an ideal dataset for benchmarking action recognition algorithms, as it requires accurately extracting action motions rather than action poses. We hope FSD-10, which is designed to have a large collection of fine-grained actions, can serve as a new challenge to develop more robust and advanced action recognition models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
163,225
2410.07763
HARIVO: Harnessing Text-to-Image Models for Video Generation
We present a method to create diffusion-based video models from pretrained Text-to-Image (T2I) models. Recently, AnimateDiff proposed freezing the T2I model while only training temporal layers. We advance this method by proposing a unique architecture, incorporating a mapping network and frame-wise tokens, tailored for video generation while maintaining the diversity and creativity of the original T2I model. Key innovations include novel loss functions for temporal smoothness and a mitigating gradient sampling technique, ensuring realistic and temporally consistent video generation despite limited public video data. We have successfully integrated video-specific inductive biases into the architecture and loss functions. Our method, built on the frozen StableDiffusion model, simplifies training processes and allows for seamless integration with off-the-shelf models like ControlNet and DreamBooth. project page: https://kwonminki.github.io/HARIVO
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
496,789
2303.06017
Information Theoretic I-MMSE generalize Time-Frequency Signal Processing Tools
In this paper, we capitalize on an information-theoretic and estimation-theoretic result, called the I-MMSE [1]-[2], to show that this tool generalizes the time-frequency signal processing tools needed for the analysis of non-stationary non-Gaussian signals.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
350,662
2409.11022
GEIC: Universal and Multilingual Named Entity Recognition with Large Language Models
Large Language Models (LLMs) have supplanted traditional methods in numerous natural language processing tasks. Nonetheless, in Named Entity Recognition (NER), existing LLM-based methods underperform compared to baselines and require significantly more computational resources, limiting their application. In this paper, we introduce the task of generation-based extraction and in-context classification (GEIC), designed to leverage LLMs' prior knowledge and self-attention mechanisms for NER tasks. We then propose CascadeNER, a universal and multilingual GEIC framework for few-shot and zero-shot NER. CascadeNER employs model cascading to utilize two small-parameter LLMs to extract and classify independently, reducing resource consumption while enhancing accuracy. We also introduce AnythingNER, the first NER dataset specifically designed for LLMs, including 8 languages, 155 entity types and a novel dynamic categorization system. Experiments show that CascadeNER achieves state-of-the-art performance on low-resource and fine-grained scenarios, including CrossNER and FewNERD. Our work is openly accessible.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
488,980
2007.07435
AdvFlow: Inconspicuous Black-box Adversarial Attacks using Normalizing Flows
Deep learning classifiers are susceptible to well-crafted, imperceptible variations of their inputs, known as adversarial attacks. In this regard, the study of powerful attack models sheds light on the sources of vulnerability in these classifiers, hopefully leading to more robust ones. In this paper, we introduce AdvFlow: a novel black-box adversarial attack method on image classifiers that exploits the power of normalizing flows to model the density of adversarial examples around a given target image. We see that the proposed method generates adversaries that closely follow the clean data distribution, a property which makes their detection less likely. Also, our experimental results show competitive performance of the proposed approach with some of the existing attack methods on defended classifiers. The code is available at https://github.com/hmdolatabadi/AdvFlow.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
187,328
2308.10720
On the accuracy of interpolation based on single-layer artificial neural networks with a focus on defeating the Runge phenomenon
In the present paper, we consider one-hidden-layer ANNs with a feedforward architecture, also referred to as shallow or two-layer networks, so that the structure is determined by the number and types of neurons. The determination of the parameters that define the function, called training, is done via the resolution of the approximation problem, i.e., by imposing the interpolation through a set of specific nodes. We present the case where the parameters are trained using a procedure referred to as the Extreme Learning Machine (ELM), which leads to a linear interpolation problem. Under these hypotheses, the existence of an ANN interpolating function is guaranteed. The focus is then on the accuracy of the interpolation outside of the given sampling interpolation nodes when they are the equispaced, the Chebychev, and the randomly selected ones. The study is motivated by the well-known bell-shaped Runge example, which makes it clear that the construction of a global interpolating polynomial is accurate only if trained on suitably chosen nodes, for example the Chebychev ones. In order to evaluate the behavior when growing the number of interpolation nodes, we raise the number of neurons in our network and compare it with the interpolating polynomial. We test using Runge's function and other well-known examples with different regularities. As expected, the accuracy of the approximation with a global polynomial increases only if the Chebychev nodes are considered. Instead, the error for the ANN interpolating function always decays, and in most cases we observe that the convergence follows what is observed in the polynomial case on Chebychev nodes, regardless of the set of nodes used for training.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
386,855
0811.0136
Extension of Max-Min Ant System with Exponential Pheromone Deposition Rule
The paper presents an exponential pheromone deposition approach to improve the performance of the classical Ant System algorithm, which employs a uniform deposition rule. A simplified analysis using differential equations is carried out to study the stability of the basic ant system dynamics with both exponential and constant deposition rules. A roadmap of connected cities, where the shortest path between two specified cities is to be found, is taken as a platform to compare the Max-Min Ant System model (an improved and popular model of the Ant System algorithm) with exponential and constant deposition rules. Extensive simulations are performed to find the best parameter settings for the non-uniform deposition approach, and experiments with these parameter settings revealed that the above approach outstripped the traditional one by a large margin in terms of both solution quality and convergence time.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
2,599
2103.05819
Active Exploration and Mapping via Iterative Covariance Regulation over Continuous $SE(3)$ Trajectories
This paper develops \emph{iterative Covariance Regulation} (iCR), a novel method for active exploration and mapping for a mobile robot equipped with on-board sensors. The problem is posed as optimal control over the $SE(3)$ pose kinematics of the robot to minimize the differential entropy of the map conditioned on the potential sensor observations. We introduce a differentiable field of view formulation, and derive iCR via the gradient descent method to iteratively update an open-loop control sequence in continuous space so that the covariance of the map estimate is minimized. We demonstrate autonomous exploration and uncertainty reduction in simulated occupancy grid environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
224,092
2404.12670
Towards Human-centered Proactive Conversational Agents
Recent research on proactive conversational agents (PCAs) mainly focuses on improving the system's capabilities in anticipating and planning action sequences to accomplish tasks and achieve goals before users articulate their requests. This perspectives paper highlights the importance of moving towards building human-centered PCAs that emphasize human needs and expectations, and that considers ethical and social implications of these agents, rather than solely focusing on technological capabilities. The distinction between a proactive and a reactive system lies in the proactive system's initiative-taking nature. Without thoughtful design, proactive systems risk being perceived as intrusive by human users. We address the issue by establishing a new taxonomy concerning three key dimensions of human-centered PCAs, namely Intelligence, Adaptivity, and Civility. We discuss potential research opportunities and challenges based on this new taxonomy upon the five stages of PCA system construction. This perspectives paper lays a foundation for the emerging area of conversational information retrieval research and paves the way towards advancing human-centered proactive conversational systems.
true
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
447,981
2402.06150
Domain Generalization with Small Data
In this work, we propose to tackle the problem of domain generalization in the context of \textit{insufficient samples}. Instead of extracting latent feature embeddings based on deterministic models, we propose to learn a domain-invariant representation based on the probabilistic framework by mapping each data point into probabilistic embeddings. Specifically, we first extend empirical maximum mean discrepancy (MMD) to a novel probabilistic MMD that can measure the discrepancy between mixture distributions (i.e., source domains) consisting of a series of latent distributions rather than latent points. Moreover, instead of imposing the contrastive semantic alignment (CSA) loss based on pairs of latent points, a novel probabilistic CSA loss encourages positive probabilistic embedding pairs to be closer while pulling other negative ones apart. Benefiting from the learned representation captured by probabilistic models, our proposed method can marry the measurement of the \textit{distribution over distributions} (i.e., the global perspective alignment) and the distribution-based contrastive semantic alignment (i.e., the local perspective alignment). Extensive experimental results on three challenging medical datasets show the effectiveness of our proposed method in the context of insufficient data compared with state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
428,186
2206.01722
A Learning-Based Method for Automatic Operator Selection in the Fanoos XAI System
We describe an extension of the Fanoos XAI system [Bayani et al 2022] which enables the system to learn the appropriate action to take in order to satisfy a user's request for description to be made more or less abstract. Specifically, descriptions of systems under analysis are stored in states, and in order to make a description more or less abstract, Fanoos selects an operator from a large library to apply to the state and generate a new description. Prior work on Fanoos predominately used hand-written methods for operator-selection; this current work allows Fanoos to leverage experience to learn the best operator to apply in a particular situation, balancing exploration and exploitation, leveraging expert insights when available, and utilizing similarity between the current state and past states. Additionally, in order to bootstrap the learning process (i.e., like in curriculum learning), we describe a simulated user which we implemented; this simulation allows Fanoos to gain general insights that enable reasonable courses of action, insights which later can be refined by experience with real users, as opposed to interacting with humans completely from scratch. Code implementing the methods described in the paper can be found at https://github/DBay-ani/Operator_Selection_Learning_Extensions_For_Fanoos.
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
300,576
cs/0405032
EvoNF: A Framework for Optimization of Fuzzy Inference Systems Using Neural Network Learning and Evolutionary Computation
Several adaptation techniques have been investigated to optimize fuzzy inference systems. Neural network learning algorithms have been used to determine the parameters of fuzzy inference systems. Such models are often called integrated neuro-fuzzy models. In an integrated neuro-fuzzy model there is no guarantee that the neural network learning algorithm converges and that the tuning of the fuzzy inference system will be successful. The success of evolutionary search procedures for the optimization of fuzzy inference systems is well proven and established in many application areas. In this paper, we will explore how the optimization of fuzzy inference systems could be further improved using a meta-heuristic approach combining neural network learning and evolutionary computation. The proposed technique could be considered as a methodology to integrate neural networks, fuzzy inference systems and evolutionary search procedures. We present the theoretical frameworks and some experimental results to demonstrate the efficiency of the proposed technique.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
538,190
1803.06082
Load Balancing for 5G Ultra-Dense Networks using Device-to-Device Communications
Load balancing is an effective approach to address the spatial-temporal fluctuation problem of mobile data traffic for cellular networks. The existing schemes that focus on channel borrowing from neighboring cells cannot be directly applied to future 5G wireless networks, because the neighboring cells will reuse the same spectrum band in 5G systems. In this paper, we consider an orthogonal frequency division multiple access (OFDMA) ultra-dense small cell network, where Device-to-Device (D2D) communication is advocated to facilitate load balancing without extra spectrum. Specifically, the data traffic can be effectively offloaded from a congested small cell to other underutilized small cells by D2D communications. The problem is naturally formulated as a joint resource allocation and D2D routing problem that maximizes the system sum-rate. To efficiently solve the problem, we decouple the problem into a resource allocation subproblem and a D2D routing subproblem. The two subproblems are solved iteratively as a monotonic optimization problem and a complementary geometric programming problem, respectively. Simulation results show that the data sum-rate in the neighboring small cells increases 20% on average by offloading the data traffic in the congested small cell to the neighboring small cell base stations (SBSs).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
92,768
2411.16532
Continual Deep Reinforcement Learning with Task-Agnostic Policy Distillation
Central to the development of universal learning systems is the ability to solve multiple tasks without retraining from scratch when new data arrives. This is crucial because each task requires significant training time. Addressing the problem of continual learning necessitates various methods due to the complexity of the problem space. This problem space includes: (1) addressing catastrophic forgetting to retain previously learned tasks, (2) demonstrating positive forward transfer for faster learning, (3) ensuring scalability across numerous tasks, and (4) facilitating learning without requiring task labels, even in the absence of clear task boundaries. In this paper, the Task-Agnostic Policy Distillation (TAPD) framework is introduced. This framework alleviates problems (1)-(4) by incorporating a task-agnostic phase, where an agent explores its environment without any external goal and maximizes only its intrinsic motivation. The knowledge gained during this phase is later distilled for further exploration. Therefore, the agent acts in a self-supervised manner by systematically seeking novel states. By utilizing task-agnostic distilled knowledge, the agent can solve downstream tasks more efficiently, leading to improved sample efficiency. Our code is available at the repository: https://github.com/wabbajack1/TAPD.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
511,060
1903.01888
Gated Graph Convolutional Recurrent Neural Networks
Graph processes model a number of important problems such as identifying the epicenter of an earthquake or predicting weather. In this paper, we propose a Graph Convolutional Recurrent Neural Network (GCRNN) architecture specifically tailored to deal with these problems. GCRNNs use convolutional filter banks to keep the number of trainable parameters independent of the size of the graph and of the time sequences considered. We also put forward Gated GCRNNs, a time-gated variation of GCRNNs akin to LSTMs. When compared with GNNs and another graph recurrent architecture in experiments using both synthetic and real-world data, GCRNNs significantly improve performance while using considerably fewer parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
123,363
2305.14211
Towards Graph-hop Retrieval and Reasoning in Complex Question Answering over Textual Database
In textual question answering (TQA) systems, complex questions often require retrieving multiple textual fact chains with multiple reasoning steps, while existing benchmarks are limited to single-chain or single-hop retrieval scenarios. In this paper, we propose to conduct Graph-Hop -- a novel multi-chain and multi-hop retrieval and reasoning paradigm in complex question answering. We construct a new benchmark called ReasonGraphQA, which provides explicit and fine-grained evidence graphs for complex questions to support interpretable, comprehensive, and detailed reasoning. ReasonGraphQA also shows an advantage in reasoning diversity and scale. Moreover, we propose a strong graph-hop baseline called the Bidirectional Graph Retrieval (BGR) method for generating an explanation graph of textual evidence in knowledge reasoning and question answering. We have thoroughly evaluated existing evidence retrieval and reasoning models on ReasonGraphQA. Experiments highlight that Graph-Hop is a promising direction for answering complex questions, but it still has certain limitations. We have further studied mitigation strategies to meet these challenges and discuss future directions.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
366,907
2106.08838
Domain-independent User Simulation with Transformers for Task-oriented Dialogue Systems
Dialogue policy optimisation via reinforcement learning requires a large number of training interactions, which makes learning with real users time consuming and expensive. Many set-ups therefore rely on a user simulator instead of humans. These user simulators have their own problems. While hand-coded, rule-based user simulators have been shown to be sufficient in small, simple domains, for complex domains the number of rules quickly becomes intractable. State-of-the-art data-driven user simulators, on the other hand, are still domain-dependent. This means that adaptation to each new domain requires redesigning and retraining. In this work, we propose a domain-independent transformer-based user simulator (TUS). The structure of our TUS is not tied to a specific domain, enabling domain generalisation and learning of cross-domain user behaviour from data. We compare TUS with the state of the art using automatic as well as human evaluations. TUS can compete with rule-based user simulators on pre-defined domains and is able to generalise to unseen domains in a zero-shot fashion.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
241,439
2012.05754
Optimal Thompson Sampling strategies for support-aware CVaR bandits
In this paper we study a multi-armed bandit problem in which the quality of each arm is measured by the Conditional Value at Risk (CVaR) at some level alpha of the reward distribution. While existing works in this setting mainly focus on Upper Confidence Bound algorithms, we introduce a new Thompson Sampling approach for CVaR bandits on bounded rewards that is flexible enough to solve a variety of problems grounded on physical resources. Building on a recent work by Riou & Honda (2020), we introduce B-CVTS for continuous bounded rewards and M-CVTS for multinomial distributions. On the theoretical side, we provide a non-trivial extension of their analysis that enables us to theoretically bound their CVaR regret minimization performance. Strikingly, our results show that these strategies are the first to provably achieve asymptotic optimality in CVaR bandits, matching the corresponding asymptotic lower bounds for this setting. Further, we illustrate empirically the benefit of Thompson Sampling approaches both in a realistic environment simulating a use-case in agriculture and on various synthetic examples.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
210,884
1410.1165
Understanding Locally Competitive Networks
Recently proposed neural network activation functions such as rectified linear, maxout, and local winner-take-all have allowed for faster and more effective training of deep neural architectures on large and complex datasets. The common trait among these functions is that they implement local competition between small groups of computational units within a layer, so that only part of the network is activated for any given input pattern. In this paper, we attempt to visualize and understand this self-modularization, and suggest a unified explanation for the beneficial properties of such networks. We also show how our insights can be directly useful for efficiently performing retrieval over large datasets using neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
36,535
2403.15569
Music to Dance as Language Translation using Sequence Models
Synthesising appropriate choreographies from music remains an open problem. We introduce MDLT, a novel approach that frames the choreography generation problem as a translation task. Our method leverages an existing data set to learn to translate sequences of audio into corresponding dance poses. We present two variants of MDLT: one utilising the Transformer architecture and the other employing the Mamba architecture. We train our method on the AIST++ and PhantomDance data sets to teach a robotic arm to dance, but our method can be applied to a full humanoid robot. Evaluation metrics, including Average Joint Error and Fr\'echet Inception Distance, consistently demonstrate that, when given a piece of music, MDLT excels at producing realistic and high-quality choreography. The code can be found at github.com/meowatthemoon/MDLT.
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
440,638
2210.06108
Reconstructing Personalized Semantic Facial NeRF Models From Monocular Video
We present a novel semantic model for the human head defined with a neural radiance field. The 3D-consistent head model consists of a set of disentangled and interpretable bases, and can be driven by low-dimensional expression coefficients. Thanks to the powerful representation ability of the neural radiance field, the constructed model can represent complex facial attributes, including hair and worn accessories, which cannot be represented by traditional mesh blendshapes. To construct the personalized semantic facial model, we propose to define the bases as several multi-level voxel fields. With a short monocular RGB video as input, our method can construct the subject's semantic facial NeRF model in only ten to twenty minutes, and can render a photo-realistic human head image in tens of milliseconds for a given expression coefficient and view direction. With this novel representation, we apply it to many tasks like facial retargeting and expression editing. Experimental results demonstrate its strong representation ability and training/inference speed. Demo videos and released code are provided on our project page: https://ustc3dv.github.io/NeRFBlendShape/
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
323,142
1408.4753
Be Careful When Assuming the Obvious: Commentary on "The placement of the head that minimizes online memory: a complex systems approach"
Ferrer-i-Cancho (2015) presents a mathematical model of both the synchronic and diachronic nature of word order based on the assumption that memory costs are a never decreasing function of distance and a few very general linguistic assumptions. However, even these minimal and seemingly obvious assumptions are not as safe as they appear in light of recent typological and psycholinguistic evidence. The interaction of word order and memory has further depths to be explored.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
35,488