Dataset schema (one record per row):
- id: string, 9–16 characters
- title: string, 4–278 characters
- abstract: string, 3–4.08k characters
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64, range 0–541k
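The 18 boolean columns act as a multi-label encoding of arXiv categories. A minimal sketch of recovering the active category names from one record, with column names taken from the schema above (the dict-per-row shape is an assumption about how a loaded record would look, e.g. via a dataset-loading library):

```python
# Boolean category columns, in the order they appear in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(row):
    """Return the names of the category columns that are True in a record."""
    return [col for col in CATEGORY_COLUMNS if row.get(col, False)]
```

For example, a record whose cs.AI, cs.LG and cs.MA flags are True yields the label list ["cs.AI", "cs.LG", "cs.MA"].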
2311.10090
JaxMARL: Multi-Agent RL Environments and Algorithms in JAX
Benchmarks are crucial in the development of machine learning algorithms, with available environments significantly influencing reinforcement learning (RL) research. Traditionally, RL environments run on the CPU, which limits their scalability with typical academic compute. However, recent advancements in JAX have enabled the wider use of hardware acceleration, enabling massively parallel RL training pipelines and environments. While this has been successfully applied to single-agent RL, it has not yet been widely adopted for multi-agent scenarios. In this paper, we present JaxMARL, the first open-source, Python-based library that combines GPU-enabled efficiency with support for a large number of commonly used MARL environments and popular baseline algorithms. Our experiments show that, in terms of wall clock time, our JAX-based training pipeline is around 14 times faster than existing approaches, and up to 12500x when multiple training runs are vectorized. This enables efficient and thorough evaluations, potentially alleviating the evaluation crisis in the field. We also introduce and benchmark SMAX, a JAX-based approximate reimplementation of the popular StarCraft Multi-Agent Challenge, which removes the need to run the StarCraft II game engine. This not only enables GPU acceleration, but also provides a more flexible MARL environment, unlocking the potential for self-play, meta-learning, and other future applications in MARL. The code is available at https://github.com/flairox/jaxmarl.
labels: cs.AI, cs.LG, cs.MA
__index_level_0__: 408,409
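The record above carries three positive labels (cs.AI, cs.LG, cs.MA). For multi-label classification on this data, the inverse direction is also common: packing a list of category names into a fixed-order 0/1 target vector. A sketch under the assumption that the column order matches the schema listing (the function name is illustrative, not part of the dataset):

```python
# Category columns in schema order; index i of the output vector maps to column i.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def to_multi_hot(labels):
    """Encode a list of category names as a fixed-order 0/1 vector."""
    wanted = set(labels)
    return [1 if col in wanted else 0 for col in CATEGORY_COLUMNS]
```

This mirrors what the boolean columns already store, so round-tripping a record's label list through `to_multi_hot` reproduces its row of flags.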
1709.04577
DeepVoting: A Robust and Explainable Deep Network for Semantic Part Detection under Partial Occlusion
In this paper, we study the task of detecting semantic parts of an object, e.g., a wheel of a car, under partial occlusion. We propose that all models should be trained without seeing occlusions while being able to transfer the learned knowledge to deal with occlusions. This setting alleviates the difficulty in collecting an exponentially large dataset to cover occlusion patterns and is more essential. In this scenario, the proposal-based deep networks, like RCNN-series, often produce unsatisfactory results, because both the proposal extraction and classification stages may be confused by the irrelevant occluders. To address this, [25] proposed a voting mechanism that combines multiple local visual cues to detect semantic parts. The semantic parts can still be detected even though some visual cues are missing due to occlusions. However, this method is manually-designed, thus is hard to be optimized in an end-to-end manner. In this paper, we present DeepVoting, which incorporates the robustness shown by [25] into a deep network, so that the whole pipeline can be jointly optimized. Specifically, it adds two layers after the intermediate features of a deep network, e.g., the pool-4 layer of VGGNet. The first layer extracts the evidence of local visual cues, and the second layer performs a voting mechanism by utilizing the spatial relationship between visual cues and semantic parts. We also propose an improved version DeepVoting+ by learning visual cues from context outside objects. In experiments, DeepVoting achieves significantly better performance than several baseline methods, including Faster-RCNN, for semantic part detection under occlusion. In addition, DeepVoting enjoys explainability as the detection results can be diagnosed via looking up the voting cues.
labels: cs.CV
__index_level_0__: 80,690
1206.6824
Gene Expression Time Course Clustering with Countably Infinite Hidden Markov Models
Most existing approaches to clustering gene expression time course data treat the different time points as independent dimensions and are invariant to permutations, such as reversal, of the experimental time course. Approaches utilizing HMMs have been shown to be helpful in this regard, but are hampered by having to choose model architectures with appropriate complexities. Here we propose for a clustering application an HMM with a countably infinite state space; inference in this model is possible by recasting it in the hierarchical Dirichlet process (HDP) framework (Teh et al. 2006), and hence we call it the HDP-HMM. We show that the infinite model outperforms model selection methods over finite models, and traditional time-independent methods, as measured by a variety of external and internal indices for clustering on two large publicly available data sets. Moreover, we show that the infinite models utilize more hidden states and employ richer architectures (e.g. state-to-state transitions) without the damaging effects of overfitting.
labels: cs.CE, cs.LG
__index_level_0__: 17,053
2404.02395
Optimal Batch Allocation for Wireless Federated Learning
Federated learning aims to construct a global model that fits the dataset distributed across local devices without direct access to private data, leveraging communication between a server and the local devices. In the context of a practical communication scheme, we study the completion time required to achieve a target performance. Specifically, we analyze the number of iterations required for federated learning to reach a specific optimality gap from a minimum global loss. Subsequently, we characterize the time required for each iteration under two fundamental multiple access schemes: time-division multiple access (TDMA) and random access (RA). We propose a step-wise batch allocation, demonstrated to be optimal for TDMA-based federated learning systems. Additionally, we show that the non-zero batch gap between devices provided by the proposed step-wise batch allocation significantly reduces the completion time for RA-based learning systems. Numerical evaluations validate these analytical results through real-data experiments, highlighting the remarkable potential for substantial completion time reduction.
labels: cs.LG, Other
__index_level_0__: 443,839
2209.01688
On the Privacy Risks of Cell-Based NAS Architectures
Existing studies on neural architecture search (NAS) mainly focus on efficiently and effectively searching for network architectures with better performance. Little progress has been made to systematically understand if the NAS-searched architectures are robust to privacy attacks while abundant work has already shown that human-designed architectures are prone to privacy attacks. In this paper, we fill this gap and systematically measure the privacy risks of NAS architectures. Leveraging the insights from our measurement study, we further explore the cell patterns of cell-based NAS architectures and evaluate how the cell patterns affect the privacy risks of NAS-searched architectures. Through extensive experiments, we shed light on how to design robust NAS architectures against privacy attacks, and also offer a general methodology to understand the hidden correlation between the NAS-searched architectures and other privacy risks.
labels: cs.LG, cs.CR
__index_level_0__: 315,981
2204.13806
Practical Considerations in Direct Detection Under Tukey Signalling
The deliberate introduction of controlled intersymbol interference (ISI) in Tukey signalling enables the recovery of signal amplitude and (in part) signal phase under direct detection, giving rise to significant data rate improvements compared to intensity modulation with direct detection (IMDD). The use of an integrate-and-dump detector makes precise waveform shaping unnecessary, thereby equipping the scheme with a high degree of robustness to nonlinear signal distortions introduced by practical modulators. Signal sequences drawn from star quadrature amplitude modulation (SQAM) formats admit an efficient trellis description that facilitates codebook design and low-complexity near maximum-likelihood sequence detection in the presence of both shot noise and thermal noise. Under the practical (though suboptimal) allocation of a 50% duty cycle between ISI-free and ISI-present signalling segments, at a symbol rate of 50 Gbaud and a launch power of -10 dBm the Tukey scheme has a maximum theoretically achievable throughput of 200 Gb/s with an (8,4)-SQAM constellation, while an IMDD scheme achieves about 145 Gb/s using PAM-8. Note that the two mentioned constellations have the same number of magnitude levels, and the difference in throughput results from exploiting phase information by using a complex-valued signal constellation.
labels: cs.IT
__index_level_0__: 293,941
1508.01059
Offline and Online Models of Budget Allocation for Maximizing Influence Spread
The research of influence propagation in social networks via word-of-mouth processes has been given considerable attention in recent years. Arguably, the most fundamental problem in this domain is influence maximization, where the goal is to identify a seed set of individuals that can trigger a large cascade of influence in the network. While there has been significant progress regarding this problem and its variants, one basic shortcoming of the models is that they lack the flexibility in the way the budget is allocated to individuals. Indeed, budget allocation is a critical issue in advertising and viral marketing. Taking the other point of view, known models allowing flexible budget allocation do not take into account the influence spread in the network. We introduce a generalized model that captures both budgets and influence propagation simultaneously. For the offline setting, we identify a large family of budget-based propagation functions that admit tight approximation guarantee. This family extends most of the previously studied influence models, including the well-known Triggering model. We establish that any function in this family implies an instance of a monotone submodular function maximization over the integer lattice subject to a knapsack constraint. This problem is known to admit an optimal (1-1/e)-approximation. We also study the price of anarchy of the multi-player game that extends the model and establish tight results. For the online setting, in which an unknown subset of agents arrive in a random order and the algorithm needs to make an irrevocable budget allocation in each step, we develop a 1/(15e)-competitive algorithm. This setting extends the secretary problem, and its variant, the submodular knapsack secretary problem. Notably, our algorithm improves over the best known approximation for the latter problem, even though it applies to a more general setting.
labels: cs.SI, Other
__index_level_0__: 45,746
1603.02488
Extracting Arabic Relations from the Web
The goal of this research is to extract a large list or table from named entities and relations in a specific domain. A small set of a handful of instance relations is required as input from the user. The system exploits summaries from Google search engine as a source text. These instances are used to extract patterns. The output is a set of new entities and their relations. The results from four experiments show that precision and recall varies according to relation type. Precision ranges from 0.61 to 0.75 while recall ranges from 0.71 to 0.83. The best result is obtained for (player, club) relationship, 0.72 and 0.83 for precision and recall respectively.
labels: cs.CL
__index_level_0__: 53,015
2303.06163
Category-Level Multi-Part Multi-Joint 3D Shape Assembly
Shape assembly composes complex shape geometries by arranging simple part geometries and has wide applications in autonomous robotic assembly and CAD modeling. Existing works focus on geometry reasoning and neglect the actual physical assembly process of matching and fitting joints, which are the contact surfaces connecting different parts. In this paper, we consider contacting joints for the task of multi-part assembly. A successful joint-optimized assembly needs to satisfy the bilateral objectives of shape structure and joint alignment. We propose a hierarchical graph learning approach composed of two levels of graph representation learning. The part graph takes part geometries as input to build the desired shape structure. The joint-level graph uses part-joint information and focuses on matching and aligning joints. The two kinds of information are combined to achieve the bilateral objectives. Extensive experiments demonstrate that our method outperforms previous methods, achieving better shape structure and higher joint alignment accuracy.
labels: cs.CV
__index_level_0__: 350,715
1802.03873
PRIL: Perceptron Ranking Using Interval Labeled Data
In this paper, we propose an online learning algorithm PRIL for learning ranking classifiers using interval labeled data and show its correctness. We show its convergence in finite number of steps if there exists an ideal classifier such that the rank given by it for an example always lies in its label interval. We then generalize this mistake bound result for the general case. We also provide regret bound for the proposed algorithm. We propose a multiplicative update algorithm for PRIL called M-PRIL. We provide its correctness and convergence results. We show the effectiveness of PRIL by showing its performance on various datasets.
labels: cs.LG
__index_level_0__: 90,085
1002.4063
Investigating modularity in the analysis of process algebra models of biochemical systems
Compositionality is a key feature of process algebras which is often cited as one of their advantages as a modelling technique. It is certainly true that in biochemical systems, as in many other systems, model construction is made easier in a formalism which allows the problem to be tackled compositionally. In this paper we consider the extent to which the compositional structure which is inherent in process algebra models of biochemical systems can be exploited during model solution. In essence this means using the compositional structure to guide decomposed solution and analysis. Unfortunately the dynamic behaviour of biochemical systems exhibits strong interdependencies between the components of the model making decomposed solution a difficult task. Nevertheless we believe that if such decomposition based on process algebras could be established it would demonstrate substantial benefits for systems biology modelling. In this paper we present our preliminary investigations based on a case study of the pheromone pathway in yeast, modelling in the stochastic process algebra Bio-PEPA.
labels: cs.CE
__index_level_0__: 5,757
1706.02999
Symmetry Learning for Function Approximation in Reinforcement Learning
In this paper we explore methods to exploit symmetries for ensuring sample efficiency in reinforcement learning (RL); this problem deserves ever-increasing attention given the recent advances in the use of deep networks for complex RL tasks, which require large amounts of training data. We introduce a novel method to detect symmetries using reward trails observed during episodic experience and prove its completeness. We also provide a framework to incorporate the discovered symmetries for functional approximation. Finally, we show that the use of potential-based reward shaping is especially effective for our symmetry exploitation mechanism. Experiments on various classical problems show that our method improves the learning performance significantly by utilizing symmetry information.
labels: cs.AI, cs.LG
__index_level_0__: 75,074
2108.12373
FAST-PCA: A Fast and Exact Algorithm for Distributed Principal Component Analysis
Principal Component Analysis (PCA) is a fundamental data preprocessing tool in the world of machine learning. While PCA is often thought of as a dimensionality reduction method, the purpose of PCA is actually two-fold: dimension reduction and uncorrelated feature learning. Furthermore, the enormity of the dimensions and sample size in the modern day datasets have rendered the centralized PCA solutions unusable. In that vein, this paper reconsiders the problem of PCA when data samples are distributed across nodes in an arbitrarily connected network. While a few solutions for distributed PCA exist, those either overlook the uncorrelated feature learning aspect of the PCA, tend to have high communication overhead that makes them inefficient and/or lack `exact' or `global' convergence guarantees. To overcome these aforementioned issues, this paper proposes a distributed PCA algorithm termed FAST-PCA (Fast and exAct diSTributed PCA). The proposed algorithm is efficient in terms of communication and is proven to converge linearly and exactly to the principal components, leading to dimension reduction as well as uncorrelated features. The claims are further supported by experimental results.
labels: cs.LG, Other
__index_level_0__: 252,480
2405.00402
Self-Refine Instruction-Tuning for Aligning Reasoning in Language Models
The alignments of reasoning abilities between smaller and larger Language Models are largely conducted via Supervised Fine-Tuning (SFT) using demonstrations generated from robust Large Language Models (LLMs). Although these approaches deliver more performant models, they do not show sufficiently strong generalization ability as the training only relies on the provided demonstrations. In this paper, we propose the Self-refine Instruction-tuning method that elicits Smaller Language Models to self-refine their abilities. Our approach is based on a two-stage process, where reasoning abilities are first transferred between LLMs and Small Language Models (SLMs) via Instruction-tuning on demonstrations provided by LLMs, and then the instructed models Self-refine their abilities through preference optimization strategies. In particular, the second phase operates refinement heuristics based on the Direct Preference Optimization algorithm, where the SLMs are elicited to deliver a series of reasoning paths by automatically sampling the generated responses and providing rewards using ground truths from the LLMs. Results obtained on commonsense and math reasoning tasks show that this approach significantly outperforms Instruction-tuning in both in-domain and out-domain scenarios, aligning the reasoning abilities of Smaller and Larger Language Models.
labels: cs.CL
__index_level_0__: 450,909
1811.08284
Feature exploration for almost zero-resource ASR-free keyword spotting using a multilingual bottleneck extractor and correspondence autoencoders
We compare features for dynamic time warping (DTW) when used to bootstrap keyword spotting (KWS) in an almost zero-resource setting. Such quickly-deployable systems aim to support United Nations (UN) humanitarian relief efforts in parts of Africa with severely under-resourced languages. Our objective is to identify acoustic features that provide acceptable KWS performance in such environments. As supervised resource, we restrict ourselves to a small, easily acquired and independently compiled set of isolated keywords. For feature extraction, a multilingual bottleneck feature (BNF) extractor, trained on well-resourced out-of-domain languages, is integrated with a correspondence autoencoder (CAE) trained on extremely sparse in-domain data. On their own, BNFs and CAE features are shown to achieve a more than 2% absolute performance improvement over baseline MFCCs. However, by using BNFs as input to the CAE, even better performance is achieved, with a more than 11% absolute improvement in ROC AUC over MFCCs and more than twice as many top-10 retrievals for two evaluated languages, English and Luganda. We conclude that integrating BNFs with the CAE allows both large out-of-domain and sparse in-domain resources to be exploited for improved ASR-free keyword spotting.
labels: cs.SD, cs.LG
__index_level_0__: 114,002
2105.14398
Learning Domain-Specialised Representations for Cross-Lingual Biomedical Entity Linking
Injecting external domain-specific knowledge (e.g., UMLS) into pretrained language models (LMs) advances their capability to handle specialised in-domain tasks such as biomedical entity linking (BEL). However, such abundant expert knowledge is available only for a handful of languages (e.g., English). In this work, by proposing a novel cross-lingual biomedical entity linking task (XL-BEL) and establishing a new XL-BEL benchmark spanning 10 typologically diverse languages, we first investigate the ability of standard knowledge-agnostic as well as knowledge-enhanced monolingual and multilingual LMs beyond the standard monolingual English BEL task. The scores indicate large gaps to English performance. We then address the challenge of transferring domain-specific knowledge in resource-rich languages to resource-poor ones. To this end, we propose and evaluate a series of cross-lingual transfer methods for the XL-BEL task, and demonstrate that general-domain bitext helps propagate the available English knowledge to languages with little to no in-domain data. Remarkably, we show that our proposed domain-specific transfer methods yield consistent gains across all target languages, sometimes up to 20 Precision@1 points, without any in-domain knowledge in the target language, and without any in-domain parallel data.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 237,655
2101.12338
Enabling Robots to Draw and Tell: Towards Visually Grounded Multimodal Description Generation
Socially competent robots should be equipped with the ability to perceive the world that surrounds them and communicate about it in a human-like manner. Representative skills that exhibit such ability include generating image descriptions and visually grounded referring expressions. In the NLG community, these generation tasks are largely investigated in non-interactive and language-only settings. However, in face-to-face interaction, humans often deploy multiple modalities to communicate, forming seamless integration of natural language, hand gestures and other modalities like sketches. To enable robots to describe what they perceive with speech and sketches/gestures, we propose to model the task of generating natural language together with free-hand sketches/hand gestures to describe visual scenes and real life objects, namely, visually-grounded multimodal description generation. In this paper, we discuss the challenges and evaluation metrics of the task, and how the task can benefit from progress recently made in the natural language processing and computer vision realms, where related topics such as visually grounded NLG, distributional semantics, and photo-based sketch generation have been extensively studied.
labels: cs.AI, cs.RO
__index_level_0__: 217,536
2305.17400
Query-Policy Misalignment in Preference-Based Reinforcement Learning
Preference-based reinforcement learning (PbRL) provides a natural way to align RL agents' behavior with human desired outcomes, but is often restrained by costly human feedback. To improve feedback efficiency, most existing PbRL methods focus on selecting queries to maximally improve the overall quality of the reward model, but counter-intuitively, we find that this may not necessarily lead to improved performance. To unravel this mystery, we identify a long-neglected issue in the query selection schemes of existing PbRL studies: Query-Policy Misalignment. We show that the seemingly informative queries selected to improve the overall quality of reward model actually may not align with RL agents' interests, thus offering little help on policy learning and eventually resulting in poor feedback efficiency. We show that this issue can be effectively addressed via near on-policy query and a specially designed hybrid experience replay, which together enforce the bidirectional query-policy alignment. Simple yet elegant, our method can be easily incorporated into existing approaches by changing only a few lines of code. We showcase in comprehensive experiments that our method achieves substantial gains in both human feedback and RL sample efficiency, demonstrating the importance of addressing query-policy misalignment in PbRL tasks.
labels: cs.LG
__index_level_0__: 368,569
2410.20375
On performance bounds for topology optimization
Topology optimization has matured to become a powerful engineering design tool that is capable of designing extraordinary structures and materials taking into account various physical phenomena. Despite the method's great advancements in recent years, several unanswered questions remain. This paper takes a step towards answering one of the larger questions, namely: How far from the global optimum is a given topology optimized design? Typically this is a hard question to answer, as almost all interesting topology optimization problems are non-convex. Unfortunately, this non-convexity implies that local minima may plague the design space, resulting in optimizers ending up in suboptimal designs. In this work, we investigate performance bounds for topology optimization via a computational framework that utilizes Lagrange duality theory. This approach provides a viable measure of how \say{close} a given design is to the global optimum for a subset of optimization formulations. The method's capabilities are exemplified via several numerical examples, including the design of mode converters and resonating plates.
labels: cs.CE
__index_level_0__: 502,793
1908.02392
Moving-Target Defense for Detecting Coordinated Cyber-Physical Attacks in Power Grids
This work proposes a moving target defense (MTD) strategy to detect coordinated cyber-physical attacks (CCPAs) against power grids. A CCPA consists of a physical attack, such as disconnecting a transmission line, followed by a coordinated cyber attack that injects false data into the sensor measurements to mask the effects of the physical attack. Such attacks can lead to undetectable line outages and cause significant damage to the grid. The main idea of the proposed approach is to invalidate the knowledge that the attackers use to mask the effects of the physical attack by actively perturbing the grid's transmission line reactances using distributed flexible AC transmission system (D-FACTS) devices. We identify the MTD design criteria in this context to thwart CCPAs. The proposed MTD design consists of two parts. First, we identify the subset of links for D-FACTS device deployment that enables the defender to detect CCPAs against any link in the system. Then, in order to minimize the defense cost during the system's operational time, we use a game-theoretic approach to identify the best subset of links (within the D-FACTS deployment set) to perturb which will provide adequate protection. Extensive simulations performed using the MATPOWER simulator on IEEE bus systems verify the effectiveness of our approach in detecting CCPAs and reducing the operator's defense cost.
labels: cs.IT, cs.SY, cs.CR
__index_level_0__: 140,987
2307.01909
ClimateLearn: Benchmarking Machine Learning for Weather and Climate Modeling
Modeling weather and climate is an essential endeavor to understand the near- and long-term impacts of climate change, as well as inform technology and policymaking for adaptation and mitigation efforts. In recent years, there has been a surging interest in applying data-driven methods based on machine learning for solving core problems such as weather forecasting and climate downscaling. Despite promising results, much of this progress has been impaired due to the lack of large-scale, open-source efforts for reproducibility, resulting in the use of inconsistent or underspecified datasets, training setups, and evaluations by both domain scientists and artificial intelligence researchers. We introduce ClimateLearn, an open-source PyTorch library that vastly simplifies the training and evaluation of machine learning models for data-driven climate science. ClimateLearn consists of holistic pipelines for dataset processing (e.g., ERA5, CMIP6, PRISM), implementation of state-of-the-art deep learning models (e.g., Transformers, ResNets), and quantitative and qualitative evaluation for standard weather and climate modeling tasks. We supplement these functionalities with extensive documentation, contribution guides, and quickstart tutorials to expand access and promote community growth. We have also performed comprehensive forecasting and downscaling experiments to showcase the capabilities and key features of our library. To our knowledge, ClimateLearn is the first large-scale, open-source effort for bridging research in weather and climate modeling with modern machine learning systems. Our library is available publicly at https://github.com/aditya-grover/climate-learn.
labels: cs.AI, cs.LG
__index_level_0__: 377,507
2002.09169
Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework
Randomized classifiers have been shown to provide a promising approach for achieving certified robustness against adversarial attacks in deep learning. However, most existing methods only leverage Gaussian smoothing noise and only work for $\ell_2$ perturbation. We propose a general framework of adversarial certification with non-Gaussian noise and for more general types of attacks, from a unified functional optimization perspective. Our new framework allows us to identify a key trade-off between accuracy and robustness via designing smoothing distributions, helping to design new families of non-Gaussian smoothing distributions that work more efficiently for different $\ell_p$ settings, including $\ell_1$, $\ell_2$ and $\ell_\infty$ attacks. Our proposed methods achieve better certification results than previous works and provide a new perspective on randomized smoothing certification.
labels: cs.LG, cs.CR
__index_level_0__: 164,995
2211.01964
Combining Contrastive and Non-Contrastive Losses for Fine-Tuning Pretrained Models in Speech Analysis
Embedding paralinguistic properties is a challenging task as there are only a few hours of training data available for domains such as emotional speech. One solution to this problem is to pretrain a general self-supervised speech representation model on large amounts of unlabeled speech. This pretrained model is then finetuned to a specific task. Paralinguistic properties however have notoriously high class variance, making the finetuning ineffective. In this work, we propose a two step approach to this. First we improve the embedding space, then we train an adapter to bridge the gap from the embedding space to a classification task. In order to improve the class invariance we use a combination of contrastive and non-contrastive losses to explicitly optimize for class invariant, yet discriminative features. Our approach consistently outperforms baselines that are finetuned end-to-end on multiple tasks and surpasses a benchmark on state-of-the-art emotion classification.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
328,425
2408.07286
The Design of Autonomous UAV Prototypes for Inspecting Tunnel Construction Environment
This article presents novel designs of autonomous UAV prototypes specifically developed for inspecting GPS-denied tunnel construction environments with dynamic human and robotic presence. Our UAVs integrate advanced sensor suites and robust motion planning algorithms to autonomously navigate and explore these complex environments. We validated our approach through comprehensive simulation experiments in PX4 Gazebo and Airsim Unreal Engine 4 environments. Real-world wind tests and exploration experiments demonstrate the UAVs' capability to operate stably under diverse environmental conditions without GPS assistance. This study highlights the practicality and resilience of our UAV prototypes in real-world applications.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
480,527
1810.08534
MsCGAN: Multi-scale Conditional Generative Adversarial Networks for Person Image Generation
Synthesizing high-quality person images with arbitrary poses is challenging. In this paper, we propose a novel Multi-scale Conditional Generative Adversarial Networks (MsCGAN), aiming to convert the input conditional person image to a synthetic image of any given target pose, whose appearance and texture are consistent with the input image. MsCGAN is a multi-scale adversarial network consisting of two generators and two discriminators. One generator transforms the conditional person image into a coarse image of the target pose globally, while the other enhances the detailed quality of the synthetic person image through a local reinforcement network. The outputs of the two generators are then merged into a synthetic, discriminant and high-resolution image. On the other hand, the synthetic image is downsampled to multiple resolutions as the input to multi-scale discriminator networks. The proposed multi-scale generators and discriminators, handling different levels of visual features, benefit the synthesis of high-resolution person images with realistic appearance and texture. Experiments are conducted on the Market-1501 and DeepFashion datasets to evaluate the proposed model, and both qualitative and quantitative results demonstrate the superior performance of the proposed MsCGAN.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
110,841
2112.03271
Adapting BERT for Continual Learning of a Sequence of Aspect Sentiment Classification Tasks
This paper studies continual learning (CL) of a sequence of aspect sentiment classification (ASC) tasks. Although some CL techniques have been proposed for document sentiment classification, we are not aware of any CL work on ASC. A CL system that incrementally learns a sequence of ASC tasks should address the following two issues: (1) transfer knowledge learned from previous tasks to the new task to help it learn a better model, and (2) maintain the performance of the models for previous tasks so that they are not forgotten. This paper proposes a novel capsule network based model called B-CL to address these issues. B-CL markedly improves the ASC performance on both the new task and the old tasks via forward and backward knowledge transfer. The effectiveness of B-CL is demonstrated through extensive experiments.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
true
false
false
270,147
1809.06432
Node Classification for Signed Social Networks Using Diffuse Interface Methods
Signed networks contain both positive and negative kinds of interactions like friendship and enmity. The task of node classification in non-signed graphs has proven to be beneficial in many real world applications, yet extensions to signed networks remain largely unexplored. In this paper we introduce the first analysis of node classification in signed social networks via diffuse interface methods based on the Ginzburg-Landau functional together with different extensions of the graph Laplacian to signed networks. We show that blending the information from both positive and negative interactions leads to performance improvement in real signed social networks, consistently outperforming the current state of the art.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
true
108,056
2403.00278
Shifted Interpolation for Differential Privacy
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning. It is a fundamental question to quantify their privacy leakage, yet tight characterizations remain open even in the foundational setting of convex losses. This paper improves over previous analyses by establishing (and refining) the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy--which tightly captures all aspects of the privacy loss and immediately implies tighter privacy accounting in other notions of differential privacy, e.g., $(\varepsilon,\delta)$-DP and R\'enyi DP. Our key technical insight is the construction of shifted interpolated processes that unravel the popular shifted-divergences argument, enabling generalizations beyond divergence-based relaxations of DP. Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization. Our techniques extend to many settings: convex/strongly convex, constrained/unconstrained, full/cyclic/stochastic batches, and all combinations thereof. As an immediate corollary, we recover the $f$-DP characterization of the exponential mechanism for strongly convex optimization in Gopi et al. (2022), and moreover extend this result to more general settings.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
433,928
2004.00909
Learning Representations For Images With Hierarchical Labels
Image classification has been studied extensively but there has been limited work in the direction of using non-conventional, external guidance other than traditional image-label pairs to train such models. In this thesis we present a set of methods to leverage information about the semantic hierarchy induced by class labels. In the first part of the thesis, we inject label-hierarchy knowledge to an arbitrary classifier and empirically show that availability of such external semantic information in conjunction with the visual semantics from images boosts overall performance. Taking a step further in this direction, we model more explicitly the label-label and label-image interactions by using order-preserving embedding-based models, prevalent in natural language, and tailor them to the domain of computer vision to perform image classification. Although, contrasting in nature, both the CNN-classifiers injected with hierarchical information, and the embedding-based models outperform a hierarchy-agnostic model on the newly presented, real-world ETH Entomological Collection image dataset https://www.research-collection.ethz.ch/handle/20.500.11850/365379.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
170,772
2112.08050
Exploring the Asynchronous of the Frequency Spectra of GAN-generated Facial Images
The rapid progression of Generative Adversarial Networks (GANs) has raised concerns about their misuse for malicious purposes, especially in creating fake face images. Although many proposed methods succeed in detecting GAN-based synthetic images, they are still limited by the need for large quantities of training fake images and by challenges to the detector's generalizability to unknown facial images. In this paper, we propose a new approach that explores the asynchronous frequency spectra of color channels, which is simple but effective for training both unsupervised and supervised learning models to distinguish GAN-based synthetic images. We further investigate the transferability of a model that learns from our suggested features in one source domain and validate it on other target domains with prior knowledge of the features' distribution. Our experimental results show that the discrepancy of spectra in the frequency domain is a practical artifact to effectively detect various types of GAN-based generated images.
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
271,679
2203.00314
VScript: Controllable Script Generation with Visual Presentation
In order to offer a customized script tool and inspire professional scriptwriters, we present VScript, a controllable pipeline that generates complete scripts, including dialogues and scene descriptions, and presents them visually using video retrieval. With an interactive interface, our system allows users to select genres and input starting words that control the theme and development of the generated script. We adopt a hierarchical structure, which first generates the plot, then the script and its visual presentation. A novel approach is also introduced for plot-guided dialogue generation by treating it as inverse dialogue summarization. The experiment results show that our approach outperforms the baselines on both automatic and human evaluations, especially in genre control.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
282,954
2008.07796
A Hierarchical User Intention-Habit Extract Network for Credit Loan Overdue Risk Detection
More and more personal consumer loan products are emerging in mobile banking apps. For ease of use, the application process is kept simple, which means that little information is requested from the user when applying for a loan; this is not conducive to constructing users' credit profiles. Thus, the simple application process brings huge challenges to overdue risk detection, as a higher overdue rate results in greater economic losses for the bank. In this paper, we propose a model named HUIHEN (Hierarchical User Intention-Habit Extract Network) that leverages users' behavior information in the mobile banking app. Due to the diversity of users' behaviors, we divide behavior sequences into sessions according to the time interval, and use a field-aware method to extract the intra-field information of behaviors. Then, we propose a hierarchical network composed of a time-aware GRU and a user-item-aware GRU to capture users' short-term intentions and long-term habits, which can be regarded as a supplement to the user profile. The proposed model improves accuracy without increasing the complexity of the original online application process. Experimental results demonstrate the superiority of HUIHEN and show that it outperforms other state-of-the-art models on all datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
192,226
1905.08545
Contrast Enhancement of Medical X-Ray Image Using Morphological Operators with Optimal Structuring Element
X-ray images have been used by physicians in every modern healthcare organization and hospital to guide surgical and medical treatment. Physicians' evaluation and disease identification in the skeletal system can be performed faster and more efficiently with the help of X-ray imaging, as it can depict bone structure painlessly. This paper presents an efficient contrast enhancement technique using morphological operators that helps to visualize important bone segments and soft tissues more clearly. Top-hat and bottom-hat transforms are utilized to enhance the image, where the gradient magnitude value is calculated for automatically selecting the structuring element (SE) size. Experimental evaluation on different X-ray imaging databases shows the effectiveness of our method, which also produces comparatively better output than some existing image enhancement techniques.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
131,496
2107.01170
Computing Fuzzy Rough Set based Similarities with Fuzzy Inference and Its Application to Sentence Similarity Computations
Several research initiatives have been proposed for computing the similarity between two fuzzy sets through Fuzzy Rough Sets. These techniques yield two measures, viz. lower similarity and upper similarity, while in most applications only a single value is useful for further analysis and for drawing conclusions. The aim of this paper is to propose a novel technique to combine Fuzzy Rough Set based lower similarity and upper similarity using a Fuzzy Inference Engine. Further, the proposed approach is applied to the problem of computing sentence similarity and evaluated on the SICK2014 dataset.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
244,392
1506.03495
BoWFire: Detection of Fire in Still Images by Integrating Pixel Color and Texture Analysis
Emergency events involving fire are potentially harmful, demanding fast and precise decision making. The use of crowdsourced images and videos in crisis management systems can aid in these situations by providing more information than verbal/textual descriptions. Due to the usual high volume of data, automatic solutions need to discard non-relevant content without losing relevant information. There are several methods for fire detection in video using color-based models. However, they are not adequate for still image processing, because they can suffer from high false-positive rates. These methods also rely on parameters with little physical meaning, which makes fine tuning a difficult task. In this context, we propose a novel fire detection method for still images that uses classification based on color features combined with texture classification on superpixel regions. Our method uses a reduced number of parameters compared to previous works, easing the process of fine tuning the method. Results show the effectiveness of our method in reducing false positives while its precision remains compatible with the state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
44,059
2410.12419
Mind the Context: Attention-Guided Weak-to-Strong Consistency for Enhanced Semi-Supervised Medical Image Segmentation
Medical image segmentation is a pivotal step in diagnostic and therapeutic processes, relying on high-quality annotated data that is often challenging and costly to obtain. Semi-supervised learning offers a promising approach to enhance model performance by leveraging unlabeled data. Although weak-to-strong consistency is a prevalent method in semi-supervised image segmentation, there is a scarcity of research on perturbation strategies specifically tailored for semi-supervised medical image segmentation tasks. To address this challenge, this paper introduces a simple yet efficient semi-supervised learning framework named Attention-Guided weak-to-strong Consistency Match (AIGCMatch). The AIGCMatch framework incorporates attention-guided perturbation strategies at both the image and feature levels to achieve weak-to-strong consistency regularization. This method not only preserves the structural information of medical images but also enhances the model's ability to process complex semantic information. Extensive experiments conducted on the ACDC and ISIC-2017 datasets have validated the effectiveness of AIGCMatch. Our method achieved a 90.4\% Dice score in the 7-case scenario on the ACDC dataset, surpassing the state-of-the-art methods and demonstrating its potential and efficacy in clinical settings. Additionally, on the ISIC-2017 dataset, we significantly outperformed our baseline, indicating the robustness and generalizability of AIGCMatch across different medical image segmentation tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
499,021
2412.21030
Improving Location-based Thermal Emission Side-Channel Analysis Using Iterative Transfer Learning
This paper proposes the use of iterative transfer learning applied to deep learning models for side-channel attacks. Currently, most of the side-channel attack methods train a model for each individual byte, without considering the correlation between bytes. However, since the models' parameters for attacking different bytes may be similar, we can leverage transfer learning, meaning that we first train the model for one of the key bytes, then use the trained model as a pretrained model for the remaining bytes. This technique can be applied iteratively, a process known as iterative transfer learning. Experimental results show that when using thermal or power consumption map images as input, and multilayer perceptron or convolutional neural network as the model, our method improves average performance, especially when the amount of data is insufficient.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
521,434
1011.0415
Learning Networks of Stochastic Differential Equations
We consider linear models for stochastic dynamics. To any such model can be associated a network (namely a directed graph) describing which degrees of freedom interact under the dynamics. We tackle the problem of learning such a network from observation of the system trajectory over a time interval $T$. We analyze the $\ell_1$-regularized least squares algorithm and, in the setting in which the underlying network is sparse, we prove performance guarantees that are \emph{uniform in the sampling rate} as long as this is sufficiently high. This result substantiates the notion of a well defined `time complexity' for the network inference problem.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
8,099
2005.09261
Adaptive First-and Zeroth-order Methods for Weakly Convex Stochastic Optimization Problems
In this paper, we design and analyze a new family of adaptive subgradient methods for solving an important class of weakly convex (possibly nonsmooth) stochastic optimization problems. Adaptive methods that use exponential moving averages of past gradients to update search directions and learning rates have recently attracted a lot of attention for solving optimization problems that arise in machine learning. Nevertheless, their convergence analysis almost exclusively requires smoothness and/or convexity of the objective function. In contrast, we establish non-asymptotic rates of convergence of first and zeroth-order adaptive methods and their proximal variants for a reasonably broad class of nonsmooth \& nonconvex optimization problems. Experimental results indicate how the proposed algorithms empirically outperform stochastic gradient descent and its zeroth-order variant for solving such optimization problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,879
1812.02308
On the Inductive Bias of Word-Character-Level Multi-Task Learning for Speech Recognition
End-to-end automatic speech recognition (ASR) commonly transcribes audio signals into sequences of characters while its performance is evaluated by measuring the word-error rate (WER). This suggests that predicting sequences of words directly may be helpful instead. However, training with word-level supervision can be more difficult due to the sparsity of examples per label class. In this paper we analyze an end-to-end ASR model that combines a word-and-character representation in a multi-task learning (MTL) framework. We show that it improves on the WER and study how the word-level model can benefit from character-level supervision by analyzing the learned inductive preference bias of each model component empirically. We find that by adding character-level supervision, the MTL model interpolates between recognizing more frequent words (preferred by the word-level model) and shorter words (preferred by the character-level model).
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
115,725
2402.07859
Lissard: Long and Simple Sequential Reasoning Datasets
Language models are now capable of solving tasks that require dealing with long sequences consisting of hundreds of thousands of tokens. However, they often fail on tasks that require repetitive use of simple rules, even on sequences that are much shorter than those seen during training. For example, state-of-the-art LLMs can find common items in two lists with up to 20 items but fail when lists have 80 items. In this paper, we introduce Lissard, a benchmark comprising seven tasks whose goal is to assess the ability of models to process and generate wide-range sequence lengths, requiring repetitive procedural execution. Our evaluation of open-source (Mistral-7B and Mixtral-8x7B) and proprietary models (GPT-3.5 and GPT-4) show a consistent decline in performance across all models as the complexity of the sequence increases. The datasets and code are available at https://github.com/unicamp-dl/Lissard
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
428,870
2312.17330
Count What You Want: Exemplar Identification and Few-shot Counting of Human Actions in the Wild
This paper addresses the task of counting human actions of interest using sensor data from wearable devices. We propose a novel exemplar-based framework, allowing users to provide exemplars of the actions they want to count by vocalizing predefined sounds ''one'', ''two'', and ''three''. Our method first localizes temporal positions of these utterances from the audio sequence. These positions serve as the basis for identifying exemplars representing the action class of interest. A similarity map is then computed between the exemplars and the entire sensor data sequence, which is further fed into a density estimation module to generate a sequence of estimated density values. Summing these density values provides the final count. To develop and evaluate our approach, we introduce a diverse and realistic dataset consisting of real-world data from 37 subjects and 50 action categories, encompassing both sensor and audio data. The experiments on this dataset demonstrate the viability of the proposed method in counting instances of actions from new classes and subjects that were not part of the training data. On average, the discrepancy between the predicted count and the ground truth value is 7.47, significantly lower than the errors of the frequency-based and transformer-based methods. Our project, code and dataset can be found at https://github.com/cvlab-stonybrook/ExRAC.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
418,711
2405.17424
LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence
Recent embodied agents are primarily built based on reinforcement learning (RL) or large language models (LLMs). Among them, RL agents are efficient for deployment but only perform very few tasks. By contrast, giant LLM agents (often more than 1000B parameters) present strong generalization while demanding enormous computing resources. In this work, we combine their advantages while avoiding the drawbacks by conducting the proposed referee RL on our developed large auto-regressive model (LARM). Specifically, LARM is built upon a lightweight LLM (fewer than 5B parameters) and directly outputs the next action to execute rather than text. We mathematically reveal that classic RL feedbacks vanish in long-horizon embodied exploration and introduce a giant LLM based referee to handle this reward vanishment during training LARM. In this way, LARM learns to complete diverse open-world tasks without human intervention. Especially, LARM successfully harvests enchanted diamond equipment in Minecraft, which demands significantly longer decision-making chains than the highest achievements of prior best methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
457,908
2210.02622
Training Diverse High-Dimensional Controllers by Scaling Covariance Matrix Adaptation MAP-Annealing
Pre-training a diverse set of neural network controllers in simulation has enabled robots to adapt online to damage in robot locomotion tasks. However, finding diverse, high-performing controllers requires expensive network training and extensive tuning of a large number of hyperparameters. On the other hand, Covariance Matrix Adaptation MAP-Annealing (CMA-MAE), an evolution strategies (ES)-based quality diversity algorithm, does not have these limitations and has achieved state-of-the-art performance on standard QD benchmarks. However, CMA-MAE cannot scale to modern neural network controllers due to its quadratic complexity. We leverage efficient approximation methods in ES to propose three new CMA-MAE variants that scale to high dimensions. Our experiments show that the variants outperform ES-based baselines in benchmark robotic locomotion tasks, while being comparable with or exceeding state-of-the-art deep reinforcement learning-based quality diversity algorithms.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
true
false
false
321,714
2501.09026
Intelligent Anti-Money Laundering Solution Based upon Novel Community Detection in Massive Transaction Networks on Spark
Criminals are using every means available to launder the profits from their illegal activities into ostensibly legitimate assets. Meanwhile, most commercial anti-money laundering systems are still rule-based and cannot adapt to ever-changing tricks. Although some machine learning methods have been proposed, they mainly focus on abnormal behavior of single accounts. Since money laundering activities often involve criminal gangs, these methods are still not intelligent enough to crack down on such gangs comprehensively. In this paper, a systematic solution is presented to find suspicious money laundering gangs. A temporal-directed Louvain algorithm is proposed to detect communities according to relevant anti-money laundering patterns. All processes are implemented and optimized on the Spark platform. This solution can greatly improve the efficiency of anti-money laundering work for financial regulation agencies.
false
false
false
true
true
false
false
false
false
false
false
false
false
true
false
false
false
false
524,983
2010.14534
Unmasking Contextual Stereotypes: Measuring and Mitigating BERT's Gender Bias
Contextualized word embeddings have been replacing standard embeddings as the representational knowledge source of choice in NLP systems. Since a variety of biases have previously been found in standard word embeddings, it is crucial to assess biases encoded in their replacements as well. Focusing on BERT (Devlin et al., 2018), we measure gender bias by studying associations between gender-denoting target words and names of professions in English and German, comparing the findings with real-world workforce statistics. We mitigate bias by fine-tuning BERT on the GAP corpus (Webster et al., 2018), after applying Counterfactual Data Substitution (CDS) (Maudslay et al., 2019). We show that our method of measuring bias is appropriate for languages such as English, but not for languages with a rich morphology and gender-marking, such as German. Our results highlight the importance of investigating bias and mitigation techniques cross-linguistically, especially in view of the current emphasis on large-scale, multilingual language models.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
203,473
2102.10681
Probabilistic Vehicle Reconstruction Using a Multi-Task CNN
The retrieval of the 3D pose and shape of objects from images is an ill-posed problem. A common way to object reconstruction is to match entities such as keypoints, edges, or contours of a deformable 3D model, used as shape prior, to their corresponding entities inferred from the image. However, such approaches are highly sensitive to model initialisation, imprecise keypoint localisations and/or illumination conditions. In this paper, we present a probabilistic approach for shape-aware 3D vehicle reconstruction from stereo images that leverages the outputs of a novel multi-task CNN. Specifically, we train a CNN that outputs probability distributions for the vehicle's orientation and for both, vehicle keypoints and wireframe edges. Together with 3D stereo information we integrate the predicted distributions into a common probabilistic framework. We believe that the CNN-based detection of wireframe edges reduces the sensitivity to illumination conditions and object contrast and that using the raw probability maps instead of inferring keypoint positions reduces the sensitivity to keypoint localisation errors. We show that our method achieves state-of-the-art results, evaluating our method on the challenging KITTI benchmark and on our own new 'Stereo-Vehicle' dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
221,174
1912.04977
Advances and Open Problems in Federated Learning
Federated learning (FL) is a machine learning setting where many clients (e.g. mobile devices or whole organizations) collaboratively train a model under the orchestration of a central server (e.g. service provider), while keeping the training data decentralized. FL embodies the principles of focused data collection and minimization, and can mitigate many of the systemic privacy risks and costs resulting from traditional, centralized machine learning and data science approaches. Motivated by the explosive growth in FL research, this paper discusses recent advances and presents an extensive collection of open problems and challenges.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
156,960
1604.02083
Some remarks on wheeled autonomous vehicles and the evolution of their control design
Recent investigations on the longitudinal and lateral control of wheeled autonomous vehicles are reported. Flatness-based techniques are first introduced via a simplified model. It depends on some physical parameters, like cornering stiffness coefficients of the tires, friction coefficient of the road, ..., which are notoriously difficult to identify. Then a model-free control strategy, which exploits the flat outputs, is proposed. Those outputs also depend on physical parameters which are poorly known, i.e., the vehicle mass and inertia and the position of the center of gravity. A totally model-free control law is therefore adopted. It employs natural output variables, namely the longitudinal velocity and the lateral deviation of the vehicle. This last method, which is easily understandable and implementable, ensures robust trajectory tracking in both longitudinal and lateral dynamics. Several convincing computer simulations are displayed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
54,287
2403.05878
Frequency Domain Auto-tuning of Structured LPV Controllers for High-Precision Motion Control
Motion systems are a vital part of many industrial processes. However, meeting the increasingly stringent demands of these systems, especially concerning precision and throughput, requires novel control design methods that can go beyond the capabilities of traditional solutions. Traditional control methods often struggle with the complexity and position-dependent effects inherent in modern motion systems, leading to compromises in performance and a laborious task of controller design. This paper addresses these challenges by introducing a novel structured feedback control auto-tuning approach for multiple-input multiple-output (MIMO) motion systems. By leveraging frequency response function (FRF) estimates and the linear-parameter-varying (LPV) control framework, the proposed approach automates the controller design, while providing local stability and performance guarantees. Key innovations include norm-based magnitude optimization of the sensitivity functions, an automated stability check through a novel extended factorized Nyquist criterion, a modular structured MIMO LPV controller parameterization, and a controller discretization approach which preserves the continuous-time (CT) controller parameterization. The proposed approach is validated through experiments using a state-of-the-art moving-magnet planar actuator prototype.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
436,199
2307.00238
Unified Transfer Learning Models in High-Dimensional Linear Regression
Transfer learning plays a key role in modern data analysis when: (1) the target data are scarce but the source data are sufficient; (2) the distributions of the source and target data are heterogeneous. This paper develops an interpretable unified transfer learning model, termed as UTrans, which can detect both transferable variables and source data. More specifically, we establish the estimation error bounds and prove that our bounds are lower than those with target data only. Besides, we propose a source detection algorithm based on hypothesis testing to exclude the nontransferable data. We evaluate and compare UTrans to the existing algorithms in multiple experiments. It is shown that UTrans attains much lower estimation and prediction errors than the existing methods, while preserving interpretability. We finally apply it to the US intergenerational mobility data and compare our proposed algorithms to the classical machine learning algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
376,928
2004.05512
Reinforcement Learning via Reasoning from Demonstration
Demonstration is an appealing way for humans to provide assistance to reinforcement-learning agents. Most approaches in this area view demonstrations primarily as sources of behavioral bias. But in sparse-reward tasks, humans seem to treat demonstrations more as sources of causal knowledge. This paper proposes a framework for agents that benefit from demonstration in this human-inspired way. In this framework, agents develop causal models through observation, and reason from this knowledge to decompose tasks for effective reinforcement learning. Experimental results show that a basic implementation of Reasoning from Demonstration (RfD) is effective in a range of sparse-reward tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
172,218
2401.06362
Attention, Distillation, and Tabularization: Towards Practical Neural Network-Based Prefetching
Attention-based Neural Networks (NN) have demonstrated their effectiveness in accurate memory access prediction, an essential step in data prefetching. However, the substantial computational overheads associated with these models result in high inference latency, limiting their feasibility as practical prefetchers. To close the gap, we propose a new approach based on tabularization that significantly reduces model complexity and inference latency without sacrificing prediction accuracy. Our novel tabularization methodology takes as input a distilled, yet highly accurate attention-based model for memory access prediction and efficiently converts its expensive matrix multiplications into a hierarchy of fast table lookups. As an exemplar of the above approach, we develop DART, a prefetcher comprised of a simple hierarchy of tables. With a modest 0.09 drop in F1-score, DART reduces 99.99% of arithmetic operations from the large attention-based model and 91.83% from the distilled model. DART accelerates the large model inference by 170x and the distilled model by 9.4x. DART has comparable latency and storage costs as state-of-the-art rule-based prefetcher BO but surpasses it by 6.1% in IPC improvement. DART outperforms state-of-the-art NN-based prefetchers TransFetch by 33.1% and Voyager by 37.2% in terms of IPC improvement, primarily due to its low prefetching latency.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
true
421,130
2305.18279
Contextual Object Detection with Multimodal Large Language Models
Recent Multimodal Large Language Models (MLLMs) are remarkable in vision-language tasks, such as image captioning and question answering, but lack the essential perception ability, i.e., object detection. In this work, we address this limitation by introducing a novel research problem of contextual object detection -- understanding visible objects within different human-AI interactive contexts. Three representative scenarios are investigated, including the language cloze test, visual captioning, and question answering. Moreover, we present ContextDET, a unified multimodal model that is capable of end-to-end differentiable modeling of visual-language contexts, so as to locate, identify, and associate visual objects with language inputs for human-AI interaction. Our ContextDET involves three key submodels: (i) a visual encoder for extracting visual representations, (ii) a pre-trained LLM for multimodal context decoding, and (iii) a visual decoder for predicting bounding boxes given contextual object words. The new generate-then-detect framework enables us to detect object words within human vocabulary. Extensive experiments show the advantages of ContextDET on our proposed CODE benchmark, open-vocabulary detection, and referring image segmentation. Github: https://github.com/yuhangzang/ContextDET.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
368,934
2403.04643
QAQ: Quality Adaptive Quantization for LLM KV Cache
The emergence of LLMs has ignited a fresh surge of breakthroughs in NLP applications, particularly in domains such as question-answering systems and text generation. As the need for longer context grows, a significant bottleneck in model deployment emerges due to the linear expansion of the Key-Value (KV) cache with the context length. Existing methods primarily rely on various hypotheses, such as sorting the KV cache based on attention scores for replacement or eviction, to compress the KV cache and improve model throughput. However, heuristics used by these strategies may wrongly evict essential KV cache, which can significantly degrade model performance. In this paper, we propose QAQ, a Quality Adaptive Quantization scheme for the KV cache. We theoretically demonstrate that key cache and value cache exhibit distinct sensitivities to quantization, leading to the formulation of separate quantization strategies for their non-uniform quantization. Through the integration of dedicated outlier handling, as well as an improved attention-aware approach, QAQ achieves up to 10x the compression ratio of the KV cache size with a negligible impact on model performance. QAQ significantly reduces the practical hurdles of deploying LLMs, opening up new possibilities for longer-context applications. The code is available at github.com/ClubieDong/KVCacheQuantization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
435,674
2109.09323
A Shadowcasting-Based Next-Best-View Planner for Autonomous 3D Exploration
In this paper, we address the problem of autonomous exploration of unknown environments with an aerial robot equipped with a sensory set that produces large point clouds, such as LiDARs. The main goal is to gradually explore an area while planning paths and calculating information gain in short computation time, suitable for implementation on an on-board computer. To this end, we present a planner that randomly samples viewpoints in the environment map. It relies on a novel and efficient gain calculation based on the Recursive Shadowcasting algorithm. To determine the Next-Best-View (NBV), our planner uses a cuboid-based evaluation method that results in an enviably short computation time. To reduce the overall exploration time, we also use a dead end resolving strategy that allows us to quickly recover from dead ends in a challenging environment. Comparative experiments in simulation have shown that our approach outperforms the current state-of-the-art in terms of computational efficiency and total exploration time. The video of our approach can be found at https://www.youtube.com/playlist?list=PLC0C6uwoEQ8ZDhny1VdmFXLeTQOSBibQl.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
256,236
0809.0271
Randomised Variable Neighbourhood Search for Multi Objective Optimisation
Various local search approaches have recently been applied to machine scheduling problems under multiple objectives. Their foremost consideration is the identification of the set of Pareto optimal alternatives. An important aspect of successfully solving these problems lies in the definition of an appropriate neighbourhood structure. It remains unclear in this context how interdependencies within the fitness landscape affect the resolution of the problem. The paper presents a study of neighbourhood search operators for multiple objective flow shop scheduling. Experiments have been carried out with twelve different combinations of criteria. To derive exact conclusions, small problem instances, for which the optimal solutions are known, have been chosen. Statistical tests show that no single neighbourhood operator is able to equally identify all Pareto optimal alternatives. Significant improvements however have been obtained by hybridising the solution algorithm using a randomised variable neighbourhood search technique.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
2,259
2408.04054
PLANRL: A Motion Planning and Imitation Learning Framework to Bootstrap Reinforcement Learning
Reinforcement Learning (RL) has shown remarkable progress in simulation environments, yet its application to real-world robotic tasks remains limited due to challenges in exploration and generalization. To address these issues, we introduce PLANRL, a framework that chooses when the robot should use classical motion planning and when it should learn a policy. To further improve the efficiency of exploration, we use imitation data to bootstrap the exploration. PLANRL dynamically switches between two modes of operation: reaching a waypoint using classical techniques when away from the objects, and reinforcement learning for fine-grained manipulation control when about to interact with objects. The PLANRL architecture is composed of ModeNet for mode classification, NavNet for waypoint prediction, and InteractNet for precise manipulation. By combining the strengths of RL and Imitation Learning (IL), PLANRL improves sample efficiency and mitigates distribution shift, ensuring robust task execution. We evaluate our approach across multiple challenging simulation environments and real-world tasks, demonstrating superior performance in terms of adaptability, efficiency, and generalization compared to existing methods. In simulations, PLANRL surpasses baseline methods by 10-15% in training success rates at 30k samples and by 30-40% during evaluation phases. In real-world scenarios, it demonstrates a 30-40% higher success rate on simpler tasks compared to baselines and uniquely succeeds in complex, two-stage manipulation tasks. Datasets and supplementary materials can be found at https://raaslab.org/projects/NAVINACT/.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
479,227
1702.08283
Adaptive Learning to Speed-Up Control of Prosthetic Hands: a Few Things Everybody Should Know
A number of studies have proposed to use domain adaptation to reduce the training efforts needed to control an upper-limb prosthesis exploiting pre-trained models from prior subjects. These studies generally reported impressive reductions in the required number of training samples to achieve a certain level of accuracy for intact subjects. We further investigate two popular methods in this field to verify whether this result equally applies to amputees. Our findings show instead that this improvement can largely be attributed to a suboptimal hyperparameter configuration. When hyperparameters are appropriately tuned, the standard approach that does not exploit prior information performs on par with the more complicated transfer learning algorithms. Additionally, earlier studies erroneously assumed that the number of training samples relates proportionally to the effort required from the subject. However, a repetition of a movement is the atomic unit for subjects, and the total number of repetitions should therefore be used as a reliable measure of training effort. Also, when correcting for this mistake, we do not find any performance increase due to the use of prior models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
68,953
2110.15105
A Game-Theoretic Approach for Improving Generalization Ability of TSP Solvers
In this paper, we introduce a two-player zero-sum framework between a trainable \emph{Solver} and a \emph{Data Generator} to improve the generalization ability of deep learning-based solvers for Traveling Salesman Problem (TSP). Grounded in \textsl{Policy Space Response Oracle} (PSRO) methods, our two-player framework outputs a population of best-responding Solvers, over which we can mix and output a combined model that achieves the least exploitability against the Generator, and thereby the most generalizable performance on different TSP tasks. We conduct experiments on a variety of TSP instances with different types and sizes. Results suggest that our Solvers achieve the state-of-the-art performance even on tasks the Solver never meets, whilst the performance of other deep learning-based Solvers drops sharply due to over-fitting. To demonstrate the principle of our framework, we study the learning outcome of the proposed two-player game and demonstrate that the exploitability of the Solver population decreases during training, and it eventually approximates the Nash equilibrium along with the Generator.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,763
1901.01466
Addressing Objects and Their Relations: The Conversational Entity Dialogue Model
Statistical spoken dialogue systems usually rely on a single- or multi-domain dialogue model that is restricted in its capabilities of modelling complex dialogue structures, e.g., relations. In this work, we propose a novel dialogue model that is centred around entities and is able to model relations as well as multiple entities of the same type. We demonstrate, in a prototype implementation, the benefits of relation modelling on the dialogue level and show that a trained policy using these relations outperforms the multi-domain baseline. Furthermore, we show that by modelling the relations on the dialogue level, the system is capable of processing relations present in the user input and even learns to address them in the system response.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
117,980
1705.04818
Distributed interaction between computer virus and patch: A modeling study
The decentralized patch distribution mechanism holds significant promise as an alternative to its centralized counterpart. For the purpose of accurately evaluating the performance of the decentralized patch distribution mechanism and based on the exact SIPS model that accurately captures the average dynamics of the interaction between viruses and patches, a new virus-patch interacting model, which is known as the generic SIPS model, is proposed. This model subsumes the linear SIPS model. The dynamics of the generic SIPS model is studied comprehensively. In particular, a set of criteria for the final extinction or/and long-term survival of viruses or/and patches are presented. Some conditions for the linear SIPS model to accurately capture the average dynamics of the virus-patch interaction are empirically found. As a consequence, the linear SIPS model can be adopted as a standard model for assessing the performance of the distributed patch distribution mechanism, provided the proper conditions are satisfied.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
73,384
2010.08126
Testing the Quantitative Spacetime Hypothesis using Artificial Narrative Comprehension (I) : Bootstrapping Meaning from Episodic Narrative viewed as a Feature Landscape
The problem of extracting important and meaningful parts of a sensory data stream, without prior training, is studied for symbolic sequences, by using textual narrative as a test case. This is part of a larger study concerning the extraction of concepts from spacetime processes, and their knowledge representations within hybrid symbolic-learning `Artificial Intelligence'. Most approaches to text analysis make extensive use of the evolved human sense of language and semantics. In this work, streams are parsed without knowledge of semantics, using only measurable patterns (size and time) within the changing stream of symbols -- as an event `landscape'. This is a form of interferometry. Using lightweight procedures that can be run in just a few seconds on a single CPU, this work studies the validity of the Semantic Spacetime Hypothesis, for the extraction of concepts as process invariants. This `semantic preprocessor' may then act as a front-end for more sophisticated long-term graph-based learning techniques. The results suggest that what we consider important and interesting about sensory experience is not solely based on higher reasoning, but on simple spacetime process cues, and this may be how cognitive processing is bootstrapped in the beginning.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
201,065
2406.15333
GeoLRM: Geometry-Aware Large Reconstruction Model for High-Quality 3D Gaussian Generation
In this work, we introduce the Geometry-Aware Large Reconstruction Model (GeoLRM), an approach which can predict high-quality assets with 512k Gaussians and 21 input images in only 11 GB GPU memory. Previous works neglect the inherent sparsity of 3D structure and do not utilize explicit geometric relationships between 3D and 2D images. This limits these methods to a low-resolution representation and makes it difficult to scale up to the dense views for better quality. GeoLRM tackles these issues by incorporating a novel 3D-aware transformer structure that directly processes 3D points and uses deformable cross-attention mechanisms to effectively integrate image features into 3D representations. We implement this solution through a two-stage pipeline: initially, a lightweight proposal network generates a sparse set of 3D anchor points from the posed image inputs; subsequently, a specialized reconstruction transformer refines the geometry and retrieves textural details. Extensive experimental results demonstrate that GeoLRM significantly outperforms existing models, especially for dense view inputs. We also demonstrate the practical applicability of our model with 3D generation tasks, showcasing its versatility and potential for broader adoption in real-world applications. The project page: https://linshan-bin.github.io/GeoLRM/.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
466,700
1701.00406
Raising Graphs From Randomness to Reveal Information Networks
We analyze the fine-grained connections between the average degree and the power-law degree distribution exponent in growing information networks. Our starting observation is a power-law degree distribution with a decreasing exponent and increasing average degree as a function of the network size. Our experiments are based on three Twitter at-mention networks and three more from the Koblenz Network Collection. We observe that popular network models cannot explain decreasing power-law degree distribution exponent and increasing average degree at the same time. We propose a model that is the combination of exponential growth, and a power-law developing network, in which new "homophily" edges are continuously added to nodes proportional to their current homophily degree. Parameters of the average degree growth and the power-law degree distribution exponent functions depend on the ratio of the network growth exponent parameters. Specifically, we connect the growth of the average degree to the decreasing exponent of the power-law degree distribution. Prior to our work, only one of the two cases was handled. Existing models and even their combinations can only reproduce some of our key new observations in growing information networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
66,272
2002.09437
Calibrating Deep Neural Networks using Focal Loss
Miscalibration - a mismatch between a model's confidence and its correctness - of Deep Neural Networks (DNNs) makes their predictions hard to rely on. Ideally, we want networks to be accurate, calibrated and confident. We show that, as opposed to the standard cross-entropy loss, focal loss [Lin et. al., 2017] allows us to learn models that are already very well calibrated. When combined with temperature scaling, whilst preserving accuracy, it yields state-of-the-art calibrated models. We provide a thorough analysis of the factors causing miscalibration, and use the insights we glean from this to justify the empirically excellent performance of focal loss. To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function. We perform extensive experiments on a variety of computer vision and NLP datasets, and with a wide variety of network architectures, and show that our approach achieves state-of-the-art calibration without compromising on accuracy in almost all cases. Code is available at https://github.com/torrvision/focal_calibration.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
165,058
1804.03596
A Deep Information Sharing Network for Multi-contrast Compressed Sensing MRI Reconstruction
In multi-contrast magnetic resonance imaging (MRI), compressed sensing theory can accelerate imaging by sampling fewer measurements within each contrast. The conventional optimization-based models suffer several limitations: strict assumption of shared sparse support, time-consuming optimization and "shallow" models with difficulties in encoding the rich patterns hiding in massive MRI data. In this paper, we propose the first deep learning model for multi-contrast MRI reconstruction. We achieve information sharing through feature sharing units, which significantly reduces the number of parameters. The feature sharing unit is combined with a data fidelity unit to comprise an inference block. These inference blocks are cascaded with dense connections, which allows for information transmission across different depths of the network efficiently. Our extensive experiments on various multi-contrast MRI datasets show that the proposed model outperforms both state-of-the-art single-contrast and multi-contrast MRI methods in accuracy and efficiency. We show the improved reconstruction quality can bring great benefits for the later medical image analysis stage. Furthermore, the robustness of the proposed model to the non-registration environment shows its potential in real MRI applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,660
2011.06393
Heterogeneous Data-Aware Federated Learning
Federated learning (FL) is an appealing concept to perform distributed training of Neural Networks (NN) while keeping data private. With the industrialization of the FL framework, we identify several problems hampering its successful deployment, such as presence of non-i.i.d. data, disjoint classes, and signal multi-modality across datasets. In this work, we address these problems by proposing a novel method that not only (1) aggregates generic model parameters (e.g., a common set of task-generic NN layers) on the server (as in traditional FL), but also (2) keeps a set of parameters (e.g., a set of task-specific NN layers) specific to each client. We validate our method on the traditionally used public benchmarks (e.g., Femnist) as well as on our proprietary collected dataset (i.e., traffic classification). Results show the benefit of our method, with significant advantage on extreme cases.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
206,233
2405.17754
Differential Voltage Analysis and Patterns in Parallel-Connected Pairs of Imbalanced Cells
Diagnosing imbalances in capacity and resistance within parallel-connected cells in battery packs is critical for battery management and fault detection, but it is challenging given that individual currents flowing into each cell are often unmeasured. This work introduces a novel method useful for identifying imbalances in capacity and resistance within a pair of parallel-connected cells using only voltage and current measurements from the pair. Our method utilizes differential voltage analysis (DVA) when the pair is under constant current discharge and demonstrates that features of the pair's differential voltage curve (dV/dQ), namely its mid-to-high SOC dV/dQ peak's height and skewness, are sensitive to imbalances in capacity and resistance. We analyze and explain how and why these dV/dQ peak shape features change in response to these imbalances, highlighting that the underlying current imbalance dynamics resulting from these imbalances contribute to these changes. Ultimately, we demonstrate that dV/dQ peak shape features can identify the product of capacity imbalance and resistance imbalance, but cannot uniquely identify the imbalances. This work lays the groundwork for identifying imbalances in capacity and resistance in parallel-connected cell groups in battery packs, where commonly only a single current sensor is placed for each parallel cell group.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
458,096
2101.04751
RePBubLik: Reducing the Polarized Bubble Radius with Link Insertions
The topology of the hyperlink graph among pages expressing different opinions may influence the exposure of readers to diverse content. Structural bias may trap a reader in a polarized bubble with no access to other opinions. We model readers' behavior as random walks. A node is in a polarized bubble if the expected length of a random walk from it to a page of different opinion is large. The structural bias of a graph is the sum of the radii of highly-polarized bubbles. We study the problem of decreasing the structural bias through edge insertions. Healing all nodes with high polarized bubble radius is hard to approximate within a logarithmic factor, so we focus on finding the best $k$ edges to insert to maximally reduce the structural bias. We present RePBubLik, an algorithm that leverages a variant of the random walk closeness centrality to select the edges to insert. RePBubLik obtains, under mild conditions, a constant-factor approximation. It reduces the structural bias faster than existing edge-recommendation methods, including some designed to reduce the polarization of a graph.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
215,217
2405.05243
Deep learning-based variational autoencoder for classification of quantum and classical states of light
Advancements in optical quantum technologies have been enabled by the generation, manipulation, and characterization of light, with identification based on its photon statistics. However, characterizing light and its sources through single photon measurements often requires efficient detectors and longer measurement times to obtain high-quality photon statistics. Here we introduce a deep learning-based variational autoencoder (VAE) method for classifying single photon added coherent state (SPACS), single photon added thermal state (SPATS), and mixed states between coherent/SPACS and thermal/SPATS of light. Our semisupervised learning-based VAE efficiently maps the photon statistics features of light to a lower dimension, enabling quasi-instantaneous classification with low average photon counts. The proposed VAE method is robust and maintains classification accuracy in the presence of losses inherent in an experiment, such as finite collection efficiency, non-unity quantum efficiency, finite number of detectors, etc. Additionally, leveraging the transfer learning capabilities of VAE enables successful classification of data of any quality using a single trained model. We envision that such a deep learning methodology will enable better classification of quantum light and light sources even in the presence of poor detection quality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
452,853
1310.7868
Automatic Classification of Variable Stars in Catalogs with missing data
We present an automatic classification method for astronomical catalogs with missing data. We use Bayesian networks, probabilistic graphical models that allow us to perform inference to predict missing values given observed data and dependency relationships between variables. To learn a Bayesian network from incomplete data, we use an iterative algorithm that utilises sampling methods and expectation maximization to estimate the distributions and probabilistic dependencies of variables from data with missing values. To test our model we use three catalogs with missing data (SAGE, 2MASS and UBVI) and one complete catalog (MACHO). We examine how classification accuracy changes when information from missing data catalogs is included, how our method compares to traditional missing data approaches, and at what computational cost. Integrating these catalogs with missing data, we find that classification of variable objects improves by a few percent overall and by 15% for quasar detection, while keeping the computational cost the same.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
28,066
2501.15634
Be Intentional About Fairness!: Fairness, Size, and Multiplicity in the Rashomon Set
When selecting a model from a set of equally performant models, how much unfairness can you really reduce? Is it important to be intentional about fairness when choosing among this set, or is arbitrarily choosing among the set of ''good'' models good enough? Recent work has highlighted that the phenomenon of model multiplicity-where multiple models with nearly identical predictive accuracy exist for the same task-has both positive and negative implications for fairness, from strengthening the enforcement of civil rights law in AI systems to showcasing arbitrariness in AI decision-making. Despite the enormous implications of model multiplicity, there is little work that explores the properties of sets of equally accurate models, or Rashomon sets, in general. In this paper, we present five main theoretical and methodological contributions which help us to understand the relatively unexplored properties of the Rashomon set, in particular with regards to fairness. Our contributions include methods for efficiently sampling models from this set and techniques for identifying the fairest models according to key fairness metrics such as statistical parity. We also derive the probability that an individual's prediction will be flipped within the Rashomon set, as well as expressions for the set's size and the distribution of error tolerance used across models. These results lead to policy-relevant takeaways, such as the importance of intentionally looking for fair models within the Rashomon set, and understanding which individuals or groups may be more susceptible to arbitrary decisions.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
527,643
2405.15835
Analyzing the Impact of Climate Change With Major Emphasis on Pollution: A Comparative Study of ML and Statistical Models in Time Series Data
Industrial operations have grown exponentially over the last century, driving advancements in energy utilization through vehicles and machinery. This growth has significant environmental implications, necessitating the use of sophisticated technology to monitor and analyze climate data. The surge in industrial activities presents a complex challenge in forecasting its diverse environmental impacts, which vary greatly across different regions. We aim to understand these dynamics more deeply in order to predict and mitigate the environmental impacts of industrial activities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
457,130
2210.08397
Taxonomy of A Decision Support System for Adaptive Experimental Design in Field Robotics
Experimental design in field robotics is an adaptive human-in-the-loop decision-making process in which an experimenter learns about system performance and limitations through interactions with a robot in the form of constructed experiments. This can be challenging because of system complexity, the need to operate in unstructured environments, and the competing objectives of maximizing information gain while simultaneously minimizing experimental costs. Based on the successes in other domains, we propose the use of a Decision Support System (DSS) to amplify the human's decision-making abilities, overcome their inherent shortcomings, and enable principled decision-making in field experiments. In this work, we propose common terminology and a six-stage taxonomy of DSSs specifically for adaptive experimental design of more informative tests and reduced experimental costs. We construct and present our taxonomy using examples and trends from DSS literature, including works involving artificial intelligence and Intelligent DSSs. Finally, we identify critical technical gaps and opportunities for future research to direct the scientific community in the pursuit of next-generation DSSs for experimental design.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
324,128
2411.02005
Towards a valid bibliometric measure of epistemic breadth of researchers
The concept of epistemic breadth of the work of a researcher refers to the scope of their knowledge claims, as reflected in published research reports. Studies of epistemic breadth have been hampered by the lack of a validated measure of the concept. Here we introduce a knowledge space approach to the measurement of epistemic breadth and propose to use the semantic similarity network of an author's publication record to operationalize a measure. In this approach, each paper has its own location in a common abstract vector space based on its content. Proximity in knowledge space corresponds to thematic similarity of publications. Candidate measures of epistemic breadth derived from aggregate similarity values of researchers' bodies of work are tested against external validation data of researchers known to have made a major change in research topic and against self-citation data. We find that some candidate measures co-vary well with known epistemic breadth of researchers in the empirical data and can serve as valid indicators of the concept.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
505,315
1703.04590
Learning Background-Aware Correlation Filters for Visual Tracking
Correlation Filters (CFs) have recently demonstrated excellent performance in terms of rapidly tracking objects under challenging photometric and geometric variations. The strength of the approach comes from its ability to efficiently learn - "on the fly" - how the object is changing over time. A fundamental drawback to CFs, however, is that the background of the object is not modelled over time, which can result in suboptimal results. In this paper we propose a Background-Aware CF that can model how both the foreground and background of the object vary over time. Our approach, like conventional CFs, is extremely computationally efficient - and extensive experiments over multiple tracking benchmarks demonstrate the superior accuracy and real-time performance of our method compared to the state-of-the-art trackers, including those based on a deep learning paradigm.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,922
2011.07435
Functorial Manifold Learning
We adapt previous research on category theory and topological unsupervised learning to develop a functorial perspective on manifold learning, also known as nonlinear dimensionality reduction. We first characterize manifold learning algorithms as functors that map pseudometric spaces to optimization objectives and that factor through hierarchical clustering functors. We then use this characterization to prove refinement bounds on manifold learning loss functions and construct a hierarchy of manifold learning algorithms based on their equivariants. We express several popular manifold learning algorithms as functors at different levels of this hierarchy, including Metric Multidimensional Scaling, IsoMap, and UMAP. Next, we use interleaving distance to study the stability of a broad class of manifold learning algorithms. We present bounds on how closely the embeddings these algorithms produce from noisy data approximate the embeddings they would learn from noiseless data. Finally, we use our framework to derive a set of novel manifold learning algorithms, which we experimentally demonstrate are competitive with the state of the art.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
206,554
1908.06379
Concurrent Parsing of Constituency and Dependency
Constituent and dependency representations of syntactic structure share many linguistic and computational characteristics. This paper thus makes the first attempt to introduce a new model capable of parsing constituents and dependencies at the same time, so that either parser can enhance the other. In particular, we evaluate the effect of different shared network components and empirically verify that dependency parsing may benefit much more from constituent parsing structure. The proposed parser achieves new state-of-the-art performance for both parsing tasks, constituent and dependency, on the PTB and CTB benchmarks.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
142,002
2207.01698
An adaptive music generation architecture for games based on the deep learning Transformer model
This paper presents an architecture for generating music for video games based on the Transformer deep learning model. Our motivation is to be able to customize the generation according to the taste of the player, who can select a corpus of training examples, corresponding to his preferred musical style. The system generates various musical layers, following the standard layering strategy currently used by composers designing video game music. To adapt the music generated to the game play and to the player(s) situation, we are using an arousal-valence model of emotions, in order to control the selection of musical layers. We discuss current limitations and prospects for the future, such as collaborative and interactive control of the musical components.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
306,258
2405.19749
Generating Query Recommendations via LLMs
Query recommendation systems are ubiquitous in modern search engines, assisting users in producing effective queries to meet their information needs. However, these systems require a large amount of data to produce good recommendations, such as a large collection of documents to index and query logs. In particular, query logs and user data are not available in cold start scenarios. Query logs are expensive to collect and maintain and require complex and time-consuming cascading pipelines for creating, combining, and ranking recommendations. To address these issues, we frame the query recommendation problem as a generative task, proposing a novel approach called Generative Query Recommendation (GQR). GQR uses an LLM as its foundation and does not require training or fine-tuning to tackle the query recommendation problem. We design a prompt that enables the LLM to understand the specific recommendation task, even using a single example. We then improve our system by proposing a version that exploits query logs, called Retriever-Augmented GQR (RA-GQR). RA-GQR dynamically composes its prompt by retrieving similar queries from query logs. GQR reuses a pre-existing neural architecture, resulting in a simpler and more ready-to-market approach, even in a cold start scenario. Our proposed GQR obtains state-of-the-art performance in terms of NDCG@10 and clarity score against two commercial search engines and the previous state-of-the-art approach on the Robust04 and ClueWeb09B collections, improving the NDCG@10 performance on average by up to ~4% on Robust04 and ClueWeb09B w.r.t. the previous best competitor. RA-GQR further improves the NDCG@10, obtaining increases of ~11% and ~6% on Robust04 and ClueWeb09B w.r.t. the best competitor. Furthermore, our system obtained ~59% of user preferences in a blind user study, proving that our method produces the most engaging queries.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
459,046
1303.4986
Combinatorial Analysis of Multiple Networks
The study of complex networks has been historically based on simple graph data models representing relationships between individuals. However, often reality cannot be accurately captured by a flat graph model. This has led to the development of multi-layer networks. These models have the potential of becoming the reference tools in network data analysis, but require the parallel development of specific analysis methods explicitly exploiting the information hidden in-between the layers and the availability of a critical mass of reference data to experiment with the tools and investigate the real-world organization of these complex systems. In this work we introduce a real-world layered network combining different kinds of online and offline relationships, and present an innovative methodology and related analysis tools suggesting the existence of hidden motifs traversing and correlating different representation layers. We also introduce a notion of betweenness centrality for multiple networks. While some preliminary experimental evidence is reported, our hypotheses are still largely unverified, and in our opinion this calls for the availability of new analysis methods but also new reference multi-layer social network data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
23,047
1807.08485
Learning 3D Shapes as Multi-Layered Height-maps using 2D Convolutional Networks
We present a novel global representation of 3D shapes, suitable for the application of 2D CNNs. We represent 3D shapes as multi-layered height-maps (MLH) where at each grid location, we store multiple instances of height maps, thereby representing 3D shape detail that is hidden behind several layers of occlusion. We provide a novel view merging method for combining view-dependent information (e.g., MLH descriptors) from multiple views. Because it can use 2D CNNs, our method is highly memory efficient in terms of input resolution compared to voxel-based input. Together with MLH descriptors and our multi-view merging, we achieve the state-of-the-art result in classification on the ModelNet dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
103,554
1908.09913
Subdivision of point-normal pairs with application to smoothing feasible robot path
In a previous paper [11] we introduced a weighted binary average of two 2D point-normal pairs, termed circle average, and investigated subdivision schemes based on it. These schemes refine point-normal pairs in 2D, and converge to limit curves and limit normals. Such a scheme has the disadvantage that the limit normals are not the normals of the limit curve. In this paper we solve this problem by proposing a new averaging method, and obtaining a new family of algorithms based on it. We demonstrate their new editing capabilities and apply this subdivision technique to smooth a precomputed feasible polygonal point robot path.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
142,964
2409.04320
Faster Sampling from Log-Concave Densities over Polytopes via Efficient Linear Solvers
We consider the problem of sampling from a log-concave distribution $\pi(\theta) \propto e^{-f(\theta)}$ constrained to a polytope $K:=\{\theta \in \mathbb{R}^d: A\theta \leq b\}$, where $A\in \mathbb{R}^{m\times d}$ and $b \in \mathbb{R}^m$. The fastest-known algorithm \cite{mangoubi2022faster} for the setting when $f$ is $O(1)$-Lipschitz or $O(1)$-smooth runs in roughly $O(md \times md^{\omega -1})$ arithmetic operations, where the $md^{\omega -1}$ term arises because each Markov chain step requires computing a matrix inversion and determinant (here $\omega \approx 2.37$ is the matrix multiplication constant). We present a nearly-optimal implementation of this Markov chain with per-step complexity which is roughly the number of non-zero entries of $A$ while the number of Markov chain steps remains the same. The key technical ingredients are 1) to show that the matrices that arise in this Dikin walk change slowly, 2) to deploy efficient linear solvers that can leverage this slow change to speed up matrix inversion by using information computed in previous steps, and 3) to speed up the computation of the determinantal term in the Metropolis filter step via a randomized Taylor series-based estimator.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
486,354
2403.01384
On the Compressibility of Quantized Large Language Models
Deploying Large Language Models (LLMs) on edge or mobile devices offers significant benefits, such as enhanced data privacy and real-time processing capabilities. However, it also faces critical challenges due to the substantial memory requirement of LLMs. Quantization is an effective way of reducing the model size while maintaining good performance. However, even after quantization, LLMs may still be too big to fit entirely into the limited memory of edge or mobile devices and have to be partially loaded from the storage to complete the inference. In this case, the I/O latency of model loading becomes the bottleneck of the LLM inference latency. In this work, we take a preliminary step in studying the application of data compression techniques to reduce data movement and thus speed up the inference of quantized LLMs on memory-constrained devices. In particular, we discuss the compressibility of quantized LLMs, the trade-off between the compressibility and performance of quantized LLMs, and opportunities to optimize both of them jointly.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
434,389
1911.05439
Statistical Deformation Reconstruction Using Multi-organ Shape Features for Pancreatic Cancer Localization
Respiratory motion and the associated deformations of abdominal organs and tumors are essential information in clinical applications. However, inter- and intra-patient multi-organ deformations are complex and have not been statistically formulated, whereas single organ deformations have been widely studied. In this paper, we introduce a multi-organ deformation library and its application to deformation reconstruction based on the shape features of multiple abdominal organs. Statistical multi-organ motion/deformation models of the stomach, liver, left and right kidneys, and duodenum were generated by shape matching their region labels defined on four-dimensional computed tomography images. A total of 250 volumes were measured from 25 pancreatic cancer patients. This paper also proposes a per-region-based deformation learning using the reproducing kernel to predict the displacement of pancreatic cancer for adaptive radiotherapy. The experimental results show that the proposed concept estimates deformations better than general per-patient-based learning models and achieves a clinically acceptable estimation error with a mean distance of 1.2 $\pm$ 0.7 mm and a Hausdorff distance of 4.2 $\pm$ 2.3 mm throughout the respiratory motion.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
153,258
2210.12825
Patient-Specific Heart Model Towards Atrial Fibrillation
Atrial fibrillation is a heart rhythm disorder that affects tens of millions of people worldwide. The most effective treatment is catheter ablation. This involves irreversible heating of abnormal cardiac tissue facilitated by electroanatomical mapping. However, it is difficult to consistently identify the triggers and sources that may initiate or perpetuate atrial fibrillation due to its chaotic behavior. We developed a patient-specific computational heart model that can accurately reproduce the activation patterns to help in localizing these triggers and sources. Our model has high spatial resolution, with whole-atrium temporal synchronous activity, and has patient-specific accurate electrophysiological activation patterns. Data from a total of 15 patients were processed: 8 in sinus rhythm, 6 in atrial flutter and 1 in atrial tachycardia. For resolution, the average simulation geometry voxel is a cube of 2.47 mm length. For synchrony, the model takes in about 1,500 local electrogram recordings, optimally fits parameters to the individual's atrium geometry and then generates whole-atrium activation patterns. For accuracy, the average local activation time error is 5.47 ms for sinus rhythm, 10.97 ms for flutter and tachycardia; and the average correlation is 0.95 for sinus rhythm, 0.81 for flutter and tachycardia. This promising result demonstrates that our model is an effective building block in capturing more complex rhythms such as atrial fibrillation to guide physicians for effective ablation therapy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
325,919
2312.00352
Quantum Kernel t-Distributed Stochastic Neighbor Embedding
Data visualization is important in understanding the characteristics of data that are difficult to see directly. It is used to visualize loss landscapes and optimization trajectories to analyze optimization performance. Popular optimization analysis is performed by visualizing a loss landscape around the reached local or global minimum using principal component analysis. However, this visualization depends on the variational parameters of a quantum circuit rather than quantum states, which makes it difficult to understand the mechanism of the optimization process through the properties of quantum states. Here, we propose a quantum data visualization method using quantum kernels, which enables us to offer fast and highly accurate visualization of quantum states. In our numerical experiments, we visualize the hand-written digits dataset and apply the $k$-nearest neighbor algorithm to the low-dimensional data to quantitatively evaluate our proposed method compared with a classical kernel method. As a result, our proposed method achieves comparable accuracy to the state-of-the-art classical kernel method, meaning that the proposed visualization method based on quantum machine learning does not degrade the separability of the input higher-dimensional data. Furthermore, we visualize the optimization trajectories of finding the ground states of the transverse field Ising model and successfully find the trajectory characteristics. Since quantum states are higher-dimensional objects that can only be seen via observables, our visualization method, which inherits the similarity of quantum data, would be useful in understanding the behavior of quantum circuits and algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
412,018
2109.03214
Robust Predictable Control
Many of the challenges facing today's reinforcement learning (RL) algorithms, such as robustness, generalization, transfer, and computational efficiency are closely related to compression. Prior work has convincingly argued why minimizing information is useful in the supervised learning setting, but standard RL algorithms lack an explicit mechanism for compression. The RL setting is unique because (1) its sequential nature allows an agent to use past information to avoid looking at future observations and (2) the agent can optimize its behavior to prefer states where decision making requires few bits. We take advantage of these properties to propose a method (RPC) for learning simple policies. This method brings together ideas from information bottlenecks, model-based RL, and bits-back coding into a simple and theoretically-justified algorithm. Our method jointly optimizes a latent-space model and policy to be self-consistent, such that the policy avoids states where the model is inaccurate. We demonstrate that our method achieves much tighter compression than prior methods, achieving up to 5x higher reward than a standard information bottleneck. We also demonstrate that our method learns policies that are more robust and generalize better to new tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
253,991
1302.4283
On the Fly Self-Organized Base Station Placement
In this paper, we address the deployment of base stations (BSs) in a one-dimensional network in which the users are randomly distributed. In order to take the users' distribution into account when optimally placing the BSs, we optimize the uplink MMSE sum rate. Moreover, given a massive number of antennas at the BSs, we propose a novel random matrix theory-based technique so as to obtain tight approximations for the MMSE sum rate in the uplink. We investigate a cooperative (CP) scenario where the BSs jointly decode the messages and a non-cooperative (NCP) scheme in which the BS can only decode its own users. Our results show that the CP strategy considerably outperforms the NCP case. Moreover, we show that there exists a trade-off in the BS deployment regarding the position of each BS. Thus, through location games we can optimize the position of each BS in order to maximize the system performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
22,136
2311.16466
The Adoption and Efficacy of Large Language Models: Evidence From Consumer Complaints in the Financial Industry
Large Language Models (LLMs) are reshaping consumer decision-making, particularly in communication with firms, yet our understanding of their impact remains limited. This research explores the effect of LLMs on consumer complaints submitted to the Consumer Financial Protection Bureau from 2015 to 2024, documenting the adoption of LLMs for drafting complaints and evaluating the likelihood of obtaining relief from financial firms. We analyzed over 1 million complaints and identified a significant increase in LLM usage following the release of ChatGPT. We find that LLM usage is associated with an increased likelihood of obtaining relief from financial firms. To investigate this relationship, we employ an instrumental variable approach to mitigate endogeneity concerns around LLM adoption. Although instrumental variables suggest a potential causal link, they cannot fully capture all unobserved heterogeneity. To further establish this causal relationship, we conducted controlled experiments, which support that LLMs can enhance the clarity and persuasiveness of consumer narratives, thereby increasing the likelihood of obtaining relief. Our findings suggest that facilitating access to LLMs can help firms better understand consumer concerns and level the playing field among consumers. This underscores the importance of policies promoting technological accessibility, enabling all consumers to effectively voice their concerns.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
410,897
1206.4835
Complex Network Analysis in Cricket : Community structure, player's role and performance index
This paper describes the applications of network methods for understanding interaction among members of sport teams. We analyze the interaction of batsmen in International Cricket matches. We generate batting partnership networks (BPN) for different teams and determine the exact values of the clustering coefficient, average degree, and average shortest path length of the networks, and compare them with the Erdős-Rényi model. We observe that the networks display small-world behavior and are disassortative in nature. We find that the most connected batsman is not necessarily the most central, and the most central players are not necessarily the ones with high batting averages. We study the community structure of the BPNs and identify each player's role based on inter-community and intra-community links. We observe that {\it Sir DG Bradman}, regarded as the best batsman in Cricket history, does not occupy the central position in the network $-$ the so-called connector hub. We extend our analysis to quantify the performance, relative importance, and effect of removing a player from the team, based on different centrality scores.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
16,748
cs/0605093
The Capacity of the Single Source Multiple Relay Single Destination Mesh Network
In this paper, we derive the capacity of a special class of mesh networks. A mesh network is defined as a heterogeneous wireless network in which the transmission among power limited nodes is assisted by powerful relays, which use the same wireless medium. We find the capacity of the mesh network when there is one source, one destination, and multiple relays. We call this channel the single source multiple relay single destination (SSMRSD) mesh network. Our approach is as follows. We first look at an upper bound on the information theoretic capacity of these networks in the Gaussian setting. We then show that the bound is achievable asymptotically using the compress-forward strategy for the multiple relay channel. Theoretically, the results indicate the value of cooperation and the utility of carefully deployed relays in wireless ad-hoc and sensor networks. The capacity characterization quantifies how the relays can be used to either conserve node energy or to increase transmission rate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,466
2309.02052
Towards Individual and Multistakeholder Fairness in Tourism Recommender Systems
This position paper summarizes our published review on individual and multistakeholder fairness in Tourism Recommender Systems (TRS). Recently, there has been growing attention to fairness considerations in recommender systems (RS). It has been acknowledged in research that fairness in RS is often closely tied to the presence of multiple stakeholders, such as end users, item providers, and platforms, as it raises concerns for the fair treatment of all parties involved. Hence, fairness in RS is a multi-faceted concept that requires consideration of the perspectives and needs of the different stakeholders to ensure fair outcomes for them. However, there may often be instances where achieving the goals of one stakeholder could conflict with those of another, resulting in trade-offs. In this paper, we emphasized addressing the unique challenges of ensuring fairness in RS within the tourism domain. We aimed to discuss potential strategies for mitigating the aforementioned challenges and examine the applicability of solutions from other domains to tackle fairness issues in tourism. By exploring cross-domain approaches and strategies for incorporating S-Fairness, we can uncover valuable insights and determine how these solutions can be adapted and implemented effectively in the context of tourism to enhance fairness in RS.
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
389,912
2201.11192
ReforesTree: A Dataset for Estimating Tropical Forest Carbon Stock with Deep Learning and Aerial Imagery
Forest biomass is a key influence for future climate, and the world urgently needs highly scalable financing schemes, such as carbon offsetting certifications, to protect and restore forests. Current manual forest carbon stock inventory methods of measuring single trees by hand are time, labour, and cost-intensive and have been shown to be subjective. They can lead to substantial overestimation of the carbon stock and ultimately distrust in forest financing. The potential for impact and scale of leveraging advancements in machine learning and remote sensing technologies is promising but needs to be of high quality in order to replace the current forest stock protocols for certifications. In this paper, we present ReforesTree, a benchmark dataset of forest carbon stock in six agro-forestry carbon offsetting sites in Ecuador. Furthermore, we show that a deep learning-based end-to-end model using individual tree detection from low cost RGB-only drone imagery is accurately estimating forest carbon stock within official carbon offsetting certification standards. Additionally, our baseline CNN model outperforms state-of-the-art satellite-based forest biomass and carbon stock estimates for this type of small-scale, tropical agro-forestry sites. We present this dataset to encourage machine learning research in this area to increase accountability and transparency of monitoring, verification and reporting (MVR) in carbon offsetting projects, as well as scaling global reforestation financing through accurate remote sensing.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
277,213
1904.11404
Compressive Multidimensional Harmonic Retrieval with Prior Knowledge
This paper concerns the problem of estimating multidimensional (MD) frequencies using prior knowledge of the signal spectral sparsity from partial time samples. In many applications, such as radar, wireless communications, and super-resolution imaging, some structural information about the signal spectrum might be known beforehand. Suppose that the frequencies lie in given intervals, the goal is to improve the frequency estimation performance by using the prior information. We study the MD Vandermonde decomposition of block Toeplitz matrices in which the frequencies are restricted to given intervals. We then propose to solve the frequency-selective atomic norm minimization by converting them into semidefinite program based on the MD Vandermonde decomposition. Numerical simulation results are presented to illustrate the good performance of the proposed method.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
128,852
2404.19031
Machine Unlearning for Document Classification
Document understanding models have recently demonstrated remarkable performance by leveraging extensive collections of user documents. However, since documents often contain large amounts of personal data, their usage can pose a threat to user privacy and weaken the bonds of trust between humans and AI services. In response to these concerns, legislation advocating ``the right to be forgotten" has recently been proposed, allowing users to request the removal of private information from computer systems and neural network models. A novel approach, known as machine unlearning, has emerged to make AI models forget about a particular class of data. In our research, we explore machine unlearning for document classification problems, representing, to the best of our knowledge, the first investigation into this area. Specifically, we consider a realistic scenario where a remote server houses a well-trained model and possesses only a small portion of training data. This setup is designed for efficient forgetting manipulation. This work represents a pioneering step towards the development of machine unlearning methods aimed at addressing privacy concerns in document analysis applications. Our code is publicly available at \url{https://github.com/leitro/MachineUnlearning-DocClassification}.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
450,477
0910.5682
Word Sense Disambiguation Using English-Spanish Aligned Phrases over Comparable Corpora
In this paper we describe a WSD experiment based on bilingual English-Spanish comparable corpora in which individual noun phrases have been identified and aligned with their respective counterparts in the other language. The evaluation of the experiment has been carried out against SemCor. We show that, with the alignment algorithm employed, potential precision is high (74.3%); however, the coverage of the method is low (2.7%), because alignments are far less frequent than we expected. Contrary to our intuition, precision does not rise consistently with the number of alignments. The coverage is low due to several factors: there are important domain differences, and English and Spanish are too closely related for this approach to discriminate efficiently between senses, rendering it unsuitable for WSD, although the method may prove more productive in machine translation.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
4,824
2206.04091
Uplifting Bandits
We introduce a multi-armed bandit model where the reward is a sum of multiple random variables, and each action only alters the distributions of some of them. After each action, the agent observes the realizations of all the variables. This model is motivated by marketing campaigns and recommender systems, where the variables represent outcomes on individual customers, such as clicks. We propose UCB-style algorithms that estimate the uplifts of the actions over a baseline. We study multiple variants of the problem, including when the baseline and affected variables are unknown, and prove sublinear regret bounds for all of these. We also provide lower bounds that justify the necessity of our modeling assumptions. Experiments on synthetic and real-world datasets show the benefit of methods that estimate the uplifts over policies that do not use this structure.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
301,503