Dataset schema:

  id                  string (9-16 chars)
  title               string (4-278 chars)
  abstract            string (3-4.08k chars)
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
  cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool (2 classes each), one flag per category label
  __index_level_0__   int64 (0-541k)
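Given this layout, a flattened record can be mapped back to a multi-label example by pairing its 18 boolean flags with the category columns in schema order. A minimal sketch (the helper name is illustrative, not part of the dataset):

```python
# Order of the 18 boolean category columns, as listed in the schema above.
CATEGORY_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def flags_to_categories(flags):
    """Map a row's 18 true/false flags to the category names set to true."""
    if len(flags) != len(CATEGORY_COLUMNS):
        raise ValueError("expected one flag per category column")
    return [name for name, flag in zip(CATEGORY_COLUMNS, flags) if flag]
```

For example, a row whose fifth and ninth flags are true (and the rest false) maps to `["cs.AI", "cs.CL"]`.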
2109.03383
DeepZensols: Deep Natural Language Processing Framework
Reproducing results in publications by distributing publicly available source code is becoming ever more popular. Given the difficulty of reproducing machine learning (ML) experiments, there have been significant efforts in reducing the variance of these results. As in any science, the ability to consistently reproduce results effectively strengthens the underlying hypothesis of the work, and thus should be regarded as being as important as the novel aspect of the research itself. The contribution of this work is a framework that is able to reproduce consistent results and provides a means of easily creating, training, and evaluating natural language processing (NLP) deep learning (DL) models.
categories: cs.AI, cs.CL
__index_level_0__: 254,043
2301.09522
Optimising Event-Driven Spiking Neural Network with Regularisation and Cutoff
Spiking neural networks (SNNs), as the next generation of artificial neural networks (ANNs), offer a closer mimicry of natural neural networks and hold promise for significant improvements in computational efficiency. However, current SNNs are trained to infer over a fixed duration, overlooking the potential of dynamic inference. In this paper, we strengthen the marriage between SNNs and event-driven processing with a proposal to consider a cutoff in SNNs, which can terminate an SNN at any time during inference to achieve efficient inference. Two novel optimisation techniques are presented to achieve inference-efficient SNNs: a Top-K cutoff and a regularisation. The proposed regularisation influences the training process, optimising the SNN for the cutoff, while the Top-K cutoff technique optimises the inference phase. We conduct an extensive set of experiments on multiple benchmark frame-based datasets, such as CIFAR10/100 and Tiny-ImageNet, and event-based datasets, including CIFAR10-DVS, N-Caltech101 and DVS128 Gesture. The experimental results demonstrate the effectiveness of our techniques in both ANN-to-SNN conversion and direct training, enabling SNNs to require 1.76 to 2.76x fewer timesteps for CIFAR-10, while achieving 1.64 to 1.95x fewer timesteps across all event-based datasets, with near-zero accuracy loss. These findings affirm the compatibility and potential benefits of our techniques in enhancing accuracy and reducing inference latency when integrated with existing methods. Code available: https://github.com/Dengyu-Wu/SNNCutoff
categories: cs.CV
__index_level_0__: 341,524
2403.06417
Enhanced Sparsification via Stimulative Training
Sparsification-based pruning has been an important category in model compression. Existing methods commonly set sparsity-inducing penalty terms to suppress the importance of dropped weights, which is regarded as the suppressed sparsification paradigm. However, this paradigm inactivates the dropped parts of networks, causing capacity damage before pruning and thereby leading to performance degradation. To alleviate this issue, we first study and reveal the relative sparsity effect in emerging stimulative training and then propose a structured pruning framework, named STP, based on an enhanced sparsification paradigm which maintains the magnitude of dropped weights and enhances the expressivity of kept weights by self-distillation. Besides, to find an optimal architecture for the pruned network, we propose a multi-dimension architecture space and a knowledge distillation-guided exploration strategy. To reduce the huge capacity gap of distillation, we propose a subnet mutating expansion technique. Extensive experiments on various benchmarks indicate the effectiveness of STP. Specifically, without fine-tuning, our method consistently achieves superior performance at different budgets, especially under extremely aggressive pruning scenarios, e.g., retaining 95.11% of the Top-1 accuracy (72.43% vs. 76.15%) while reducing 85% of FLOPs for ResNet-50 on ImageNet. Code will be released soon.
categories: cs.CV
__index_level_0__: 436,442
2501.17377
ASAP: Learning Generalizable Online Bin Packing via Adaptive Selection After Pruning
Recently, deep reinforcement learning (DRL) has achieved promising results in solving online 3D Bin Packing Problems (3D-BPP). However, these DRL-based policies may perform poorly on new instances due to distribution shift. Besides generalization, we also consider adaptation, completely overlooked by previous work, which aims at rapidly finetuning these policies to a new test distribution. To tackle both generalization and adaptation issues, we propose Adaptive Selection After Pruning (ASAP), which decomposes a solver's decision-making into two policies, one for pruning and one for selection. The role of the pruning policy is to remove inherently bad actions, which allows the selection policy to choose among the remaining most valuable actions. To learn these policies, we propose a training scheme based on a meta-learning phase of both policies followed by a finetuning phase of the sole selection policy to rapidly adapt it to a test distribution. Our experiments demonstrate that ASAP exhibits excellent generalization and adaptation capabilities on in-distribution and out-of-distribution instances under both discrete and continuous setups.
categories: cs.AI, cs.LG
__index_level_0__: 528,313
1909.03372
ShapeBots: Shape-changing Swarm Robots
We introduce shape-changing swarm robots. A swarm of self-transformable robots can both individually and collectively change their configuration to display information, actuate objects, act as tangible controllers, visualize data, and provide physical affordances. ShapeBots is a concept prototype of shape-changing swarm robots. Each robot can change its shape by leveraging small linear actuators that are thin (2.5 cm) and highly extendable (up to 20 cm) in both horizontal and vertical directions. The modular design of each actuator enables various shapes and geometries of self-transformation. We illustrate potential application scenarios and discuss how this type of interface opens up possibilities for the future of ubiquitous and distributed shape-changing interfaces.
categories: cs.HC, cs.RO
__index_level_0__: 144,454
2011.12388
Multiple Transmit Power Levels based NOMA for Massive Machine-type Communications
This paper proposes a tractable solution for integrating non-orthogonal multiple access (NOMA) into massive machine-type communications (mMTC) to increase the uplink connectivity. Multiple transmit power levels are provided at the user end to enable open-loop power control, which is absent from the traditional uplink NOMA with fixed transmit power. The basics of this solution are first presented to analytically show the inherent performance gain in terms of the average arrival rate (AAR). Then, a practical framework based on a novel power map is proposed to associate a set of well-designed transmit power levels with each geographical region, handling the absence of instantaneous channel state information. Based on this framework, semi-grant-free (semi-GF) transmission with two practical protocols is introduced to enhance the connectivity, which has a higher AAR than both the conventional grant-based and GF transmissions. When the number of active GF devices in mMTC far exceeds the available resource blocks, the corresponding AAR tends to zero. To solve this problem, user barring techniques are employed in the semi-GF transmission to stabilize the traffic flow and thus increase the AAR. Lastly, promising research directions are discussed for improving the proposed networks.
categories: cs.IT
__index_level_0__: 208,137
2410.04684
Combining Structural and Unstructured Data: A Topic-based Finite Mixture Model for Insurance Claim Prediction
Modeling insurance claim amounts and classifying claims into different risk levels are critical yet challenging tasks. Traditional predictive models for insurance claims often overlook the valuable information embedded in claim descriptions. This paper introduces a novel approach by developing a joint mixture model that integrates both claim descriptions and claim amounts. Our method establishes a probabilistic link between textual descriptions and loss amounts, enhancing the accuracy of claims clustering and prediction. In our proposed model, the latent topic/component indicator serves as a proxy for both the thematic content of the claim description and the component of loss distributions. Specifically, conditioned on the topic/component indicator, the claim description follows a multinomial distribution, while the claim amount follows a component loss distribution. We propose two methods for model calibration: an EM algorithm for maximum a posteriori estimates, and an MH-within-Gibbs sampler algorithm for the posterior distribution. The empirical study demonstrates that the proposed methods work effectively, providing interpretable claims clustering and prediction.
categories: cs.LG
__index_level_0__: 495,395
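The generative story in the abstract above (a latent topic/component indicator, a multinomial over words given the topic, and a component-specific loss distribution for the amount) can be sketched as a toy sampler. Every numeric parameter below is a made-up illustration, not a value from the paper, and the lognormal loss is just one plausible choice of component distribution:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy parameters -- illustrative only, not values from the paper.
pi = np.array([0.6, 0.4])                 # topic/component weights
phi = np.array([[0.7, 0.1, 0.1, 0.1],    # per-topic word probabilities
                [0.1, 0.1, 0.4, 0.4]])   # (4-word toy vocabulary)
mu = np.array([6.0, 8.5])                 # lognormal loss parameters,
sig = np.array([0.5, 0.7])                # one pair per component

def sample_claim(n_words=20):
    """Draw one (topic, description, amount) triple from the joint mixture."""
    z = rng.choice(2, p=pi)                    # latent topic/component indicator
    words = rng.multinomial(n_words, phi[z])   # claim description: multinomial
    amount = rng.lognormal(mu[z], sig[z])      # claim amount: component loss dist.
    return z, words, amount
```

Because the same latent `z` drives both the word counts and the loss component, clustering on descriptions carries information about the loss distribution, which is the probabilistic link the abstract describes.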
0905.0079
Multiple-Bases Belief-Propagation Decoding of High-Density Cyclic Codes
We introduce a new method for decoding short and moderate length linear block codes with dense parity-check matrix representations of cyclic form, termed multiple-bases belief-propagation (MBBP). The proposed iterative scheme makes use of the fact that a code has many structurally diverse parity-check matrices, capable of detecting different error patterns. We show that this inherent code property leads to decoding algorithms with significantly better performance when compared to standard BP decoding. Furthermore, we describe how to choose sets of parity-check matrices of cyclic form amenable for multiple-bases decoding, based on analytical studies performed for the binary erasure channel. For several cyclic and extended cyclic codes, the MBBP decoding performance can be shown to closely follow that of maximum-likelihood decoders.
categories: cs.IT
__index_level_0__: 3,626
2208.03313
A Non-Asymptotic Framework for Approximate Message Passing in Spiked Models
Approximate message passing (AMP) emerges as an effective iterative paradigm for solving high-dimensional statistical problems. However, prior AMP theory -- which focused mostly on high-dimensional asymptotics -- fell short of predicting the AMP dynamics when the number of iterations surpasses $o\big(\frac{\log n}{\log\log n}\big)$ (with $n$ the problem dimension). To address this inadequacy, this paper develops a non-asymptotic framework for understanding AMP in spiked matrix estimation. Built upon a new decomposition of AMP updates and controllable residual terms, we lay out an analysis recipe to characterize the finite-sample behavior of AMP in the presence of an independent initialization, which is further generalized to allow for spectral initialization. As two concrete consequences of the proposed analysis recipe: (i) when solving $\mathbb{Z}_2$ synchronization, we predict the behavior of spectrally initialized AMP for up to $O\big(\frac{n}{\mathrm{poly}\log n}\big)$ iterations, showing that the algorithm succeeds without the need of a subsequent refinement stage (as conjectured recently by \citet{celentano2021local}); (ii) we characterize the non-asymptotic behavior of AMP in sparse PCA (in the spiked Wigner model) for a broad range of signal-to-noise ratios.
categories: cs.LG, cs.IT
__index_level_0__: 311,741
2411.07457
DecoPrompt : Decoding Prompts Reduces Hallucinations when Large Language Models Meet False Premises
While large language models (LLMs) have demonstrated increasing power, they have also prompted studies of their hallucinated outputs, which deviate from factually correct statements. In this paper, we focus on one important scenario of false premises, where LLMs are distracted by misaligned claims even though the model possesses the required factual knowledge to answer the original questions accurately. Inspired by the observation that the entropy of a false-premise prompt is closely related to its likelihood of eliciting hallucinated generation, we propose a new prompting algorithm, named DecoPrompt, to mitigate hallucination. DecoPrompt leverages LLMs to "decode" the false-premise prompts without actually eliciting hallucinated output from the LLMs. We perform experiments on two datasets, demonstrating that DecoPrompt can reduce hallucinations effectively in the outputs of different LLMs. Moreover, DecoPrompt exhibits cross-model transferability, which facilitates its application to scenarios such as LLMs of large sizes or unavailable model logits.
categories: cs.CL
__index_level_0__: 507,532
1612.02109
A Generalized Mixed-Integer Convex Program for Multilegged Footstep Planning on Uneven Terrain
Robot footstep planning strategies can be divided into two main approaches: discrete searches and continuous optimizations. While discrete searches have been broadly applied, continuous-optimization approaches have been restricted to humanoid platforms. This article introduces a generalized continuous-optimization approach for multilegged footstep planning which can be adapted to different platforms, regardless of the number and geometry of legs. This approach leverages Mixed-Integer Convex Programming to account for the non-convex constraints that represent footstep rotation and obstacle avoidance. The planning problem is formulated as an optimization problem which considers robot geometry and reachability with linear constraints, and can be efficiently solved using optimization software. To demonstrate the functionality and adaptability of the planner, a set of tests is performed on a BH3R hexapod and a LittleDog quadruped in scenarios which can't be easily handled with discrete searches; these tests are solved efficiently in fractions of a second. This work represents, to the knowledge of the authors, the first successful implementation of a continuous optimization-based multilegged footstep planner.
categories: cs.RO
__index_level_0__: 65,184
2104.14754
Exploiting Spatial Dimensions of Latent in GAN for Real-time Image Editing
Generative adversarial networks (GANs) synthesize realistic images from random latent vectors. Although manipulating the latent vectors controls the synthesized outputs, editing real images with GANs suffers from i) time-consuming optimization for projecting real images to the latent vectors, ii) or inaccurate embedding through an encoder. We propose StyleMapGAN: the intermediate latent space has spatial dimensions, and a spatially variant modulation replaces AdaIN. It makes the embedding through an encoder more accurate than existing optimization-based methods while maintaining the properties of GANs. Experimental results demonstrate that our method significantly outperforms state-of-the-art models in various image manipulation tasks such as local editing and image interpolation. Last but not least, conventional editing methods on GANs are still valid on our StyleMapGAN. Source code is available at https://github.com/naver-ai/StyleMapGAN.
categories: cs.LG, cs.CV
__index_level_0__: 232,931
1401.5899
Kernel Least Mean Square with Adaptive Kernel Size
Kernel adaptive filters (KAF) are a class of powerful nonlinear filters developed in Reproducing Kernel Hilbert Space (RKHS). The Gaussian kernel is usually the default kernel in KAF algorithms, but selecting the proper kernel size (bandwidth) is still an important open issue, especially for learning with small sample sizes. In previous research, the kernel size was set manually or estimated in advance by Silverman's rule based on the sample distribution. This study aims to develop an online technique for optimizing the kernel size of the kernel least mean square (KLMS) algorithm. A sequential optimization strategy is proposed, and a new algorithm is developed, in which the filter weights and the kernel size are both sequentially updated by stochastic gradient algorithms that minimize the mean square error (MSE). Theoretical results on convergence are also presented. The excellent performance of the new algorithm is confirmed by simulations on static function estimation and short-term chaotic time-series prediction.
categories: cs.LG
__index_level_0__: 30,270
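The sequential scheme in the abstract above (filter weights and Gaussian kernel size both adapted by stochastic gradient descent on the squared error) can be illustrated for 1-D inputs. The exact gradient form, step sizes, and clipping below are assumptions of this sketch, not the paper's algorithm:

```python
import numpy as np

def klms_adaptive_sigma(x, d, eta_w=0.5, eta_sigma=0.05, sigma=1.0):
    """Kernel LMS in which both the filter coefficients and the Gaussian
    kernel size are updated by stochastic gradient descent on the squared
    error. Step sizes and the sigma clipping range are illustrative."""
    centers, alphas, errors = [], [], []
    for xn, dn in zip(x, d):
        if centers:
            c = np.asarray(centers)
            a = np.asarray(alphas)
            diff2 = (c - xn) ** 2
            k = np.exp(-diff2 / (2.0 * sigma ** 2))   # Gaussian kernel values
            e = dn - a @ k                            # prediction error
            # d(e^2/2)/d(sigma) = -e * sum_i a_i k_i diff2_i / sigma^3
            grad = -e * (a * k * diff2).sum() / sigma ** 3
            sigma = float(np.clip(sigma - eta_sigma * grad, 0.1, 10.0))
        else:
            e = dn                                    # empty filter predicts 0
        centers.append(xn)                            # grow the kernel expansion
        alphas.append(eta_w * e)                      # KLMS coefficient update
        errors.append(e)
    return np.asarray(errors), sigma
```

On a static function-estimation toy problem (e.g. learning d = sin(x) from streaming samples), the running error shrinks as samples arrive while sigma drifts toward a data-driven bandwidth.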
1902.07636
Contributive Social Capital Extraction From Different Types of Online Data Sources
It is a recurring problem of online communication that the properties of unknown people are hard to assess. This may lead to various issues such as the spread of "fake news" from untrustworthy sources. In sociology the sum of (social) resources available to a person through their social network is often described as social capital. In this article, we look at social capital from a different angle. Instead of evaluating the advantage that people have because of their membership in a certain group, we investigate various ways to infer the social capital a person adds or may add to the network, their contributive social capital (CSC). As there is no consensus in the literature on what the social capital of a person exactly consists of, we look at various related properties: expertise, reputation, trustworthiness, and influence. The analysis of these features is investigated for five different sources of online data: microblogging (e.g., Twitter), social networking platforms (e.g., Facebook), direct communication (e.g., email), scientometrics, and threaded discussion boards (e.g., Reddit). In each field we discuss recent publications and put a focus on the data sources used, the algorithms implemented, and the performance evaluation. The findings are compared and set in context to contributive social capital extraction. The analysis algorithms are based on individual features (e.g., followers on Twitter), ratios thereof, or a person's centrality measures (e.g., PageRank). The machine learning approaches, such as straightforward classifiers (e.g., support vector machines) use ground truths that are connected to social capital. The discussion of these methods is intended to facilitate research on the topic by identifying relevant data sources and the best suited algorithms, and by providing tested methods for the evaluation of findings.
categories: cs.SI
__index_level_0__: 122,026
1910.11563
Metric Classification Network in Actual Face Recognition Scene
In order to make facial features more discriminative, some new models have recently been proposed. However, almost all of these models use the traditional face verification method, where the cosine operation is performed using the features of the bottleneck layer output. Moreover, each of these models needs a different decision threshold for each test set it operates on, which is very inappropriate for real-world applications. In this paper, we train a validation classifier to normalize the decision threshold, which means that the result can be obtained directly without adjusting the threshold. We refer to our model as the validation classifier; it achieves its best result with a structure consisting of one convolution layer and six fully connected layers. To test our approach, we conduct extensive experiments on Labeled Faces in the Wild (LFW) and YouTube Faces (YTF), achieving relative error reductions of 25.37% and 26.60% over the traditional method, respectively. These experiments confirm the effectiveness of the validation classifier on the face recognition task.
categories: cs.LG, cs.CV
__index_level_0__: 150,823
2403.07832
DeliGrasp: Inferring Object Properties with LLMs for Adaptive Grasp Policies
Large language models (LLMs) can provide rich physical descriptions of most worldly objects, allowing robots to achieve more informed and capable grasping. We leverage LLMs' common sense physical reasoning and code-writing abilities to infer an object's physical characteristics (mass $m$, friction coefficient $\mu$, and spring constant $k$) from a semantic description, and then translate those characteristics into an executable adaptive grasp policy. Using a two-finger gripper with a built-in depth camera that can control its torque by limiting motor current, we demonstrate that LLM-parameterized but first-principles grasp policies outperform both traditional adaptive grasp policies and direct LLM-as-code policies on a custom benchmark of 12 delicate and deformable items including food, produce, toys, and other everyday items, spanning two orders of magnitude in mass and required pick-up force. We then improve property estimation and grasp performance on variable size objects with model finetuning on property-based comparisons and eliciting such comparisons via chain-of-thought prompting. We also demonstrate how compliance feedback from DeliGrasp policies can aid in downstream tasks such as measuring produce ripeness. Our code and videos are available at: https://deligrasp.github.io
categories: cs.RO
__index_level_0__: 437,043
2208.13064
A Diversity-Aware Domain Development Methodology
The development of domain ontological models, though a mature research arena backed by well-established methodologies, still suffers from two key shortcomings. The first concerns the semantic persistency of ontology concepts and their flexible reuse in domain development under existing approaches. The second is the obfuscation of the semantic nature of domain representations, due to the difficulty of understanding and reusing top-level concepts in existing foundational ontologies. The paper grounds the aforementioned shortcomings in representation diversity and proposes a three-fold solution - (i) a pipeline for rendering concepts reuse-ready, (ii) a first characterization of a minimalistic foundational knowledge model, named foundational teleology, semantically explicating foundational distinctions enforcing the static as well as dynamic nature of domain representations, and (iii) a flexible, reuse-native methodology for diversity-aware domain development exploiting solutions (i) and (ii). The preliminary work reported validates the potential of the solution components.
categories: cs.AI, cs.DB
__index_level_0__: 314,941
2207.04655
Personalizing Federated Medical Image Segmentation via Local Calibration
Medical image segmentation under federated learning (FL) is a promising direction that allows multiple clinical sites to collaboratively learn a global model without centralizing datasets. However, using a single model to adapt to various data distributions from different sites is extremely challenging. Personalized FL tackles this issue by only utilizing partial model parameters shared from the global server, while keeping the rest to adapt to each site's own data distribution during local training. However, most existing methods concentrate on the partial parameter splitting and do not consider the inter-site inconsistencies during local training, which in fact can facilitate knowledge communication across sites and benefit model learning for improving local accuracy. In this paper, we propose a personalized federated framework with Local Calibration (LC-Fed) to leverage inter-site inconsistencies at both the feature and prediction levels to boost segmentation. Concretely, as each local site has its own attention to various features, we first design a contrastive site embedding coupled with a channel selection operation to calibrate the encoded features. Moreover, we propose to exploit knowledge of prediction-level inconsistency to guide personalized modeling of ambiguous regions, e.g., anatomical boundaries. This is achieved by computing a disagreement-aware map to calibrate the prediction. The effectiveness of our method has been verified on three medical image segmentation tasks with different modalities, where it consistently shows superior performance to state-of-the-art personalized FL methods. Code is available at https://github.com/jcwang123/FedLC.
categories: cs.CV
__index_level_0__: 307,268
2411.07725
ALOcc: Adaptive Lifting-based 3D Semantic Occupancy and Cost Volume-based Flow Prediction
Vision-based semantic occupancy and flow prediction plays a crucial role in providing spatiotemporal cues for real-world tasks, such as autonomous driving. Existing methods prioritize higher accuracy to cater to the demands of these tasks. In this work, we strive to improve performance by introducing a series of targeted improvements for 3D semantic occupancy prediction and flow estimation. First, we introduce an occlusion-aware adaptive lifting mechanism with a depth denoising technique to improve the robustness of 2D-to-3D feature transformation and reduce the reliance on depth priors. Second, we strengthen the semantic consistency between 3D features and their original 2D modalities by utilizing shared semantic prototypes to jointly constrain both 2D and 3D features. This is complemented by confidence- and category-based sampling strategies to tackle long-tail challenges in 3D space. To alleviate the feature encoding burden in the joint prediction of semantics and flow, we propose a BEV cost volume-based prediction method that links flow and semantic features through a cost volume and employs a classification-regression supervision scheme to address the varying flow scales in dynamic scenes. Our purely convolutional architecture framework, named ALOcc, achieves an optimal tradeoff between speed and accuracy, attaining state-of-the-art results on multiple benchmarks. On Occ3D, training without the camera-visible mask, ALOcc achieves an absolute gain of 2.5% in RayIoU while operating at a speed comparable to the state-of-the-art, using the same input size (256×704) and ResNet-50 backbone. Our method also achieved 2nd place in the CVPR24 Occupancy and Flow Prediction Competition.
categories: cs.CV
__index_level_0__: 507,656
2408.04189
Artificial Intelligence based Approach for Identification and Mitigation of Cyber-Attacks in Wide-Area Control of Power Systems
We propose a generative adversarial network (GAN) based deep learning method that serves the dual role of both identification and mitigation of cyber-attacks in wide-area damping control loops of power systems. Two specific types of attacks considered are false data injection and denial-of-service (DoS). Unlike existing methods, which are either model-based or model-free and yet require two separate learning modules for detection and mitigation, leading to longer response times before clearing an attack, our deep learner incorporates both goals within the same integrated framework. A Long Short-Term Memory (LSTM) encoder-decoder based GAN is proposed that captures the temporal dynamics of the power system significantly more accurately than fully-connected GANs, thereby providing better accuracy and faster response for both goals. The method is validated using the IEEE 68-bus power system model.
categories: cs.SY
__index_level_0__: 479,280
2105.13318
Synthetic Data Generation for Grammatical Error Correction with Tagged Corruption Models
Synthetic data generation is widely known to boost the accuracy of neural grammatical error correction (GEC) systems, but existing methods often lack diversity or are too simplistic to generate the broad range of grammatical errors made by human writers. In this work, we use error type tags from automatic annotation tools such as ERRANT to guide synthetic data generation. We compare several models that can produce an ungrammatical sentence given a clean sentence and an error type tag. We use these models to build a new, large synthetic pre-training data set with error tag frequency distributions matching a given development set. Our synthetic data set yields large and consistent gains, improving the state-of-the-art on the BEA-19 and CoNLL-14 test sets. We also show that our approach is particularly effective in adapting a GEC system, trained on mixed native and non-native English, to a native English test set, even surpassing real training data consisting of high-quality sentence pairs.
categories: cs.CL
__index_level_0__: 237,268
2306.06102
Backup Plan Constrained Model Predictive Control with Guaranteed Stability
This article proposes and evaluates a new safety concept called backup plan safety for path planning of autonomous vehicles under mission uncertainty using model predictive control (MPC). Backup plan safety is defined as the ability to complete an alternative mission when the primary mission is aborted. To include this new safety concept in control problems, we formulate a feasibility maximization problem aiming to maximize the feasibility of the primary and alternative missions. The feasibility maximization problem is based on multi-objective MPC, where each objective (cost function) is associated with a different mission and balanced by a weight vector. Furthermore, the feasibility maximization problem incorporates additional control input horizons toward the alternative missions on top of the control input horizon toward the primary mission, denoted as multi-horizon inputs, to evaluate the cost for each mission. We develop the backup plan constrained MPC algorithm, which designs the weight vector that ensures asymptotic stability of the closed-loop system, and generates the optimal control input by solving the feasibility maximization problem with computational efficiency. The performance of the proposed algorithm is validated through simulations of a UAV path planning problem.
categories: cs.SY
__index_level_0__: 372,454
1611.04655
Motion Estimated-Compensated Reconstruction with Preserved-Features in Free-Breathing Cardiac MRI
To develop an efficient motion-compensated reconstruction technique for free-breathing cardiac magnetic resonance imaging (MRI) that allows high-quality images to be reconstructed from multiple undersampled single-shot acquisitions. The proposed method is a joint image reconstruction and motion correction method consisting of several steps, including a non-rigid motion extraction and a motion-compensated reconstruction. The reconstruction includes a denoising with the Beltrami regularization, which offers an ideal compromise between feature preservation and staircasing reduction. Results were assessed in simulation, phantom and volunteer experiments. The proposed joint image reconstruction and motion correction method exhibits visible quality improvement over previous methods while reconstructing sharper edges. Moreover, when the acceleration factor increases, standard methods show blurry results while the proposed method preserves image quality. The method was applied to free-breathing single-shot cardiac MRI, successfully achieving high image quality and higher spatial resolution than conventional segmented methods, with the potential to offer high-quality delayed enhancement scans in challenging patients.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
63,879
2203.09279
Transfer learning for cross-modal demand prediction of bike-share and public transit
The urban transportation system is a combination of multiple transport modes, and interdependencies across those modes exist. This means that travel demand across different travel modes could be correlated, as one mode may receive demand from or create demand for another mode, not to mention natural correlations between different demand time series due to general demand flow patterns across the network. It is expected that cross-modal ripple effects will become more prevalent with Mobility as a Service. Therefore, by propagating demand data across modes, a better demand prediction could be obtained. To this end, this study explores various machine learning models and transfer learning strategies for cross-modal demand prediction. The trip data of bike-share, metro, and taxi are processed as station-level passenger flows, and then the proposed prediction method is tested in large-scale case studies of Nanjing and Chicago. The results suggest that prediction models with transfer learning perform better than unimodal prediction models. Furthermore, the stacked Long Short-Term Memory model performs particularly well in cross-modal demand prediction. These results verify our combined method's forecasting improvement over existing benchmarks and demonstrate good transferability for cross-modal demand prediction in multiple cities.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
286,100
2012.03243
V2I-Based Platooning Design with Delay Awareness
This paper studies the vehicle platooning system based on vehicle-to-infrastructure (V2I) communication, where all the vehicles in the platoon upload their driving state information to the roadside unit (RSU), and RSU makes the platoon control decisions with the assistance of edge computing. By addressing the delay concern, a platoon control approach is proposed to achieve plant stability and string stability. The effects of the time headway, communication and edge computing delays on the stability are quantified. The velocity and size of the stable platoon are calculated, which show the impacts of the radio parameters such as massive MIMO antennas and frequency band on the platoon configuration. The handover performance between RSUs in the V2I-based platooning system is quantified by considering the effects of the RSU's coverage and platoon size, which demonstrates that the velocity of a stable platoon should be appropriately chosen, in order to meet the V2I's Quality-of-Service and handover constraints.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
210,047
2205.06871
Near-Negative Distinction: Giving a Second Life to Human Evaluation Datasets
Precisely assessing the progress in natural language generation (NLG) tasks is challenging, and human evaluation to establish a preference in a model's output over another is often necessary. However, human evaluation is usually costly, difficult to reproduce, and non-reusable. In this paper, we propose a new and simple automatic evaluation method for NLG called Near-Negative Distinction (NND) that repurposes prior human annotations into NND tests. In an NND test, an NLG model must place a higher likelihood on a high-quality output candidate than on a near-negative candidate with a known error. Model performance is established by the number of NND tests a model passes, as well as the distribution over task-specific errors the model fails on. Through experiments on three NLG tasks (question generation, question answering, and summarization), we show that NND achieves a higher correlation with human judgments than standard NLG evaluation metrics. We then illustrate NND evaluation in four practical scenarios, for example performing fine-grained model analysis, or studying model training dynamics. Our findings suggest that NND can give a second life to human annotations and provide low-cost NLG evaluation.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
296,379
2212.05606
Transductive Linear Probing: A Novel Framework for Few-Shot Node Classification
Few-shot node classification is tasked to provide accurate predictions for nodes from novel classes with only a few representative labeled nodes. This problem has drawn tremendous attention for its projection to prevailing real-world applications, such as product categorization for newly added commodity categories on an E-commerce platform with scarce records or diagnoses for rare diseases on a patient similarity graph. To tackle such challenging label scarcity issues in the non-Euclidean graph domain, meta-learning has become a successful and predominant paradigm. More recently, inspired by the development of graph self-supervised learning, transferring pretrained node embeddings for few-shot node classification could be a promising alternative to meta-learning but remains unexplored. In this work, we empirically demonstrate the potential of an alternative framework, \textit{Transductive Linear Probing}, that transfers pretrained node embeddings, which are learned from graph contrastive learning methods. We further extend the setting of few-shot node classification from standard fully supervised to a more realistic self-supervised setting, where meta-learning methods cannot be easily deployed due to the shortage of supervision from training classes. Surprisingly, even without any ground-truth labels, transductive linear probing with self-supervised graph contrastive pretraining can outperform the state-of-the-art fully supervised meta-learning based methods under the same protocol. We hope this work can shed new light on few-shot node classification problems and foster future research on learning from scarcely labeled instances on graphs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
335,834
2407.09486
ENOVA: Autoscaling towards Cost-effective and Stable Serverless LLM Serving
With the increasing popularity of large language model (LLM) backend systems, it has become common and necessary to deploy stable serverless serving of LLMs on multi-GPU clusters with autoscaling. However, challenges exist because the diversity and co-location of applications in multi-GPU clusters lead to low service quality and GPU utilization. To address them, we build ENOVA, a deployment, monitoring and autoscaling service towards serverless LLM serving. ENOVA deconstructs the execution process of LLM service comprehensively, based on which ENOVA designs a configuration recommendation module for automatic deployment on any GPU clusters and a performance detection module for autoscaling. On top of them, ENOVA implements a deployment execution engine for multi-GPU cluster scheduling. The experiment results show that ENOVA significantly outperforms other state-of-the-art methods and is suitable for wide deployment in large online systems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
472,585
2005.02865
An accurate methodology for surface tension modeling in OpenFOAM
In this paper a numerical methodology for surface tension modeling is presented, with an emphasis on the implementation in the OpenFOAM framework. The methodology relies on a combination of (i) a well-balanced approach based on the Ghost Fluid Method (GFM), including the jump of density and pressure directly in the numerical discretization of the pressure equation, and (ii) Height Functions to evaluate the interface curvature, implemented, to the authors' knowledge, for the first time in OpenFOAM. The method is able to significantly reduce spurious currents (almost to machine accuracy) for a stationary droplet, showing second order convergence both for the curvature and the interface shape. Accurate results are also obtained for additional test cases such as translating droplets, capillary oscillations and rising bubbles, for which numerical results are comparable to those obtained with other numerical codes in the same conditions. Finally, the Height Functions method is extended to include the treatment of contact angles, both for sessile droplets and droplets suspended under the effect of gravity, showing very good agreement with the theoretical prediction. The code works in parallel mode and details on the actual implementation in OpenFOAM are included to facilitate the reproducibility of the results.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
175,993
1812.00049
The Indus Script and Economics. A Role for Indus Seals and Tablets in Rationing and Administration of Labor
The Indus script remains one of the last major undeciphered scripts of the ancient world. We focus here on Indus inscriptions on a group of miniature tablets discovered by Meadow and Kenoyer in Harappa in 1997. By drawing parallels with proto-Elamite and proto-Cuneiform inscriptions, we explore how these miniature tablets may have been used to record rations allocated to porters or laborers. We then show that similar inscriptions are found on stamp seals, leading to the potentially provocative conclusion that rather than simply indicating ownership of property, Indus seals may have been used for generating tokens, tablets and sealings for repetitive economic transactions such as rations and exchange of canonical amounts of goods, grains, animals, and labor in a barter-based economy.
false
false
false
true
false
false
false
false
true
false
false
false
false
true
false
false
false
false
115,140
2501.17704
Inferring Implicit Goals Across Differing Task Models
One of the significant challenges to generating value-aligned behavior is to not only account for the specified user objectives but also any implicit or unspecified user requirements. The existence of such implicit requirements could be particularly common in settings where the user's understanding of the task model may differ from the agent's estimate of the model. Under this scenario, the user may incorrectly expect some agent behavior to be inevitable or guaranteed. This paper addresses such expectation mismatch in the presence of differing models by capturing the possibility of an unspecified user subgoal in the context of a task captured as a Markov Decision Process (MDP) and querying for it as required. Our method identifies bottleneck states and uses them as candidates for potential implicit subgoals. We then introduce a querying strategy that will generate the minimal number of queries required to identify a policy guaranteed to achieve the underlying goal. Our empirical evaluations demonstrate the effectiveness of our approach in inferring and achieving unstated goals across various tasks.
false
false
false
false
true
false
false
true
false
false
true
false
false
false
false
false
false
false
528,425
2306.08729
Towards vision-based dual arm robotic fruit harvesting
Interest in agricultural robotics has increased considerably in recent years due to benefits such as improvement in productivity and labor reduction. However, current problems associated with unstructured environments make the development of robotic harvesters challenging. Most research in agricultural robotics focuses on single arm manipulation. Here, we propose a dual-arm approach. We present a dual-arm fruit harvesting robot equipped with an RGB-D camera, cutting and collecting tools. We exploit the cooperative task description to maximize the capabilities of the dual-arm robot. We designed a Hierarchical Quadratic Programming based control strategy to fulfill the set of hard constraints related to the robot and environment: robot joint limits, robot self-collisions, robot-fruit and robot-tree collisions. We combine deep learning and standard image processing algorithms to detect and track fruits as well as the tree trunk in the scene. We validate our perception methods on real-world RGB-D images and our control method in simulated experiments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
373,513
2111.07158
Robust Deep Reinforcement Learning for Extractive Legal Summarization
Automatic summarization of legal texts is an important and still a challenging task since legal documents are often long and complicated with unusual structures and styles. Recent advances of deep models trained end-to-end with differentiable losses can well-summarize natural text, yet when applied to legal domain, they show limited results. In this paper, we propose to use reinforcement learning to train current deep summarization models to improve their performance on the legal domain. To this end, we adopt proximal policy optimization methods and introduce novel reward functions that encourage the generation of candidate summaries satisfying both lexical and semantic criteria. We apply our method to training different summarization backbones and observe a consistent and significant performance gain across 3 public legal datasets.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
266,294
1501.03002
An Improvement to the Domain Adaptation Bound in a PAC-Bayesian context
This paper provides a theoretical analysis of domain adaptation based on the PAC-Bayesian theory. We propose an improvement of the previous domain adaptation bound obtained by Germain et al. in two ways. We first give another generalization bound tighter and easier to interpret. Moreover, we provide a new analysis of the constant term appearing in the bound that can be of high interest for developing new algorithmic solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
39,236
2412.07228
T-TIME: Test-Time Information Maximization Ensemble for Plug-and-Play BCIs
Objective: An electroencephalogram (EEG)-based brain-computer interface (BCI) enables direct communication between the human brain and a computer. Due to individual differences and non-stationarity of EEG signals, such BCIs usually require a subject-specific calibration session before each use, which is time-consuming and user-unfriendly. Transfer learning (TL) has been proposed to shorten or eliminate this calibration, but existing TL approaches mainly consider offline settings, where all unlabeled EEG trials from the new user are available. Methods: This paper proposes Test-Time Information Maximization Ensemble (T-TIME) to accommodate the most challenging online TL scenario, where unlabeled EEG data from the new user arrive in a stream, and immediate classification is performed. T-TIME initializes multiple classifiers from the aligned source data. When an unlabeled test EEG trial arrives, T-TIME first predicts its labels using ensemble learning, and then updates each classifier by conditional entropy minimization and adaptive marginal distribution regularization. Our code is publicly available. Results: Extensive experiments on three public motor imagery based BCI datasets demonstrated that T-TIME outperformed about 20 classical and state-of-the-art TL approaches. Significance: To our knowledge, this is the first work on test-time adaptation for calibration-free EEG-based BCIs, making plug-and-play BCIs possible.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
515,578
2405.13077
GPT-4 Jailbreaks Itself with Near-Perfect Success Using Self-Explanation
Research on jailbreaking has been valuable for testing and understanding the safety and security issues of large language models (LLMs). In this paper, we introduce Iterative Refinement Induced Self-Jailbreak (IRIS), a novel approach that leverages the reflective capabilities of LLMs for jailbreaking with only black-box access. Unlike previous methods, IRIS simplifies the jailbreaking process by using a single model as both the attacker and target. This method first iteratively refines adversarial prompts through self-explanation, which is crucial for ensuring that even well-aligned LLMs obey adversarial instructions. IRIS then rates and enhances the output given the refined prompt to increase its harmfulness. We find that IRIS achieves jailbreak success rates of 98% on GPT-4, 92% on GPT-4 Turbo, and 94% on Llama-3.1-70B in under 7 queries. It significantly outperforms prior approaches in automatic, black-box, and interpretable jailbreaking, while requiring substantially fewer queries, thereby establishing a new standard for interpretable jailbreaking methods.
false
false
false
false
true
false
false
false
true
false
false
false
true
false
false
false
false
false
455,796
1804.00117
Multi-label Learning with Missing Labels using Mixed Dependency Graphs
This work focuses on the problem of multi-label learning with missing labels (MLML), which aims to label each test instance with multiple class labels given training instances that have an incomplete/partial set of these labels. The key point to handling missing labels is propagating the label information from provided labels to missing labels, through a dependency graph in which each label of each instance is treated as a node. We build this graph by utilizing different types of label dependencies. Specifically, the instance-level similarity serves as undirected edges to connect the label nodes across different instances, and the semantic label hierarchy is used as directed edges to connect different classes. This base graph is referred to as the mixed dependency graph, as it includes both undirected and directed edges. Furthermore, we present another two types of label dependencies to connect the label nodes across different classes. One is the class co-occurrence, which is also encoded as undirected edges. Combining with the base graph, we obtain a new mixed graph, called MG-CO (mixed graph with co-occurrence). The other is the sparse and low rank decomposition of the whole label matrix, to embed high-order dependencies over all labels. Combining with the base graph, the new mixed graph is called MG-SL (mixed graph with sparse and low rank decomposition). Based on MG-CO and MG-SL, we propose two convex transductive formulations of the MLML problem, denoted as MLMG-CO and MLMG-SL, respectively. Two important applications, including image annotation and tag based image retrieval, can be jointly handled using our proposed methods. Experiments on benchmark datasets show that our methods give significant improvements in performance and robustness to missing labels over the state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
93,944
2410.13853
AutoAL: Automated Active Learning with Differentiable Query Strategy Search
As deep learning continues to evolve, the need for data efficiency becomes increasingly important. Considering labeling large datasets is both time-consuming and expensive, active learning (AL) provides a promising solution to this challenge by iteratively selecting the most informative subsets of examples to train deep neural networks, thereby reducing the labeling cost. However, the effectiveness of different AL algorithms can vary significantly across data scenarios, and determining which AL algorithm best fits a given task remains a challenging problem. This work presents the first differentiable AL strategy search method, named AutoAL, which is designed on top of existing AL sampling strategies. AutoAL consists of two neural nets, named SearchNet and FitNet, which are optimized concurrently under a differentiable bi-level optimization framework. For any given task, SearchNet and FitNet are iteratively co-optimized using the labeled data, learning how well a set of candidate AL algorithms perform on that task. With the optimal AL strategies identified, SearchNet selects a small subset from the unlabeled pool for querying their annotations, enabling efficient training of the task model. Experimental results demonstrate that AutoAL consistently achieves superior accuracy compared to all candidate AL algorithms and other selective AL approaches, showcasing its potential for adapting and integrating multiple existing AL methods across diverse tasks and domains. Code will be available at: https://github.com/haizailache999/AutoAL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
499,730
1709.04579
Autonomous Extracting a Hierarchical Structure of Tasks in Reinforcement Learning and Multi-task Reinforcement Learning
Reinforcement learning (RL), while often powerful, can suffer from slow learning speeds, particularly in high dimensional spaces. The autonomous decomposition of tasks and use of hierarchical methods hold the potential to significantly speed up learning in such domains. This paper proposes a novel practical method that can autonomously decompose tasks by leveraging association rule mining, which discovers hidden relationships among entities in data mining. We introduce a novel method called ARM-HSTRL (Association Rule Mining to extract Hierarchical Structure of Tasks in Reinforcement Learning). It extracts temporal and structural relationships of sub-goals in RL and multi-task RL. In particular, it finds sub-goals and the relationships among them. The significant efficiency and performance of the proposed method are shown in two main topics of RL.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
80,691
2012.04224
KNN-enhanced Deep Learning Against Noisy Labels
Supervised learning on Deep Neural Networks (DNNs) is data hungry. Optimizing performance of DNN in the presence of noisy labels has become of paramount importance since collecting a large dataset will usually bring in noisy labels. Inspired by the robustness of K-Nearest Neighbors (KNN) against data noise, in this work, we propose to apply deep KNN for label cleanup. Our approach leverages DNNs for feature extraction and KNN for ground-truth label inference. We iteratively train the neural network and update labels to simultaneously proceed towards higher label recovery rate and better classification performance. Experiment results show that under the same setting, our approach outperforms existing label correction methods and achieves better accuracy on multiple datasets, e.g.,76.78% on Clothing1M dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
210,387
1406.1476
A Context-aware Delayed Agglomeration Framework for Electron Microscopy Segmentation
Electron Microscopy (EM) image (or volume) segmentation has become significantly important in recent years as an instrument for connectomics. This paper proposes a novel agglomerative framework for EM segmentation. In particular, given an over-segmented image or volume, we propose a novel framework for accurately clustering regions of the same neuron. Unlike existing agglomerative methods, the proposed context-aware algorithm divides superpixels (over-segmented regions) of different biological entities into different subsets and agglomerates them separately. In addition, this paper describes a "delayed" scheme for agglomerative clustering that postpones some of the merge decisions, pertaining to newly formed bodies, in order to generate a more confident boundary prediction. We report significant improvements attained by the proposed approach in segmentation accuracy over existing standard methods on 2D and 3D datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
33,635
1309.3752
Novel Repair-by-Transfer Codes and Systematic Exact-MBR Codes with Lower Complexities and Smaller Field Sizes
The $(n,k,d)$ regenerating code is a class of $(n,k)$ erasure codes with the capability to recover a lost code fragment from other $d$ existing code fragments. This paper concentrates on the design of exact regenerating codes at Minimum Bandwidth Regenerating (MBR) points. For $d=n-1$, a class of $(n,k,d=n-1)$ Exact-MBR codes, termed as repair-by-transfer codes, have been developed in prior work to avoid arithmetic operations in node repairing process. The first result of this paper presents a new class of repair-by-transfer codes via congruent transformations. As compared with the prior works, the advantages of the proposed codes include: i) The minimum of the finite field size is significantly reduced from $n \choose 2$ to $n$. ii) The encoding complexity is decreased from $n^4$ to $n^3$. As shown in simulations, the proposed repair-by-transfer codes have lower computational overhead when $n$ is greater than a specific constant. The second result of this paper presents a new form of coding matrix for product-matrix Exact-MBR codes. The proposed coding matrix includes a number of advantages: i). The minimum of the finite field size is reduced from $n-k+d$ to $n$. ii). The fast Reed-Solomon erasure coding algorithms can be applied on the Exact-MBR codes to reduce the time complexities.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
27,043
2012.10545
A 3D GAN for Improved Large-pose Facial Recognition
Facial recognition using deep convolutional neural networks relies on the availability of large datasets of face images. Many examples of identities are needed, and for each identity, a large variety of images are needed in order for the network to learn robustness to intra-class variation. In practice, such datasets are difficult to obtain, particularly those containing adequate variation of pose. Generative Adversarial Networks (GANs) provide a potential solution to this problem due to their ability to generate realistic, synthetic images. However, recent studies have shown that current methods of disentangling pose from identity are inadequate. In this work we incorporate a 3D morphable model into the generator of a GAN in order to learn a nonlinear texture model from in-the-wild images. This allows generation of new, synthetic identities, and manipulation of pose, illumination and expression without compromising the identity. Our synthesised data is used to augment training of facial recognition networks with performance evaluated on the challenging CFP and CPLFW datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,369
2209.12427
Learning Continuous Control Policies for Information-Theoretic Active Perception
This paper proposes a method for learning continuous control policies for active landmark localization and exploration using an information-theoretic cost. We consider a mobile robot detecting landmarks within a limited sensing range, and tackle the problem of learning a control policy that maximizes the mutual information between the landmark states and the sensor observations. We employ a Kalman filter to convert the partially observable problem in the landmark state into a Markov decision process (MDP), a differentiable field of view to shape the reward, and an attention-based neural network to represent the control policy. The approach is further unified with active volumetric mapping to promote exploration in addition to landmark localization. The performance is demonstrated in several simulated landmark localization tasks in comparison with benchmark methods.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
319,532
2401.02884
MsDC-DEQ-Net: Deep Equilibrium Model (DEQ) with Multi-scale Dilated Convolution for Image Compressive Sensing (CS)
Compressive sensing (CS) is a technique that enables the recovery of sparse signals using fewer measurements than traditional sampling methods. To address the computational challenges of CS reconstruction, our objective is to develop an interpretable and concise neural network model for reconstructing natural images using CS. We achieve this by mapping one step of the iterative shrinkage thresholding algorithm (ISTA) to a deep network block, representing one iteration of ISTA. To enhance learning ability and incorporate structural diversity, we integrate aggregated residual transformations (ResNeXt) and squeeze-and-excitation (SE) mechanisms into the ISTA block. This block serves as a deep equilibrium layer, connected to a semi-tensor product network (STP-Net) for convenient sampling and providing an initial reconstruction. The resulting model, called MsDC-DEQ-Net, exhibits competitive performance compared to state-of-the-art network-based methods. It significantly reduces storage requirements compared to deep unrolling methods, using only one iteration block instead of multiple iterations. Unlike deep unrolling models, MsDC-DEQ-Net can be iteratively used, gradually improving reconstruction accuracy while considering computation trade-offs. Additionally, the model benefits from multi-scale dilated convolutions, further enhancing performance.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
419,876
2201.12451
Extracting Finite Automata from RNNs Using State Merging
One way to interpret the behavior of a blackbox recurrent neural network (RNN) is to extract from it a more interpretable discrete computational model, like a finite state machine, that captures its behavior. In this work, we propose a new method for extracting finite automata from RNNs inspired by the state merging paradigm from grammatical inference. We demonstrate the effectiveness of our method on the Tomita languages benchmark, where we find that it is able to extract faithful automata from RNNs trained on all languages in the benchmark. We find that extraction performance is aided by the amount of data provided during the extraction process, as well as, curiously, whether the RNN model is trained for additional epochs after perfectly learning its target language. We use our method to analyze this phenomenon, finding that training beyond convergence is useful because it leads to compression of the internal state space of the RNN. This finding demonstrates how our method can be used for interpretability and analysis of trained RNN models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
277,648
2405.04396
Predicting Transonic Flowfields in Non-Homogeneous Unstructured Grids Using Autoencoder Graph Convolutional Networks
This paper focuses on addressing challenges posed by non-homogeneous unstructured grids, commonly used in Computational Fluid Dynamics (CFD). Their prevalence in CFD scenarios has motivated the exploration of innovative approaches for generating reduced-order models. The core of our approach centers on geometric deep learning, specifically the utilization of graph convolutional networks (GCNs). The novel Autoencoder GCN architecture enhances prediction accuracy by propagating information to distant nodes and emphasizing influential points. This architecture, with GCN layers and encoding/decoding modules, reduces dimensionality based on pressure-gradient values. The autoencoder structure improves the network's capability to identify key features, contributing to a more robust and accurate predictive model. To validate the proposed methodology, we analyzed two different test cases: a wing-only model and a wing-body configuration. Precise reconstruction of steady-state distributed quantities within a two-dimensional parametric space underscores the reliability and versatility of the implemented approach.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
452,558
2305.15216
Open Source High Fidelity Modeling of a Type 5 Wind Turbine Drivetrain for Grid Integration
The increasing integration of renewable energy resources in the evolving bulk power system (BPS) is impacting the system inertia. Type-5 wind turbine generation has the potential to behave like a traditional synchronous generator and can help improve system inertia. A hydraulic torque converter (TC) and a gearbox with a torque-limiting feature are integral parts of a Type-5 wind turbine unit. A high-fidelity model of a Type-5 wind turbine including these core components is not openly and widely available for grid integration and transient stability studies. This hinders appropriate assessment of a Type-5 wind power plant's contribution to bulk grid resilience. This work develops a TC model based on those generally used in automobile transmission systems. Moreover, the concept of torsional coupling is leveraged to integrate the TC and gearbox system dynamics. The entire integrated model will be open sourced and publicly available for grid integration studies.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
367,508
2403.16043
Semantic Is Enough: Only Semantic Information For NeRF Reconstruction
Recent research that combines implicit 3D representation with semantic information, like Semantic-NeRF, has proven that the NeRF model can perform excellently in rendering 3D structures with semantic labels. This research aims to extend the Semantic Neural Radiance Fields (Semantic-NeRF) model by focusing solely on semantic output and removing the RGB output component. We reformulate the model and its training procedure to leverage only the cross-entropy loss between the model's semantic output and the ground-truth semantic images, removing the colour data traditionally used in the original Semantic-NeRF approach. We then conduct a series of identical experiments using the original and the modified Semantic-NeRF model. Our primary objective is to observe the impact of this modification on the model's performance, focusing on tasks such as scene understanding, object detection, and segmentation. The results offer valuable insights into this new way of rendering scenes and provide an avenue for further research and development in semantic-focused 3D scene understanding.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
440,847
2207.04214
Adaptive Structural Similarity Preserving for Unsupervised Cross Modal Hashing
Cross-modal hashing is an important approach for multimodal data management and application. Existing unsupervised cross-modal hashing algorithms mainly rely on data features in pre-trained models to mine their similarity relationships. However, their optimization objectives are based on the static metric between the original uni-modal features, without further exploring data correlations during the training. In addition, most of them mainly focus on association mining and alignment among pairwise instances in continuous space but ignore the latent structural correlations contained in the semantic hashing space. In this paper, we propose an unsupervised hash learning framework, namely Adaptive Structural Similarity Preservation Hashing (ASSPH), to solve the above problems. Firstly, we propose an adaptive learning scheme, with limited data and training batches, to enrich semantic correlations of unlabeled instances during the training process and meanwhile to ensure a smooth convergence of the training process. Secondly, we present an asymmetric structural semantic representation learning scheme. We introduce structural semantic metrics based on graph adjacency relations during the semantic reconstruction and correlation mining stage and meanwhile align the structure semantics in the hash space with an asymmetric binary optimization process. Finally, we conduct extensive experiments to validate the enhancements of our work in comparison with existing works.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
307,124
0903.1624
Instanton-based Techniques for Analysis and Reduction of Error Floors of LDPC Codes
We describe a family of instanton-based optimization methods developed recently for the analysis of the error floors of low-density parity-check (LDPC) codes. Instantons are the most probable configurations of the channel noise which result in decoding failures. We show that the general idea and the respective optimization technique are applicable broadly to a variety of channels, discrete or continuous, and a variety of sub-optimal decoders. Specifically, we consider: iterative belief propagation (BP) decoders, Gallager type decoders, and linear programming (LP) decoders performing over the additive white Gaussian noise channel (AWGNC) and the binary symmetric channel (BSC). The instanton analysis suggests that the underlying topological structures of the most probable instantons for the same code but different channels and decoders are related to each other. Armed with this understanding of the graphical structure of the instanton and its relation to the decoding failures, we suggest a method to construct codes whose Tanner graphs are free of these structures, and thus have less significant error floors.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
3,317
2011.14277
Intrinsic Knowledge Evaluation on Chinese Language Models
Recent NLP tasks have benefited a lot from pre-trained language models (LM) since they are able to encode knowledge of various aspects. However, current LM evaluations focus on downstream performance, and hence fail to comprehensively inspect in which aspects and to what extent they have encoded knowledge. This paper addresses both queries by proposing four tasks on syntactic, semantic, commonsense, and factual knowledge, aggregating to a total of $39,308$ questions covering both linguistic and world knowledge in Chinese. Throughout experiments, our probes and knowledge data prove to be a reliable benchmark for evaluating pre-trained Chinese LMs. Our work is publicly available at https://github.com/ZhiruoWang/ChnEval.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
208,726
1502.07996
Sparse Time-Frequency Representation for Signals with Fast Varying Instantaneous Frequency
Time-frequency distributions have been used to provide high resolution representation in a large number of signal processing applications. However, high resolution and accurate instantaneous frequency (IF) estimation usually depend on the employed distribution and complexity of the signal phase function. To ensure efficient IF tracking for various types of signals, the class of complex time distributions has been developed. These distributions facilitate analysis in the cases when standard distributions cannot provide satisfactory results (e.g., for highly nonstationary signal phase). In that sense, an ambiguity-based form of the fourth-order complex-time distribution is considered, in a new compressive sensing (CS) context. CS is an intensively growing approach in signal processing that allows efficient analysis and reconstruction of randomly undersampled signals. In this paper, the randomly chosen ambiguity domain coefficients serve as CS measurements. By exploiting sparsity in the time-frequency plane, it is possible to obtain a highly concentrated IF using just a small number of random coefficients from the ambiguity domain. Moreover, in the noisy signal case, this CS approach can be efficiently combined with the L-statistics, producing robust time-frequency representations. Noisy coefficients are first removed using the L-statistics and then reconstructed using the CS algorithm. The theoretical considerations are illustrated using experimental results.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
40,633
2201.01490
Debiased Learning from Naturally Imbalanced Pseudo-Labels
Pseudo-labels are confident predictions made on unlabeled target data by a classifier trained on labeled source data. They are widely used for adapting a model to unlabeled data, e.g., in a semi-supervised learning setting. Our key insight is that pseudo-labels are naturally imbalanced due to intrinsic data similarity, even when a model is trained on balanced source data and evaluated on balanced target data. If we address this previously unknown imbalanced classification problem arising from pseudo-labels instead of ground-truth training labels, we could remove model biases towards false majorities created by pseudo-labels. We propose a novel and effective debiased learning method with pseudo-labels, based on counterfactual reasoning and adaptive margins: The former removes the classifier response bias, whereas the latter adjusts the margin of each class according to the imbalance of pseudo-labels. Validated by extensive experimentation, our simple debiased learning delivers significant accuracy gains over the state-of-the-art on ImageNet-1K: 26% for semi-supervised learning with 0.2% annotations and 9% for zero-shot learning. Our code is available at: https://github.com/frank-xwang/debiased-pseudo-labeling.
false
false
false
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
274,270
2206.04783
ReFace: Real-time Adversarial Attacks on Face Recognition Systems
Deep neural network based face recognition models have been shown to be vulnerable to adversarial examples. However, many of the past attacks require the adversary to solve an input-dependent optimization problem using gradient descent which makes the attack impractical in real-time. These adversarial examples are also tightly coupled to the attacked model and are not as successful in transferring to different models. In this work, we propose ReFace, a real-time, highly-transferable attack on face recognition models based on Adversarial Transformation Networks (ATNs). ATNs model adversarial example generation as a feed-forward neural network. We find that the white-box attack success rate of a pure U-Net ATN falls substantially short of gradient-based attacks like PGD on large face recognition datasets. We therefore propose a new architecture for ATNs that closes this gap while maintaining a 10000x speedup over PGD. Furthermore, we find that at a given perturbation magnitude, our ATN adversarial perturbations are more effective in transferring to new face recognition models than PGD. ReFace attacks can successfully deceive commercial face recognition services in a transfer attack setting and reduce face identification accuracy from 82% to 16.4% for AWS SearchFaces API and Azure face verification accuracy from 91% to 50.1%.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
301,763
2203.12552
Organic log-domain integrator synapse
Synapses play a critical role in memory, learning, and cognition. Their main functions include converting pre-synaptic voltage spikes to post-synaptic currents, as well as scaling the input signal. Several brain-inspired architectures have been proposed to emulate the behavior of biological synapses. While these are useful to explore the properties of nervous systems, the challenge of making biocompatible and flexible circuits with biologically plausible time constants and tunable gain remains. Here, a physically flexible organic log-domain integrator synaptic circuit is shown to address this challenge. In particular, the circuit is fabricated using organic-based materials that are electrically active, offer flexibility and biocompatibility, as well as time constants (critical in learning neural codes and encoding spatiotemporal patterns) that are biologically plausible. Using a 10 nF synaptic capacitor, the time constant reached 126 ms and 221 ms before and during bending, respectively. The flexible synaptic circuit is characterized before and during bending, followed by studies on the effects of weighting voltage, synaptic capacitance, and disparity in pre-synaptic signals on the time constant.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
287,307
2402.14568
LLM-DA: Data Augmentation via Large Language Models for Few-Shot Named Entity Recognition
Despite the impressive capabilities of large language models (LLMs), their performance on information extraction tasks is still not entirely satisfactory. However, their remarkable rewriting capabilities and extensive world knowledge offer valuable insights to improve these tasks. In this paper, we propose $LLM-DA$, a novel data augmentation technique based on LLMs for the few-shot NER task. To overcome the limitations of existing data augmentation methods that compromise semantic integrity and address the uncertainty inherent in LLM-generated text, we leverage the distinctive characteristics of the NER task by augmenting the original data at both the contextual and entity levels. Our approach involves employing 14 contextual rewriting strategies, designing entity replacements of the same type, and incorporating noise injection to enhance robustness. Extensive experiments demonstrate the effectiveness of our approach in enhancing NER model performance with limited data. Furthermore, additional analyses provide further evidence supporting the assertion that the quality of the data we generate surpasses that of other existing methods.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
431,739
2309.07139
A Traffic Management Framework for On-Demand Urban Air Mobility Systems
Urban Air Mobility (UAM) offers a solution to current traffic congestion by providing on-demand air mobility in urban areas. Effective traffic management is crucial for efficient operation of UAM systems, especially for high-demand scenarios. In this paper, we present a centralized traffic management framework for on-demand UAM systems. Specifically, we provide a scheduling policy, called VertiSync, which schedules the aircraft for either servicing trip requests or rebalancing in the system subject to aircraft safety margins and energy requirements. We characterize the system-level throughput of VertiSync, which determines the demand threshold at which passenger waiting times transition from being stabilized to being increasing over time. We show that the proposed policy is able to maximize throughput for sufficiently large fleet sizes. We demonstrate the performance of VertiSync through a case study for the city of Los Angeles, and show that it significantly reduces passenger waiting times compared to a first-come first-serve scheduling policy.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
true
false
false
true
391,670
2410.07695
Test-Time Intensity Consistency Adaptation for Shadow Detection
Shadow detection is crucial for accurate scene understanding in computer vision, yet it is challenged by the diverse appearances of shadows caused by variations in illumination, object geometry, and scene context. Deep learning models often struggle to generalize to real-world images due to the limited size and diversity of training datasets. To address this, we introduce TICA, a novel framework that leverages light-intensity information during test-time adaptation to enhance shadow detection accuracy. TICA exploits the inherent inconsistencies in light intensity across shadow regions to guide the model toward a more consistent prediction. A basic encoder-decoder model is initially trained on a labeled dataset for shadow detection. Then, during the testing phase, the network is adjusted for each test sample by enforcing consistent intensity predictions between two augmented input image versions. This consistency training specifically targets both foreground and background intersection regions to identify shadow regions within images accurately for robust adaptation. Extensive evaluations on the ISTD and SBU shadow detection datasets reveal that TICA significantly outperforms existing state-of-the-art methods, achieving superior results in balanced error rate (BER).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
496,753
2309.12756
Towards an MLOps Architecture for XAI in Industrial Applications
Machine learning (ML) has become a popular tool in the industrial sector as it helps to improve operations, increase efficiency, and reduce costs. However, deploying and managing ML models in production environments can be complex. This is where Machine Learning Operations (MLOps) comes in. MLOps aims to streamline this deployment and management process. One of the remaining MLOps challenges is the need for explanations. These explanations are essential for understanding how ML models reason, which is key to trust and acceptance. Better identification of errors and improved model accuracy are only two resulting advantages. An often neglected fact is that deployed models are bypassed in practice when accuracy and especially explainability do not meet user expectations. We developed a novel MLOps software architecture to address the challenge of integrating explanations and feedback capabilities into the ML development and deployment processes. In the project EXPLAIN, our architecture is implemented in a series of industrial use cases. The proposed MLOps software architecture has several advantages. It provides an efficient way to manage ML models in production environments. Further, it allows for integrating explanations into the development and deployment processes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
393,922
2501.04286
Mapping the Edge of Chaos: Fractal-Like Boundaries in The Trainability of Decoder-Only Transformer Models
In the realm of fractal geometry, intricate structures emerge from simple iterative processes that partition parameter spaces into regions of stability and instability. Likewise, training large language models involves iteratively applying update functions, such as Adam, where even slight hyperparameter adjustments can shift the training process from convergence to divergence. Recent evidence from miniature neural networks suggests that the boundary separating these outcomes displays fractal characteristics. Building on these insights, this study extends them to medium-sized, decoder-only transformer architectures by employing a more consistent convergence measure and examining the learning rate hyperparameter landscape for attention and fully connected layers. The results show that the trainability frontier is not a simple threshold; rather, it forms a self-similar yet seemingly random structure at multiple scales, with statistically consistent and repeating patterns. Within this landscape, a region of stable convergence is surrounded by a complex chaotic border, illustrating the sensitive nature of the underlying training dynamics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
523,165
2201.11987
Computer-aided Recognition and Assessment of a Porous Bioelastomer on Ultrasound Images for Regenerative Medicine Applications
Biodegradable elastic scaffolds have attracted more and more attention in the field of soft tissue repair and tissue engineering. These scaffolds made of porous bioelastomers support tissue ingrowth along with their own degradation. It is necessary to develop a computer-aided analyzing method based on ultrasound images to identify the degradation performance of the scaffold, not only to obviate the need to do destructive testing, but also to monitor the scaffold's degradation and tissue ingrowth over time. It is difficult using a single traditional image processing algorithm to extract continuous and accurate contour of a porous bioelastomer. This paper proposes a joint algorithm for the bioelastomer's contour detection and a texture feature extraction method for monitoring the degradation behavior of the bioelastomer. Mean-shift clustering method is used to obtain the bioelastomer's and native tissue's clustering feature information. Then the OTSU image binarization method automatically selects the optimal threshold value to convert the grayscale ultrasound image into a binary image. The Canny edge detector is used to extract the complete bioelastomer's contour. The first-order and second-order statistical features of texture are extracted. The proposed joint algorithm not only achieves the ideal extraction of the bioelastomer's contours in ultrasound images, but also gives valuable feedback of the degradation behavior of the bioelastomer at the implant site based on the changes of texture characteristics and contour area. The preliminary results of this study suggest that the proposed computer-aided image processing techniques have values and potentials in the non-invasive analysis of tissue scaffolds in vivo based on ultrasound images and may help tissue engineers evaluate the tissue scaffold's degradation and cellular ingrowth progress and improve the scaffold designs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
277,483
1807.05185
Model Reconstruction from Model Explanations
We show through theory and experiment that gradient-based explanations of a model quickly reveal the model itself. Our results speak to a tension between the desire to keep a proprietary model secret and the ability to offer model explanations. On the theoretical side, we give an algorithm that provably learns a two-layer ReLU network in a setting where the algorithm may query the gradient of the model with respect to chosen inputs. The number of queries is independent of the dimension and nearly optimal in its dependence on the model size. Of interest not only from a learning-theoretic perspective, this result highlights the power of gradients rather than labels as a learning primitive. Complementing our theory, we give effective heuristics for reconstructing models from gradient explanations that are orders of magnitude more query-efficient than reconstruction attacks relying on prediction interfaces.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
102,877
1508.03599
Efficient Redundancy Techniques for Latency Reduction in Cloud Systems
In cloud computing systems, assigning a task to multiple servers and waiting for the earliest copy to finish is an effective method to combat the variability in response time of individual servers, and reduce latency. But adding redundancy may result in higher cost of computing resources, as well as an increase in queueing delay due to higher traffic load. This work helps understand when and how redundancy gives a cost-efficient reduction in latency. For a general task service time distribution, we compare different redundancy strategies in terms of the number of redundant tasks, and time when they are issued and canceled. We get the insight that the log-concavity of the task service time creates a dichotomy of when adding redundancy helps. If the service time distribution is log-convex (i.e. log of the tail probability is convex) then adding maximum redundancy reduces both latency and cost. And if it is log-concave (i.e. log of the tail probability is concave), then less redundancy, and early cancellation of redundant tasks is more effective. Using these insights, we design a general redundancy strategy that achieves a good latency-cost trade-off for an arbitrary service time distribution. This work also generalizes and extends some results in the analysis of fork-join queues.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
46,015
2402.18884
Supervised Contrastive Representation Learning: Landscape Analysis with Unconstrained Features
Recent findings reveal that over-parameterized deep neural networks, trained beyond zero training-error, exhibit a distinctive structural pattern at the final layer, termed as Neural-collapse (NC). These results indicate that the final hidden-layer outputs in such networks display minimal within-class variations over the training set. While existing research extensively investigates this phenomenon under cross-entropy loss, there are fewer studies focusing on its contrastive counterpart, supervised contrastive (SC) loss. Through the lens of NC, this paper employs an analytical approach to study the solutions derived from optimizing the SC loss. We adopt the unconstrained features model (UFM) as a representative proxy for unveiling NC-related phenomena in sufficiently over-parameterized deep networks. We show that, despite the non-convexity of SC loss minimization, all local minima are global minima. Furthermore, the minimizer is unique (up to a rotation). We prove our results by formalizing a tight convex relaxation of the UFM. Finally, through this convex formulation, we delve deeper into characterizing the properties of global solutions under label-imbalanced training data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
433,603
2406.13337
Medical Spoken Named Entity Recognition
Spoken Named Entity Recognition (NER) aims to extract named entities from speech and categorize them into types like person, location, organization, etc. In this work, we present VietMed-NER - the first spoken NER dataset in the medical domain. To the best of our knowledge, our real-world dataset is the largest spoken NER dataset in the world in terms of the number of entity types, featuring 18 distinct types. Secondly, we present baseline results using various state-of-the-art pre-trained models: encoder-only and sequence-to-sequence. We found that the pre-trained multilingual model XLM-R outperformed all monolingual models on both reference text and ASR output. In general, encoders perform better than sequence-to-sequence models for the NER task. By simply translating the transcripts, the approach is applicable not just to Vietnamese but to other languages as well. All code, data and models are made publicly available here: https://github.com/leduckhai/MultiMed
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
465,806
2211.15428
Explanation on Pretraining Bias of Finetuned Vision Transformer
As fine-tuning of pretrained models becomes increasingly common, understanding the bias of pretrained models is essential. However, there are few tools to analyse the transformer architecture, and the interpretation of attention maps remains challenging. To tackle interpretability, we propose the Input-Attribution and Attention Score Vector (IAV), which measures the similarity between attention maps and input attributions and shows the general trend of interpretable attention patterns. We empirically explain the pretraining bias of supervised and unsupervised pretrained ViT models, and show that each head in ViT has a specific range of agreement on the classification decision. We show that generalization, robustness and entropy of attention maps are not properties of pretraining types. On the other hand, the IAV trend can separate the pretraining types.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
333,260
1805.02932
Cooperative Control of Multiple Agents with Unknown High-frequency Gain Signs under Unbalanced and Switching Topologies
Existing results on cooperative control of multi-agent systems with unknown control directions require that the underlying topology is either fixed with a strongly connected graph or switching between different strongly connected graphs. Furthermore, in most cases the graph is assumed to be balanced. This paper proposes a new class of nonlinear PI based algorithms to relax these requirements and allow for unbalanced and switching topologies having a jointly strongly connected basis. This is made possible for single-integrator (SI) and double-integrator (DI) agents with non-identical unknown control directions by a suitable selection of the distributed nonlinear PI functions. Moreover, as a special case, the proposed algorithms are applied to strongly connected and fixed graphs. Finally, simulation examples are given to show the validity of our theoretical results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
96,952
2003.02638
Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms
The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential. One major challenge in imitation learning is the correspondence problem: how to establish corresponding states and actions between expert and learner, when the embodiments of the agents are different (morphology, dynamics, degrees of freedom, etc.). Many existing approaches in imitation learning circumvent the correspondence problem, for example, kinesthetic teaching or teleoperation, which are performed on the robot. In this work we explicitly address the correspondence problem by introducing a distance measure between dissimilar embodiments. This measure is then used as a loss function for static pose imitation and as a feedback signal within a model-free deep reinforcement learning framework for dynamic movement imitation between two anthropomorphic robotic arms in simulation. We find that the measure is well suited for describing the similarity between embodiments and for learning imitation policies by distance minimization.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
166,991
2402.16517
Discovering Artificial Viscosity Models for Discontinuous Galerkin Approximation of Conservation Laws using Physics-Informed Machine Learning
Finite element-based high-order solvers of conservation laws offer high accuracy but face challenges near discontinuities due to the Gibbs phenomenon. Artificial viscosity is a popular and effective solution to this problem based on physical insight. In this work, we present a physics-informed machine learning algorithm to automate the discovery of artificial viscosity models in a non-supervised paradigm. The algorithm is inspired by reinforcement learning and trains a neural network acting cell-by-cell (the viscosity model) by minimizing a loss defined as the difference with respect to a reference solution, thanks to automatic differentiation. This enables a dataset-free training procedure. We prove that the algorithm is effective by integrating it into a state-of-the-art Runge-Kutta discontinuous Galerkin solver. We showcase several numerical tests on scalar and vectorial problems, such as Burgers' and Euler's equations in one and two dimensions. Results demonstrate that the proposed approach trains a model that is able to outperform classical viscosity models. Moreover, we show that the learnt artificial viscosity model is able to generalize across different problems and parameters.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
432,601
2303.01150
Multi-UAV Adaptive Path Planning Using Deep Reinforcement Learning
Efficient aerial data collection is important in many remote sensing applications. In large-scale monitoring scenarios, deploying a team of unmanned aerial vehicles (UAVs) offers improved spatial coverage and robustness against individual failures. However, a key challenge is cooperative path planning for the UAVs to efficiently achieve a joint mission goal. We propose a novel multi-agent informative path planning approach based on deep reinforcement learning for adaptive terrain monitoring scenarios using UAV teams. We introduce new network feature representations to effectively learn path planning in a 3D workspace. By leveraging a counterfactual baseline, our approach explicitly addresses credit assignment to learn cooperative behaviour. Our experimental evaluation shows improved planning performance, i.e. maps regions of interest more quickly, with respect to non-counterfactual variants. Results on synthetic and real-world data show that our approach has superior performance compared to state-of-the-art non-learning-based methods, while being transferable to varying team sizes and communication constraints.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
348,848
1707.06992
Ideological Sublations: Resolution of Dialectic in Population-based Optimization
A population-based optimization algorithm was designed, inspired by two main thinking modes in philosophy, both based on the dialectic concept and the thesis-antithesis paradigm. They impose two different kinds of dialectics. Idealistic and materialistic antitheses are formulated as optimization models. Based on the models, the population is coordinated for dialectical interactions. In the population-based context, the formulated optimization models are reduced to a simple detection problem for each thinker (particle). According to the assigned thinking mode of each thinker and her/his measurements of the corresponding dialectic with other candidate particles, they deterministically decide to interact with a thinker in maximum dialectic with their theses. The position of a thinker at maximum dialectic is known as an available antithesis among the existing solutions. The dialectical interactions at each ideological community are distinguished by meaningful distributions of step-sizes for each thinking mode. In fact, the thinking modes are regarded as exploration and exploitation elements of the proposed algorithm. The result is a delicate balance without any requirement for adjustment of step-size coefficients. The main parameter of the proposed algorithm is the number of particles assigned to each thinking mode, or equivalently to each kind of motion. An additional integer parameter is defined to boost the stability of the final algorithm in some particular problems. The proposed algorithm is evaluated by a testbed of 12 single-objective continuous benchmark functions. Moreover, its performance and speed were highlighted in sparse reconstruction and antenna selection problems, in the context of compressed sensing and massive MIMO, respectively. The results indicate fast and efficient performance in comparison with well-known evolutionary algorithms and dedicated state-of-the-art algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
true
77,527
2305.13168
LLMs for Knowledge Graph Construction and Reasoning: Recent Capabilities and Future Opportunities
This paper presents an exhaustive quantitative and qualitative evaluation of Large Language Models (LLMs) for Knowledge Graph (KG) construction and reasoning. We engage in experiments across eight diverse datasets, focusing on four representative tasks encompassing entity and relation extraction, event extraction, link prediction, and question-answering, thereby thoroughly exploring LLMs' performance in the domain of construction and inference. Empirically, our findings suggest that LLMs, represented by GPT-4, are more suited as inference assistants rather than few-shot information extractors. Specifically, while GPT-4 exhibits good performance in tasks related to KG construction, it excels further in reasoning tasks, surpassing fine-tuned models in certain cases. Moreover, our investigation extends to the potential generalization ability of LLMs for information extraction, leading to the proposition of a Virtual Knowledge Extraction task and the development of the corresponding VINE dataset. Based on these empirical findings, we further propose AutoKG, a multi-agent-based approach employing LLMs and external sources for KG construction and reasoning. We anticipate that this research can provide invaluable insights for future undertakings in the field of knowledge graphs. The code and datasets are in https://github.com/zjunlp/AutoKG.
false
false
false
false
true
true
true
false
true
false
false
false
false
false
false
false
true
false
366,378
2303.12660
Structural Measures of Resilience for Supply Chains
We investigate the structural factors that drive cascading failures in production networks, focusing on quantifying these risks with a topological resilience metric corresponding to the largest exogenous systemic shock that the production network can withstand, such that almost all of the network survives with high probability. We model failures using a node percolation process where systemic shocks cause suppliers to fail, leading to further breakdowns. We classify networks into two categories -- resilient and fragile -- based on their ability to handle shocks as the network grows large, and give bounds on their resilience. We show that the main factors affecting resilience are the number of raw products (primary sector), the number of final goods (final sector), and the source and supply dependencies. Further, we give methods to lower bound resilience based on bounding the cascade size with a linear program that can be efficiently calculated. We establish connections between our model, the independent cascade model, the Risk Exposure Index, and the Eisenberg-Noe contagion model. We give an almost linear-time deterministic algorithm to approximate the cascade size, which matches known lower bounds up to logarithmic factors. We then design intervention algorithms and show that under reasonable assumptions, targeting nodes based on Katz centrality in the edge-reversed network is optimal. Finally, we account for network heterogeneities and validate our findings with real-world data.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
353,330
1901.10258
RED-Attack: Resource Efficient Decision based Attack for Machine Learning
Due to data dependency and model leakage properties, Deep Neural Networks (DNNs) exhibit several security vulnerabilities. Several security attacks exploited them but most of them require the output probability vector. These attacks can be mitigated by concealing the output probability vector. To address this limitation, decision-based attacks have been proposed which can estimate the model but they require several thousand queries to generate a single untargeted attack image. However, in real-time attacks, resources and attack time are very crucial parameters. Therefore, in resource-constrained systems, e.g., autonomous vehicles where an untargeted attack can have a catastrophic effect, these attacks may not work efficiently. To address this limitation, we propose a resource efficient decision-based methodology which generates the imperceptible attack, i.e., the RED-Attack, for a given black-box model. The proposed methodology follows two main steps to generate the imperceptible attack, i.e., classification boundary estimation and adversarial noise optimization. Firstly, we propose a half-interval search-based algorithm for estimating a sample on the classification boundary using a target image and a randomly selected image from another class. Secondly, we propose an optimization algorithm which first introduces a small perturbation in some randomly selected pixels of the estimated sample. Then, to ensure imperceptibility, it optimizes the distance between the perturbed and target samples. For illustration, we evaluate it for CIFAR-10 and German Traffic Sign Recognition (GTSR) using state-of-the-art networks.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
119,975
2306.05150
Bayesian Optimization of Expensive Nested Grey-Box Functions
We consider the problem of optimizing a grey-box objective function, i.e., nested function composed of both black-box and white-box functions. A general formulation for such grey-box problems is given, which covers the existing grey-box optimization formulations as special cases. We then design an optimism-driven algorithm to solve it. Under certain regularity assumptions, our algorithm achieves similar regret bound as that for the standard black-box Bayesian optimization algorithm, up to a constant multiplicative term depending on the Lipschitz constants of the functions considered. We further extend our method to the constrained case and discuss special cases. For the commonly used kernel functions, the regret bounds allow us to derive a convergence rate to the optimal solution. Experimental results show that our grey-box optimization method empirically improves the speed of finding the global optimal solution significantly, as compared to the standard black-box optimization algorithm.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
372,078
2305.19525
Discovering New Interpretable Conservation Laws as Sparse Invariants
Discovering conservation laws for a given dynamical system is important but challenging. In a theorist setup (differential equations and basis functions are both known), we propose the Sparse Invariant Detector (SID), an algorithm that auto-discovers conservation laws from differential equations. Its algorithmic simplicity allows robustness and interpretability of the discovered conserved quantities. We show that SID is able to rediscover known and even discover new conservation laws in a variety of systems. For two examples in fluid mechanics and atmospheric chemistry, SID discovers 14 and 3 conserved quantities, respectively, where only 12 and 2 were previously known to domain experts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
369,566
2206.10942
List-Decodable Covariance Estimation
We give the first polynomial time algorithm for \emph{list-decodable covariance estimation}. For any $\alpha > 0$, our algorithm takes as input a sample $Y \subseteq \mathbb{R}^d$ of size $n\geq d^{\mathsf{poly}(1/\alpha)}$ obtained by adversarially corrupting $(1-\alpha)n$ points in an i.i.d. sample $X$ of size $n$ from the Gaussian distribution with unknown mean $\mu_*$ and covariance $\Sigma_*$. In $n^{\mathsf{poly}(1/\alpha)}$ time, it outputs a constant-size list of $k = k(\alpha)= (1/\alpha)^{\mathsf{poly}(1/\alpha)}$ candidate parameters that, with high probability, contains a $(\hat{\mu},\hat{\Sigma})$ such that the total variation distance $TV(\mathcal{N}(\mu_*,\Sigma_*),\mathcal{N}(\hat{\mu},\hat{\Sigma}))<1-O_{\alpha}(1)$. This is the statistically strongest notion of distance and implies multiplicative spectral and relative Frobenius distance approximation for parameters with dimension independent error. Our algorithm works more generally for $(1-\alpha)$-corruptions of any distribution $D$ that possesses low-degree sum-of-squares certificates of two natural analytic properties: 1) anti-concentration of one-dimensional marginals and 2) hypercontractivity of degree 2 polynomials. Prior to our work, the only known results for estimating covariance in the list-decodable setting were for the special cases of list-decodable linear regression and subspace recovery due to Karmarkar, Klivans, and Kothari (2019), Raghavendra and Yau (2019 and 2020) and Bakshi and Kothari (2020). These results need superpolynomial time for obtaining any subconstant error in the underlying dimension. Our result implies the first polynomial-time \emph{exact} algorithm for list-decodable linear regression and subspace recovery that allows, in particular, to obtain $2^{-\mathsf{poly}(d)}$ error in polynomial-time. Our result also implies an improved algorithm for clustering non-spherical mixtures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
304,094
1906.00748
Improving Minimal Gated Unit for Sequential Data
In order to obtain a model which can process sequential data related to machine translation and speech recognition faster and more accurately, we propose adopting Chrono Initializer as the initialization method of the Minimal Gated Unit. We evaluated the method with two tasks: the adding task and the copy task. The experimental results confirmed the effectiveness of the proposed method.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
133,501
2205.14761
Modeling Disagreement in Automatic Data Labelling for Semi-Supervised Learning in Clinical Natural Language Processing
Computational models providing accurate estimates of their uncertainty are crucial for risk management associated with decision making in healthcare contexts. This is especially true since many state-of-the-art systems are trained using the data which has been labelled automatically (self-supervised mode) and tend to overfit. In this work, we investigate the quality of uncertainty estimates from a range of current state-of-the-art predictive models applied to the problem of observation detection in radiology reports. This problem remains understudied for Natural Language Processing in the healthcare domain. We demonstrate that Gaussian Processes (GPs) provide superior performance in quantifying the risks of 3 uncertainty labels based on the negative log predictive probability (NLPP) evaluation metric and mean maximum predicted confidence levels (MMPCL), whilst retaining strong predictive performance.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
299,479
1903.03642
Improved Robustness and Safety for Autonomous Vehicle Control with Adversarial Reinforcement Learning
To improve efficiency and reduce failures in autonomous vehicles, research has focused on developing robust and safe learning methods that take into account disturbances in the environment. Existing literature in robust reinforcement learning poses the learning problem as a two player game between the autonomous system and disturbances. This paper examines two different algorithms to solve the game, Robust Adversarial Reinforcement Learning and Neural Fictitious Self Play, and compares performance on an autonomous driving scenario. We extend the game formulation to a semi-competitive setting and demonstrate that the resulting adversary better captures meaningful disturbances that lead to better overall performance. The resulting robust policy exhibits improved driving efficiency while effectively reducing collision rates compared to baseline control policies produced by traditional reinforcement learning methods.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
123,777
2202.08176
Bias and unfairness in machine learning models: a systematic literature review
One of the difficulties of artificial intelligence is to ensure that model decisions are fair and free of bias. In research, datasets, metrics, techniques, and tools are applied to detect and mitigate algorithmic unfairness and bias. This study aims to examine existing knowledge on bias and unfairness in Machine Learning models, identifying mitigation methods, fairness metrics, and supporting tools. A Systematic Literature Review found 40 eligible articles published between 2017 and 2022 in the Scopus, IEEE Xplore, Web of Science, and Google Scholar knowledge bases. The results show numerous bias and unfairness detection and mitigation approaches for ML technologies, with clearly defined metrics in the literature, and varied metrics can be highlighted. We recommend further research to define the techniques and metrics that should be employed in each case to standardize and ensure the impartiality of the machine learning model, thus, allowing the most appropriate metric to detect bias and unfairness in a given context.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
280,790
2309.13081
Transitioning To The Digital Generation Case Studies (Previous Digital Point Studies In Japan Cases:1993-2023)
This paper was discussed at The 8th International Workshop on Application of Big Data for Computational Social Science, October 26-29, 2023, Venice, Italy. To achieve the realization of the Global and Innovation Gateway for All (GIGA) initiative (2019), proposed in December 2019 by the Primary and Secondary Education Planning Division of the Elementary and Secondary Education Bureau of the Ministry of Education, Culture, Sports, Science and Technology, a movement has emerged to utilize information and communication technology (ICT) in the field of education. The history of ICT education in Japan dates back to the 100 Schools Project (1994), which aimed to provide network access environments, and the New 100 Schools Project (1997), which marked the beginning of full-scale ICT education in Japan. In this paper, we discuss the usage dynamics of smartphone-based learning applications among young people (analyzing data from January to September 2020) and their current status. Further, the results are summarized and future research topics and issues are discussed. The results show that there are situations in which ICT learning environments can be effectively utilized and others in which they cannot, depending on the differences between digital students and analog students who utilize ICT in their studies; this indicates that we are currently in a transition to a generation of digital natives. ICT education has both advantages and disadvantages, and it is expected that it will be used in combination with conventional educational methods while assessing the characteristics of ICT education in the future. Of course, there are many challenges. We plan to discuss how to appeal in this regard at the Workshop.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
394,051
2306.09177
Dis-AE: Multi-domain & Multi-task Generalisation on Real-World Clinical Data
Clinical data is often affected by clinically irrelevant factors such as discrepancies between measurement devices or differing processing methods between sites. In the field of machine learning (ML), these factors are known as domains and the distribution differences they cause in the data are known as domain shifts. ML models trained using data from one domain often perform poorly when applied to data from another domain, potentially leading to wrong predictions. As such, developing machine learning models that can generalise well across multiple domains is a challenging yet essential task in the successful application of ML in clinical practice. In this paper, we propose a novel disentangled autoencoder (Dis-AE) neural network architecture that can learn domain-invariant data representations for multi-label classification of medical measurements even when the data is influenced by multiple interacting domain shifts at once. The model utilises adversarial training to produce data representations from which the domain can no longer be determined. We evaluate the model's domain generalisation capabilities on synthetic datasets and full blood count (FBC) data from blood donors as well as primary and secondary care patients, showing that Dis-AE improves model generalisation on multiple domains simultaneously while preserving clinically relevant information.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
373,704
2306.12133
Fundamental Performance Bounds for Carrier Phase Positioning in Cellular Networks
The carrier phase of cellular signals can be utilized for highly accurate positioning, with the potential for orders-of-magnitude performance improvements compared to standard time-difference-of-arrival positioning. Due to the integer ambiguities, standard performance evaluation tools such as the Cram\'er-Rao bound (CRB) are overly optimistic. In this paper, a new performance bound, called the mixed-integer CRB (MICRB) is introduced that explicitly accounts for this integer ambiguity. While computationally more complex than the standard CRB, the MICRB can accurately predict positioning performance, as verified by numerical simulations, and hence it serves as a useful guide to choose the system parameters that facilitate carrier phase positioning.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
374,842
cs/0011007
Tree-gram Parsing: Lexical Dependencies and Structural Relations
This paper explores the kinds of probabilistic relations that are important in syntactic disambiguation. It proposes that two widely used kinds of relations, lexical dependencies and structural relations, have complementary disambiguation capabilities. It presents a new model based on structural relations, the Tree-gram model, and reports experiments showing that structural relations should benefit from enrichment by lexical dependencies.
true
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
537,247
1306.6671
Extended Subspace Error Localization for Rate-Adaptive Distributed Source Coding
A subspace-based approach for rate-adaptive distributed source coding (DSC) based on discrete Fourier transform (DFT) codes is developed. Punctured DFT codes can be used to implement rate-adaptive source coding; however, they perform poorly after even moderate puncturing, since the performance of the subspace error localization degrades severely. The proposed subspace-based error localization extends and improves the existing one, based on additional syndrome, and is naturally suitable for a rate-adaptive distributed source coding architecture.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
25,494
2211.15406
Automated Detection of Dolphin Whistles with Convolutional Networks and Transfer Learning
Effective conservation of maritime environments and wildlife management of endangered species require the implementation of efficient, accurate and scalable solutions for environmental monitoring. Ecoacoustics offers the advantages of non-invasive, long-duration sampling of environmental sounds and has the potential to become the reference tool for biodiversity surveying. However, the analysis and interpretation of acoustic data is a time-consuming process that often requires a great amount of human supervision. This issue might be tackled by exploiting modern techniques for automatic audio signal analysis, which have recently achieved impressive performance thanks to the advances in deep learning research. In this paper we show that convolutional neural networks can indeed significantly outperform traditional automatic methods in a challenging detection task: identification of dolphin whistles from underwater audio recordings. The proposed system can detect signals even in the presence of ambient noise, at the same time consistently reducing the likelihood of producing false positives and false negatives. Our results further support the adoption of artificial intelligence technology to improve the automatic monitoring of marine ecosystems.
false
false
true
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
333,245
2109.00141
Storing Multi-model Data in RDBMSs based on Reinforcement Learning
How to manage various data in a unified way is a significant research topic in the field of databases. To address this problem, researchers have proposed multi-model databases to support multiple data models in a uniform platform with a single unified query language. However, since relational databases are predominant in the current market, it is expensive to replace them with others. Besides, because the theories and technologies of RDBMSs have been enhanced over decades, it is hard to develop, within a few years, a multi-model database that can compare with existing RDBMSs in handling security, query optimization, transaction management, etc. In this paper, we reconsider employing relational databases to store and query multi-model data. Unfortunately, the mismatch between the complexity of multi-model data structure and the simplicity of flat relational tables makes this difficult. Against this challenge, we utilize the reinforcement learning (RL) method to learn a relational schema by interacting with an RDBMS. Instead of using the classic Q-learning algorithm, we propose a variant Q-learning algorithm, called \textit{Double Q-tables}, to reduce the dimension of the original Q-table and improve learning efficiency. Experimental results show that our approach could learn a relational schema outperforming the existing multi-model storage schema in terms of query time and space consumption.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
253,017
2203.02656
Deep Partial Multiplex Network Embedding
Network embedding is an effective technique to learn the low-dimensional representations of nodes in networks. Real-world networks are usually multiplex, with multi-view representations arising from different relations. Recently, there has been increasing interest in network embedding on multiplex data. However, most existing multiplex approaches assume that the data is complete in all views. But in real applications, it is often the case that each view suffers from the missing of some data and therefore results in partial multiplex data. In this paper, we present a novel Deep Partial Multiplex Network Embedding approach to deal with incomplete data. In particular, the network embeddings are learned by simultaneously minimizing the deep reconstruction loss with the autoencoder neural network, enforcing the data consistency across views via common latent subspace learning, and preserving the data topological structure within the same network through graph Laplacian. We further prove the orthogonal invariant property of the learned embeddings and connect our approach with the binary embedding techniques. Experiments on four multiplex benchmarks demonstrate the superior performance of the proposed approach over several state-of-the-art methods on node classification, link prediction and clustering tasks.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
283,817
2204.03919
Network Shuffling: Privacy Amplification via Random Walks
Recently, it has been shown that shuffling can amplify the central differential privacy guarantees of data randomized with local differential privacy. Within this setup, a centralized, trusted shuffler is responsible for shuffling by keeping the identities of data anonymous, which subsequently leads to stronger privacy guarantees for systems. However, introducing a centralized entity to the originally local privacy model loses some appeals of not having any centralized entity as in local differential privacy. Moreover, implementing a shuffler in a reliable way is not trivial due to known security issues and/or requirements of advanced hardware or secure computation technology. Motivated by these practical considerations, we rethink the shuffle model to relax the assumption of requiring a centralized, trusted shuffler. We introduce network shuffling, a decentralized mechanism where users exchange data in a random-walk fashion on a network/graph, as an alternative for achieving privacy amplification via anonymity. We analyze the threat model under such a setting, and propose distributed protocols of network shuffling that are straightforward to implement in practice. Furthermore, we show that the privacy amplification rate is similar to other privacy amplification techniques such as uniform shuffling. To the best of our knowledge, among the recently studied intermediate trust models that leverage privacy amplification techniques, our work is the first that does not rely on any centralized entity to achieve privacy amplification.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
290,477
2105.13975
Relation Matters in Sampling: A Scalable Multi-Relational Graph Neural Network for Drug-Drug Interaction Prediction
Sampling is an established technique to scale graph neural networks to large graphs. Current approaches however assume the graphs to be homogeneous in terms of relations and ignore relation types, critically important in biomedical graphs. Multi-relational graphs contain various types of relations that usually come with variable frequency and have different importance for the problem at hand. We propose an approach to modeling the importance of relation types for neighborhood sampling in graph neural networks and show that we can learn the right balance: relation-type probabilities that reflect both frequency and importance. Our experiments on drug-drug interaction prediction show that state-of-the-art graph neural networks profit from relation-dependent sampling in terms of both accuracy and efficiency.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
237,474
2310.05473
Sentence-level Prompts Benefit Composed Image Retrieval
Composed image retrieval (CIR) is the task of retrieving specific images by using a query that involves both a reference image and a relative caption. Most existing CIR models adopt the late-fusion strategy to combine visual and language features. Besides, several approaches have also been suggested to generate a pseudo-word token from the reference image, which is further integrated into the relative caption for CIR. However, these pseudo-word-based prompting methods have limitations when the target image encompasses complex changes relative to the reference image, e.g., object removal and attribute modification. In this work, we demonstrate that learning an appropriate sentence-level prompt for the relative caption (SPRC) is sufficient for achieving effective composed image retrieval. Instead of relying on pseudo-word-based prompts, we propose to leverage pretrained V-L models, e.g., BLIP-2, to generate sentence-level prompts. By concatenating the learned sentence-level prompt with the relative caption, one can readily use existing text-based image retrieval models to enhance CIR performance. Furthermore, we introduce both image-text contrastive loss and text prompt alignment loss to enforce the learning of suitable sentence-level prompts. Experiments show that our proposed method performs favorably against the state-of-the-art CIR methods on the Fashion-IQ and CIRR datasets. The source code and pretrained model are publicly available at https://github.com/chunmeifeng/SPRC
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
398,167
1709.04770
The Arbitrarily Varying Broadcast Channel with Degraded Message Sets with Causal Side Information at the Encoder
In this work, we study the arbitrarily varying broadcast channel (AVBC), when state information is available at the transmitter in a causal manner. We establish inner and outer bounds on both the random code capacity region and the deterministic code capacity region with degraded message sets. The capacity region is then determined for a class of channels satisfying a condition on the mutual informations between the strategy variables and the channel outputs. As an example, we consider the arbitrarily varying binary symmetric broadcast channel with correlated noises. We show cases where the condition holds, hence the capacity region is determined, and other cases where there is a gap between the bounds.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
80,730
2307.00660
Minimum Levels of Interpretability for Artificial Moral Agents
As artificial intelligence (AI) models continue to scale up, they are becoming more capable and integrated into various forms of decision-making systems. For models involved in moral decision-making, also known as artificial moral agents (AMA), interpretability provides a way to trust and understand the agent's internal reasoning mechanisms for effective use and error correction. In this paper, we provide an overview of this rapidly-evolving sub-field of AI interpretability, introduce the concept of the Minimum Level of Interpretability (MLI) and recommend an MLI for various types of agents, to aid their safe deployment in real-world settings.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
377,082
2302.09572
Rethinking Data-Free Quantization as a Zero-Sum Game
Data-free quantization (DFQ) recovers the performance of a quantized network (Q) without accessing real data, instead generating fake samples via a generator (G) that learns from a full-precision network (P). However, this sample generation process is entirely independent of Q and, in particular, fails to consider the adaptability of the generated samples, i.e., whether they are beneficial or adversarial to the learning process of Q, resulting in non-negligible performance loss. Building on this, several crucial questions -- how to measure and exploit the sample adaptability to Q under varied bit-width scenarios? how to generate samples with desirable adaptability to benefit the quantized network? -- impel us to revisit DFQ. In this paper, we answer the above questions from a game-theory perspective by specializing DFQ as a zero-sum game between two players -- a generator and a quantized network -- and further propose an Adaptability-aware Sample Generation (AdaSG) method. Technically, AdaSG reformulates DFQ as a dynamic maximization-vs-minimization game process anchored on sample adaptability. The maximization process aims to generate samples with desirable adaptability; this adaptability is then reduced by the minimization process after calibrating Q for performance recovery. A Balance Gap is defined to guide the stationarity of the game process so as to maximally benefit Q. Theoretical analysis and empirical studies verify the superiority of AdaSG over state-of-the-art methods. Our code is available at https://github.com/hfutqian/AdaSG.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
346,479
2209.11615
Robust Domain Adaptation for Machine Reading Comprehension
Most domain adaptation methods for machine reading comprehension (MRC) use a pre-trained question-answer (QA) construction model to generate pseudo QA pairs for MRC transfer. Such a process will inevitably introduce mismatched pairs (i.e., noisy correspondence) due to i) the unavailability of QA pairs in target documents, and ii) the domain shift when applying the QA construction model to the target domain. Undoubtedly, noisy correspondence will degrade the performance of MRC, which, however, is neglected by existing works. To solve this untouched problem, we propose to construct QA pairs by additionally using the dialogue related to the documents, together with a new domain adaptation method for MRC. Specifically, we propose the Robust Domain Adaptation for Machine Reading Comprehension (RMRC) method, which consists of an answer extractor (AE), a question selector (QS), and an MRC model. RMRC filters out irrelevant answers by estimating their correlation to the document via the AE, and extracts questions by fusing candidate questions from multiple rounds of dialogue chats via the QS. With the extracted QA pairs, the MRC model is fine-tuned and provides feedback to optimize the QS through a novel reinforced self-training method. Thanks to the optimization of the QS, our method greatly alleviates the noisy correspondence problem caused by the domain shift. To the best of our knowledge, this could be the first study to reveal the influence of noisy correspondence in domain adaptation for MRC models and to show a feasible way to achieve robustness to mismatched pairs. Extensive experiments on three datasets demonstrate the effectiveness of our method.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
319,248
2110.09574
Multilingual Domain Adaptation for NMT: Decoupling Language and Domain Information with Adapters
Adapter layers are lightweight, learnable units inserted between transformer layers. Recent work explores using such layers for neural machine translation (NMT), to adapt pre-trained models to new domains or language pairs, training only a small set of parameters for each new setting (language pair or domain). In this work we study the compositionality of language and domain adapters in the context of Machine Translation. We aim to study: 1) parameter-efficient adaptation to multiple domains and languages simultaneously (full-resource scenario) and 2) cross-lingual transfer in domains where parallel data is unavailable for certain language pairs (partial-resource scenario). We find that in the partial-resource scenario a naive combination of domain-specific and language-specific adapters often results in `catastrophic forgetting' of the missing languages. We study other ways to combine the adapters to alleviate this issue and maximize cross-lingual transfer. With our best adapter combinations, we obtain improvements of 3-4 BLEU on average for source languages that do not have in-domain data. For target languages without in-domain data, we achieve a similar improvement by combining adapters with back-translation. Supplementary material is available at https://tinyurl.com/r66stbxj
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
261,849
2102.08058
Capacity-Achieving Private Information Retrieval Schemes from Uncoded Storage Constrained Servers with Low Sub-packetization
This paper investigates reducing sub-packetization of capacity-achieving schemes for uncoded Storage Constrained Private Information Retrieval (SC-PIR) systems. In the SC-PIR system, a user aims to retrieve one out of $K$ files from $N$ servers while revealing nothing about its identity to any individual server, in which the $K$ files are stored at the $N$ servers in an uncoded form and each server can store up to $\mu K$ equivalent files, where $\mu$ is the normalized storage capacity of each server. We first prove that there exists a capacity-achieving SC-PIR scheme for a given storage design if and only if all the packets are stored exactly at $M\triangleq \mu N$ servers for $\mu$ such that $M=\mu N\in\{2,3,\ldots,N\}$. Then, the optimal sub-packetization for capacity-achieving linear SC-PIR schemes is characterized as the solution to an optimization problem, which is typically hard to solve because of involving indicator functions. Moreover, a new notion of array called Storage Design Array (SDA) is introduced for the SC-PIR system. With any given SDA, an associated capacity-achieving SC-PIR scheme is constructed. Next, the SC-PIR schemes that have equal-size packets are investigated. Furthermore, the optimal equal-size sub-packetization among all capacity-achieving linear SC-PIR schemes characterized by Woolsey et al. is proved to be $\frac{N(M-1)}{\gcd(N,M)}$. Finally, by allowing unequal size of packets, a greedy SDA construction is proposed, where the sub-packetization of the associated SC-PIR scheme is upper bounded by $\frac{N(M-1)}{\gcd(N,M)}$. Among all capacity-achieving linear SC-PIR schemes, the sub-packetization is optimal when $\min\{M,N-M\}|N$ or $M=N$, and within a multiplicative gap $\frac{\min\{M,N-M\}}{\gcd(N,M)}$ of the optimal one otherwise. In particular, for the case $N=d\cdot M\pm1$ where $d\geq 2$, another SDA is constructed to obtain lower sub-packetization.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
220,326
2406.12605
Attack and Defense of Deep Learning Models in the Field of Web Attack Detection
The challenge of WAD (web attack detection) is growing as hackers continuously refine their methods to evade traditional detection. Deep learning models excel at handling complex unknown attacks due to their strong generalization and adaptability. However, they are vulnerable to backdoor attacks, where contextually irrelevant fragments are inserted into requests, compromising model stability. While backdoor attacks are well studied in image recognition, they are largely unexplored in WAD. This paper introduces backdoor attacks in WAD, proposing five attack methods and corresponding defenses. Testing on textCNN, biLSTM, and tinybert models shows an attack success rate of over 87%, which can be reduced through fine-tuning. Future research should focus on backdoor defenses in WAD. All the code and data of this paper can be obtained at https://anonymous.4open.science/r/attackDefenceinDL-7E05
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
465,483