id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2201.05905 | SS-3DCapsNet: Self-supervised 3D Capsule Networks for Medical Segmentation on Less Labeled Data | The capsule network is a recently proposed deep network architecture that has been applied successfully to medical image segmentation tasks. This work extends capsule networks to volumetric medical image segmentation with self-supervised learning. To improve on the weight-initialization problem of previous capsule networks, we leverage self-supervised learning for capsule network pre-training, where our pretext task is optimized by self-reconstruction. Our capsule network, SS-3DCapsNet, has a UNet-based architecture with a 3D capsule encoder and a 3D CNN decoder. Our experiments on multiple datasets, including iSeg-2017, Hippocampus, and Cardiac, demonstrate that our 3D capsule network with self-supervised pre-training considerably outperforms previous capsule networks and 3D-UNets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 275,540 |
1302.2244 | Efficient Data Gathering in Wireless Sensor Networks Based on Matrix Completion and Compressive Sensing | Gathering data in an energy-efficient manner in wireless sensor networks is an important design challenge. In wireless sensor networks, sensor readings exhibit intra-temporal and inter-spatial correlations. Therefore, in this letter, we use low-rank matrix completion theory to explore the inter-spatial correlation and compressive sensing theory to take advantage of the intra-temporal correlation. Our method, dubbed MCCS, can significantly reduce the amount of data that each sensor must send through the network to the sink, thus prolonging the lifetime of the whole network. Experiments using real datasets demonstrate the feasibility and efficacy of our MCCS method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 21,928 |
1603.06352 | Online Learning with Low Rank Experts | We consider the problem of prediction with expert advice when the losses of the experts have low-dimensional structure: they are restricted to an unknown $d$-dimensional subspace. We devise algorithms with regret bounds that are independent of the number of experts and depend only on the rank $d$. For the stochastic model we show a tight bound of $\Theta(\sqrt{dT})$, and extend it to the setting of an approximate $d$-dimensional subspace. For the adversarial model we show an upper bound of $O(d\sqrt{T})$ and a lower bound of $\Omega(\sqrt{dT})$. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 53,484 |
1810.06468 | Towards Intention Prediction for Handheld Robots: a Case of Simulated Block Copying | Within this work, we explore intention inference for user actions in the context of a handheld robot setup. Handheld robots share the shape and properties of handheld tools while being able to process task information and aid manipulation. Here, we propose an intention prediction model to enhance cooperative task solving. Within a block copy task, we collect eye gaze data using a robot-mounted remote eye tracker, which is used to create a profile of visual attention for task-relevant objects in the workspace scene. These profiles are used to make predictions about user actions, i.e., which block will be picked up next and where it will be placed. Our results show that our proposed model can predict user actions well in advance, with an accuracy of 87.94% (500 ms prior) for picking and 93.25% (1500 ms prior) for placing actions. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 110,439 |
2110.15907 | Learning to Be Cautious | A key challenge in the field of reinforcement learning is to develop agents that behave cautiously in novel situations. It is generally impossible to anticipate all situations that an autonomous system may face or what behavior would best avoid bad outcomes. An agent that could learn to be cautious would overcome this challenge by discovering for itself when and how to behave cautiously. In contrast, current approaches typically embed task-specific safety information or explicit cautious behaviors into the system, which is error-prone and imposes extra burdens on practitioners. In this paper, we present both a sequence of tasks where cautious behavior becomes increasingly non-obvious and an algorithm demonstrating that it is possible for a system to \emph{learn} to be cautious. The essential features of our algorithm are that it characterizes reward function uncertainty without task-specific safety information and uses this uncertainty to construct a robust policy. Specifically, we construct robust policies with a $k$-of-$N$ counterfactual regret minimization (CFR) subroutine given a learned reward function uncertainty represented by a neural network ensemble belief. These policies exhibit caution in each of our tasks without any task-specific safety tuning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,038 |
1906.08663 | Modeling AGI Safety Frameworks with Causal Influence Diagrams | Proposals for safe AGI systems are typically made at the level of frameworks, specifying how the components of the proposed system should be trained and interact with each other. In this paper, we model and compare the most promising AGI safety frameworks using causal influence diagrams. The diagrams show the optimization objective and causal assumptions of the framework. The unified representation permits easy comparison of frameworks and their assumptions. We hope that the diagrams will serve as an accessible and visual introduction to the main AGI safety frameworks. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 135,936 |
2404.10155 | The Fault in our Stars: Quality Assessment of Code Generation Benchmarks | Large Language Models (LLMs) are gaining popularity among software engineers. A crucial aspect of developing effective code generation LLMs is to evaluate these models using a robust benchmark. Evaluation benchmarks with quality issues can provide a false sense of performance. In this work, we conduct the first study of its kind on the quality of prompts within benchmarks used to compare the performance of different code generation models. To conduct this study, we analyzed 3,566 prompts from 9 code generation benchmarks to identify quality issues in them. We also investigated whether fixing the identified quality issues in the benchmarks' prompts affects a model's performance. We also studied memorization issues of the evaluation dataset, which can call a benchmark's trustworthiness into question. We found that code generation evaluation benchmarks mainly focused on Python and coding exercises and had very limited contextual dependencies to challenge the model. These datasets and the developers' prompts suffer from quality issues like spelling and grammatical errors, unclear sentences expressing developers' intent, and improper documentation style. Fixing all these issues in the benchmarks can lead to better performance for Python code generation, but no significant improvement was observed for Java code generation. We also found evidence that the GPT-3.5-Turbo and CodeGen-2.5 models may have data contamination issues. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 446,969 |
2410.05628 | Versatile Motion Language Models for Multi-Turn Interactive Agents | Recent advancements in large language models (LLMs) have greatly enhanced their ability to generate natural and contextually relevant text, making AI interactions more human-like. However, generating and understanding interactive human-like motion, where two individuals engage in coordinated movements, remains a challenge due to the complexity of modeling these coordinated interactions. Furthermore, a versatile model is required to handle diverse interactive scenarios, such as chat systems that follow user instructions or adapt to their assigned role while adjusting interaction dynamics. To tackle this problem, we introduce VIM, short for the Versatile Interactive Motion language model, which integrates both language and motion modalities to effectively understand, generate, and control interactive motions in multi-turn conversational contexts. To address the scarcity of multi-turn interactive motion data, we introduce a synthetic dataset, INTER-MT2, in which we utilize pre-trained models to create diverse instructional datasets with interactive motion. Our approach first trains a motion tokenizer that encodes interactive motions into residual discrete tokens. In the pretraining stage, the model learns to align motion and text representations with these discrete tokens. During the instruction fine-tuning stage, VIM adapts to multi-turn conversations using the INTER-MT2 dataset. We evaluate the versatility of our method across motion-related tasks: motion to text, text to motion, reaction generation, motion editing, and reasoning about motion sequences. The results highlight the versatility and effectiveness of the proposed method in handling complex interactive motion synthesis. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 495,830 |
2002.11103 | Who is the Centre of the Movie Universe? Using Python and NetworkX to Analyse the Social Network of Movie Stars | This paper provides the technical details of an article originally published in The Conversation in February 2020. The purpose is to use centrality measures to analyse the social network of movie stars and thereby identify the most "important" actors in the movie business. The analysis is presented in a step-by-step, tutorial-like fashion and makes use of the Python programming language together with the NetworkX library. It reveals that the most central actors in the network are those with lengthy acting careers, such as Christopher Lee, Nassar, Sukumari, Michael Caine, Om Puri, Jackie Chan, and Robert De Niro. We also present similar results for the movie releases of each decade. These indicate that the most central actors since the turn of the millennium include people like Angelina Jolie, Brahmanandam, Samuel L. Jackson, Nassar, and Ben Kingsley. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 165,613 |
2208.05776 | Neural Networks for Scalar Input and Functional Output | The regression of a functional response on a set of scalar predictors can be a challenging task, especially if there is a large number of predictors, or the relationship between those predictors and the response is nonlinear. In this work, we propose a solution to this problem: a feed-forward neural network (NN) designed to predict a functional response using scalar inputs. First, we transform the functional response to a finite-dimensional representation and construct an NN that outputs this representation. Then, we propose to modify the output of an NN via the objective function and introduce different objective functions for network training. The proposed models are suited for both regularly and irregularly spaced data, and a roughness penalty can be further applied to control the smoothness of the predicted curve. The difficulty in implementing both those features lies in the definition of objective functions that can be back-propagated. In our experiments, we demonstrate that our model outperforms the conventional function-on-scalar regression model in multiple scenarios while computationally scaling better with the dimension of the predictors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,502 |
1811.04369 | User Modeling for Task Oriented Dialogues | We introduce end-to-end neural network based models for simulating users of task-oriented dialogue systems. User simulation in dialogue systems is crucial from two different perspectives: (i) automatic evaluation of different dialogue models, and (ii) training task-oriented dialogue systems. We design a hierarchical sequence-to-sequence model that first encodes the initial user goal and system turns into fixed-length representations using Recurrent Neural Networks (RNNs). It then encodes the dialogue history using another RNN layer. At each turn, user responses are decoded from the hidden representations of the dialogue-level RNN. This hierarchical user simulator (HUS) approach allows the model to capture undiscovered parts of the user goal without the need for explicit dialogue state tracking. We further develop several variants by utilizing a latent variable model to inject random variations into user responses to promote diversity in simulated user responses, and a novel goal regularization mechanism to penalize divergence of user responses from the initial user goal. We evaluate the proposed models on the movie ticket booking domain by systematically interacting each user simulator with various dialogue system policies trained with different objectives and users. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 113,072 |
2311.03626 | PINNs-TF2: Fast and User-Friendly Physics-Informed Neural Networks in TensorFlow V2 | Physics-informed neural networks (PINNs) have gained prominence for their capability to tackle supervised learning tasks that conform to physical laws, notably nonlinear partial differential equations (PDEs). This paper presents "PINNs-TF2", a Python package built on the TensorFlow V2 framework. It not only accelerates PINNs implementation but also simplifies user interactions by abstracting complex PDE challenges. We underscore the pivotal role of compilers in PINNs, highlighting their ability to boost performance by up to 119x. Across eight diverse examples, our package, integrated with XLA compilers, demonstrated its flexibility and achieved an average speed-up of 18.12 times over TensorFlow V1. Moreover, a real-world case study is implemented to underscore the compilers' potential to handle many trainable parameters and large batch sizes. For community engagement and future enhancements, our package's source code is openly available at: https://github.com/rezaakb/pinns-tf2. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 405,919 |
2103.03102 | Benchmarking Robustness of Deep Learning Classifiers Using Two-Factor Perturbation | This paper adds to the fundamental body of work on benchmarking the robustness of deep learning (DL) classifiers. We introduce a new benchmarking methodology to evaluate the robustness of DL classifiers, along with a new four-quadrant statistical visualization tool covering minimum accuracy, maximum accuracy, mean accuracy, and coefficient of variation. To measure robust DL classifiers, we created a comprehensive benchmark of 69 image sets, including a clean set, sets with single-factor perturbations, and sets with two-factor perturbation conditions. After collecting experimental results, we first report that using two-factor perturbed images improves both the robustness and accuracy of DL classifiers. The two-factor perturbations include (1) two digital perturbations (salt & pepper noise and Gaussian noise) applied in both sequences, and (2) one digital perturbation (salt & pepper noise) and a geometric perturbation (rotation) applied in both sequences. All source code, related image sets, and preliminary data and figures are shared on GitHub to support future academic research and industry projects. The web resources are located at https://github.com/caperock/robustai | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 223,173 |
2011.14565 | Deep Implicit Templates for 3D Shape Representation | Deep implicit functions (DIFs), as a kind of 3D shape representation, are becoming more and more popular in the 3D vision community due to their compactness and strong representation power. However, unlike polygon mesh-based templates, it remains a challenge to reason dense correspondences or other semantic relationships across shapes represented by DIFs, which limits its applications in texture transfer, shape analysis and so on. To overcome this limitation and also make DIFs more interpretable, we propose Deep Implicit Templates, a new 3D shape representation that supports explicit correspondence reasoning in deep implicit representations. Our key idea is to formulate DIFs as conditional deformations of a template implicit function. To this end, we propose Spatial Warping LSTM, which decomposes the conditional spatial transformation into multiple affine transformations and guarantees generalization capability. Moreover, the training loss is carefully designed in order to achieve high reconstruction accuracy while learning a plausible template with accurate correspondences in an unsupervised manner. Experiments show that our method can not only learn a common implicit template for a collection of shapes, but also establish dense correspondences across all the shapes simultaneously without any supervision. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 208,814 |
2112.11856 | Semantically enriched spatial modelling of industrial indoor environments enabling location-based services | This paper presents a concept for a software system called RAIL that represents industrial indoor environments in a dynamic spatial model, aimed at easing the development and provision of location-based services. RAIL integrates data from different sensor modalities and additional contextual information through a unified interface. Approaches to environmental modelling from other domains are reviewed and analyzed for their suitability regarding the requirements of our target domains: intralogistics and production. Subsequently, a novel way of modelling data representing indoor space, and an architecture for the software system, are proposed. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 272,820 |
1809.05676 | Deterministic Implementations for Reproducibility in Deep Reinforcement Learning | While deep reinforcement learning (DRL) has led to numerous successes in recent years, reproducing these successes can be extremely challenging. One reproducibility challenge particularly relevant to DRL is nondeterminism in the training process, which can substantially affect the results. Motivated by this challenge, we study the positive impacts of deterministic implementations in eliminating nondeterminism in training. To do so, we consider the particular case of the deep Q-learning algorithm, for which we produce a deterministic implementation by identifying and controlling all sources of nondeterminism in the training process. One by one, we then allow individual sources of nondeterminism to affect our otherwise deterministic implementation, and measure the impact of each source on the variance in performance. We find that individual sources of nondeterminism can substantially impact the performance of the agent, illustrating the benefits of deterministic implementations. In addition, we discuss the important role of deterministic implementations in achieving exact replicability of results. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 107,841 |
2308.06182 | Noise-Resilient Designs for Optical Neural Networks | All analog signal processing is fundamentally subject to noise, and this is also the case in modern implementations of Optical Neural Networks (ONNs). Therefore, to mitigate noise in ONNs, we propose two designs that are constructed from a given, possibly trained, Neural Network (NN) that one wishes to implement. Both designs ensure that the resulting ONN gives outputs close to those of the desired NN. To establish the latter, we analyze the designs mathematically. Specifically, we investigate a probabilistic framework for the first design that establishes that the design is correct, i.e., for any feed-forward NN with Lipschitz continuous activation functions, an ONN can be constructed that produces output arbitrarily close to the original. ONNs constructed with the first design thus also inherit the universal approximation property of NNs. For the second design, we restrict the analysis to NNs with linear activation functions and characterize the ONNs' output distribution using exact formulas. Finally, we report on numerical experiments with LeNet ONNs that give insight into the number of components required in these designs for certain accuracy gains. We specifically study the effect of noise as a function of the depth of an ONN. The results indicate that, in practice, adding just a few components in the manner of the first or second design can already be expected to increase the accuracy of ONNs considerably. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 385,054 |
2410.02796 | Toward Adaptive Tracking and Communication via an Airborne Maneuverable Bi-Static ISAC System | In this letter, we propose an airborne maneuverable bi-static integrated sensing and communication (ISAC) system in which both the transmitter and receiver are unmanned aerial vehicles (UAVs). By timely forming a dynamic bi-static range based on the motion information of the target, such a system can provide adaptive two-dimensional tracking and communication services. Towards this end, a trajectory optimization problem for both the transmit and receive UAVs is formulated to achieve highly accurate motion-state estimation by minimizing the time-variant Cramer-Rao bound, subject to a sufficient communication signal-to-noise ratio to maintain the communication channel prediction error. We then develop an efficient approach based on the successive convex approximation technique and the S-procedure to address the problem. Numerical results demonstrate that our proposed airborne maneuverable bi-static ISAC system obtains higher tracking accuracy than static or semi-dynamic ISAC systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 494,464 |
2003.06100 | Learning by Sampling and Compressing: Efficient Graph Representation Learning with Extremely Limited Annotations | Graph convolutional networks (GCNs) attract intensive research interest with broad applications. While existing work has mainly focused on designing novel GCN architectures for better performance, few studies have addressed a practical yet challenging problem: how to learn GCNs from data with extremely limited annotation? In this paper, we propose a new learning method based on a sampling strategy and model compression to overcome this challenge. Our approach has multifold advantages: 1) the adaptive sampling strategy largely suppresses the GCN training deviation over uniform sampling; 2) compressed GCN-based methods with a smaller scale of parameters need less labeled data to train; 3) the smaller scale of training data reduces the human cost of labeling. We choose six popular GCN baselines and conduct extensive experiments on three real-world datasets. The results show that by applying our method, all GCN baselines cut down the annotation requirement by as much as 90$\%$ and compress the scale of parameters by more than 6$\times$ without sacrificing their strong performance. This verifies that our training method can extend existing semi-supervised GCN-based methods to scenarios with extremely small amounts of labeled data. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 168,025 |
2208.14708 | Classical-to-quantum convolutional neural network transfer learning | Machine learning using quantum convolutional neural networks (QCNNs) has demonstrated success in both quantum and classical data classification. In previous studies, QCNNs attained a higher classification accuracy than their classical counterparts under the same training conditions in the few-parameter regime. However, the general performance of large-scale quantum models is difficult to examine because of the limited size of quantum circuits, which can be reliably implemented in the near future. We propose transfer learning as an effective strategy for utilizing small QCNNs in the noisy intermediate-scale quantum era to the full extent. In the classical-to-quantum transfer learning framework, a QCNN can solve complex classification problems without requiring a large-scale quantum circuit by utilizing a pre-trained classical convolutional neural network (CNN). We perform numerical simulations of QCNN models with various sets of quantum convolution and pooling operations for MNIST data classification under transfer learning, in which a classical CNN is trained with Fashion-MNIST data. The results show that transfer learning from classical to quantum CNN performs considerably better than purely classical transfer learning models under similar training conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 315,403 |
2302.00967 | Energy Efficiency of Training Neural Network Architectures: An Empirical Study | The evaluation of Deep Learning models has traditionally focused on criteria such as accuracy, F1 score, and related measures. The increasing availability of high computational power environments allows the creation of deeper and more complex models. However, the computations needed to train such models entail a large carbon footprint. In this work, we study the relations between DL model architectures and their environmental impact in terms of energy consumed and CO$_2$ emissions produced during training by means of an empirical study using Deep Convolutional Neural Networks. Concretely, we study: (i) the impact of the architecture and the location where the computations are hosted on the energy consumption and emissions produced; (ii) the trade-off between accuracy and energy efficiency; and (iii) the difference in the method of measurement of the energy consumed using software-based and hardware-based tools. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 343,430 |
1610.05729 | Using Centroidal Voronoi Tessellations to Scale Up the Multi-dimensional Archive of Phenotypic Elites Algorithm | The recently introduced Multi-dimensional Archive of Phenotypic Elites (MAP-Elites) is an evolutionary algorithm capable of producing a large archive of diverse, high-performing solutions in a single run. It works by discretizing a continuous feature space into unique regions according to the desired discretization per dimension. While simple, this algorithm has a main drawback: it cannot scale to high-dimensional feature spaces, since the number of regions increases exponentially with the number of dimensions. In this paper, we address this limitation by introducing a simple extension of MAP-Elites that has a constant, pre-defined number of regions irrespective of the dimensionality of the feature space. Our main insight is that methods from computational geometry can partition a high-dimensional space into well-spread geometric regions. In particular, our algorithm uses a centroidal Voronoi tessellation (CVT) to divide the feature space into a desired number of regions; it then places every generated individual in its closest region, replacing a less fit one if the region is already occupied. We demonstrate the effectiveness of the new "CVT-MAP-Elites" algorithm in high-dimensional feature spaces through comparisons against MAP-Elites in maze navigation and hexapod locomotion tasks. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 62,554 |
2208.06956 | ARIEL: Adversarial Graph Contrastive Learning | Contrastive learning is an effective unsupervised method in graph representation learning, and the key component of contrastive learning lies in the construction of positive and negative samples. Previous methods usually utilize the proximity of nodes in the graph as the principle. Recently, the data-augmentation-based contrastive learning method has advanced to show great power in the visual domain, and some works extended this method from images to graphs. However, unlike the data augmentation on images, the data augmentation on graphs is far less intuitive and much harder to provide high-quality contrastive samples, which leaves much space for improvement. In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints. We develop a new technique called information regularization for stable training and use subgraph sampling for scalability. We generalize our method from node-level contrastive learning to the graph level by treating each graph instance as a super-node. ARIEL consistently outperforms the current graph contrastive learning methods for both node-level and graph-level classification tasks on real-world datasets. We further demonstrate that ARIEL is more robust in the face of adversarial attacks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,880 |
0810.4611 | Learning Isometric Separation Maps | Maximum Variance Unfolding (MVU) and its variants have been very successful in embedding data-manifolds in lower dimensional spaces, often revealing the true intrinsic dimension. In this paper we show how to also incorporate supervised class information into an MVU-like method without breaking its convexity. We call this method the Isometric Separation Map and we show that the resulting kernel matrix can be used as a binary/multiclass Support Vector Machine-like method in a semi-supervised (transductive) framework. We also show that the method always finds a kernel matrix that linearly separates the training data exactly without projecting them in infinite dimensional spaces. In traditional SVMs we choose a kernel and hope that the data become linearly separable in the kernel space. In this paper we show how the hyperplane can be chosen ad-hoc and the kernel is trained so that data are always linearly separable. Comparisons with Large Margin SVMs show comparable performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 2,557 |
2312.12610 | Enhancing predictive capabilities in fusion burning plasmas through
surrogate-based optimization in core transport solvers | This work presents the PORTALS framework, which leverages surrogate modeling and optimization techniques to enable the prediction of core plasma profiles and performance with nonlinear gyrokinetic simulations at significantly reduced cost, with no loss of accuracy. The efficiency of PORTALS is benchmarked against standard methods, and its full potential is demonstrated on a unique, simultaneous 5-channel (electron temperature, ion temperature, electron density, impurity density and angular rotation) prediction of steady-state profiles in a DIII-D ITER Similar Shape plasma with GPU-accelerated, nonlinear CGYRO. This paper also provides general guidelines for accurate performance predictions in burning plasmas and the impact of transport modeling in fusion pilot plants studies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 417,021 |
2501.04839 | DRL-Based Medium-Term Planning of Renewable-Integrated Self-Scheduling
Cascaded Hydropower to Guide Wholesale Market Participation | For self-scheduling cascaded hydropower (S-CHP) facilities, medium-term planning is a critical step that coordinates water availability over the medium-term horizon, providing water usage guidance for their short-term operations in wholesale market participation. Typically, medium-term planning strategies (e.g., reservoir storage targets at the end of each short-term period) are determined by either optimization methods or rules of thumb. However, with the integration of variable renewable energy sources (VRESs), optimization-based methods suffer from deviations between the anticipated and actual reservoir storage, while rules of thumb could be financially conservative, thereby compromising short-term operating profitability in wholesale market participation. This paper presents a deep reinforcement learning (DRL)-based framework to derive medium-term planning policies for VRES-integrated S-CHPs (VS-CHPs), which can leverage contextual information underneath individual short-term periods and train planning policies by their induced short-term operating profits in wholesale market participation. The proposed DRL-based framework offers two practical merits. First, its planning strategies consider both seasonal requirements of reservoir storage and needs for short-term operating profits. Second, it adopts a multi-parametric programming-based strategy to accelerate the expensive training process associated with multi-step short-term operations. Finally, the DRL-based framework is evaluated on a real-world VS-CHP, demonstrating its advantages over current practice. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 523,362 |
2106.15538 | Measurement-Based Parameter Identification of DC-DC Converters with
Adaptive Approximate Bayesian Computation | The recent advances in power plants and energy resources have extended the applications of DC-DC converters in power systems (especially in the context of DC micro-grids). Parameter identification can extract the parameters of the converters and generate accurate discrete simulation models. In this paper, we propose a measurement-based converter parameter calibration method using adaptive Approximate Bayesian Computation with a sequential Monte Carlo sampler (ABC SMC), which estimates the parameters related to passive and parasitic components. First, we propose a way to find suitable prior distributions for parameters about which no prior information is available. With these prior distributions, we can use ABC SMC to find the exact values of the converter parameters. We choose the distance function carefully and, based on simulations, select the best method for threshold sequencing. To improve the computational efficiency of the algorithm, we propose an adaptive weight that helps it find the optimal values with fewer simulations. The effectiveness of the proposed method is validated for a DC-DC buck converter. The results show that the proposed approach can accurately and efficiently estimate the posterior distributions of the buck parameters subject to gross errors in the prior distributions of the parameters. The proposed algorithm can also be applied to other parameter identification and optimization applications such as rectifiers, filters, or power supplies, among others. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 243,800 |
2406.09940 | Implementing engrams from a machine learning perspective: XOR as a basic
motif | We have previously presented the idea of how complex multimodal information could be represented in our brains in a compressed form, following mechanisms similar to those employed in machine learning tools, like autoencoders. In this short comment note we reflect, mainly with a didactical purpose, upon the basic question for a biological implementation: what could be the mechanism working as a loss function, and how it could be connected to a neuronal network providing the required feedback to build a simple training configuration. We present our initial ideas based on a basic motif that implements an XOR switch, using few excitatory and inhibitory neurons. Such a motif is guided by a principle of homeostasis, and it implements a loss function that could provide feedback to other neuronal structures, establishing a control system. We analyse the presence of this XOR motif in the connectome of C.Elegans, and indicate the relationship with the well-known lateral inhibition motif. We then explore how to build a basic biological neuronal structure with learning capacity integrating this XOR motif. Guided by the computational analogy, we show an initial example that indicates the feasibility of this approach, applied to learning binary sequences, as is the case for simple melodies. In summary, we provide didactical examples exploring the parallelism between biological and computational learning mechanisms, identifying basic motifs and training procedures, and how an engram encoding a melody could be built using a simple recurrent network involving both excitatory and inhibitory neurons. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 464,154 |
1606.05784 | Hitting times of local and global optima in genetic algorithms with very
high selection pressure | The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant less than one. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 57,469 |
1212.0467 | Low-rank Matrix Completion using Alternating Minimization | Alternating minimization represents a widely applicable and empirically successful approach for finding low-rank matrices that best fit the given data. For example, for the problem of low-rank matrix completion, this method is believed to be one of the most accurate and efficient, and formed a major component of the winning entry in the Netflix Challenge. In the alternating minimization approach, the low-rank target matrix is written in a bi-linear form, i.e. $X = UV^\dag$; the algorithm then alternates between finding the best $U$ and the best $V$. Typically, each alternating step in isolation is convex and tractable. However the overall problem becomes non-convex and there has been almost no theoretical understanding of when this approach yields a good result. In this paper we present first theoretical analysis of the performance of alternating minimization for matrix completion, and the related problem of matrix sensing. For both these problems, celebrated recent results have shown that they become well-posed and tractable once certain (now standard) conditions are imposed on the problem. We show that alternating minimization also succeeds under similar conditions. Moreover, compared to existing results, our paper shows that alternating minimization guarantees faster (in particular, geometric) convergence to the true matrix, while allowing a simpler analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 20,100 |
2402.09081 | Low-Rank Extragradient Methods for Scalable Semidefinite Optimization | We consider several classes of highly important semidefinite optimization problems that involve both a convex objective function (smooth or nonsmooth) and additional linear or nonlinear smooth and convex constraints, which are ubiquitous in statistics, machine learning, combinatorial optimization, and other domains. We focus on high-dimensional and plausible settings in which the problem admits a low-rank solution which also satisfies a low-rank complementarity condition. We provide several theoretical results proving that, under these circumstances, the well-known Extragradient method, when initialized in the proximity of an optimal primal-dual solution, converges to a solution of the constrained optimization problem with its standard convergence rates guarantees, using only low-rank singular value decompositions (SVD) to project onto the positive semidefinite cone, as opposed to computationally-prohibitive full-rank SVDs required in worst-case. Our approach is supported by numerical experiments conducted with a dataset of Max-Cut instances. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 429,359 |
2001.01248 | Exploiting Event Cameras for Spatio-Temporal Prediction of Fast-Changing
Trajectories | This paper investigates trajectory prediction for robotics, to improve the interaction of robots with moving targets, such as catching a bouncing ball. Unexpected, highly-non-linear trajectories cannot easily be predicted with regression-based fitting procedures, therefore we apply state of the art machine learning, specifically based on Long-Short Term Memory (LSTM) architectures. In addition, fast moving targets are better sensed using event cameras, which produce an asynchronous output triggered by spatial change, rather than at fixed temporal intervals as with traditional cameras. We investigate how LSTM models can be adapted for event camera data, and in particular look at the benefit of using asynchronously sampled data. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 159,448 |
2103.02926 | Calibrated simplex-mapping classification | We propose a novel methodology for general multi-class classification in arbitrary feature spaces, which results in a potentially well-calibrated classifier. Calibrated classifiers are important in many applications because, in addition to the prediction of mere class labels, they also yield a confidence level for each of their predictions. In essence, the training of our classifier proceeds in two steps. In a first step, the training data is represented in a latent space whose geometry is induced by a regular $(n-1)$-dimensional simplex, $n$ being the number of classes. We design this representation in such a way that it well reflects the feature space distances of the datapoints to their own- and foreign-class neighbors. In a second step, the latent space representation of the training data is extended to the whole feature space by fitting a regression model to the transformed data. With this latent-space representation, our calibrated classifier is readily defined. We rigorously establish its core theoretical properties and benchmark its prediction and calibration properties by means of various synthetic and real-world data sets from different application domains. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 223,115 |
2106.06933 | Active Learning for Network Traffic Classification: A Technical Study | Network Traffic Classification (NTC) has become an important feature in various network management operations, e.g., Quality of Service (QoS) provisioning and security services. Machine Learning (ML) algorithms as a popular approach for NTC can promise reasonable accuracy in classification and deal with encrypted traffic. However, ML-based NTC techniques suffer from the shortage of labeled traffic data which is the case in many real-world applications. This study investigates the applicability of an active form of ML, called Active Learning (AL), in NTC. AL reduces the need for a large number of labeled examples by actively choosing the instances that should be labeled. The study first provides an overview of NTC and its fundamental challenges along with surveying the literature on ML-based NTC methods. Then, it introduces the concepts of AL, discusses it in the context of NTC, and reviews the literature in this field. Further, challenges and open issues in AL-based classification of network traffic are discussed. Moreover, as a technical survey, some experiments are conducted to show the broad applicability of AL in NTC. The simulation results show that AL can achieve high accuracy with a small amount of data. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 240,692 |
2111.05670 | DeCOM: Decomposed Policy for Constrained Cooperative Multi-Agent
Reinforcement Learning | In recent years, multi-agent reinforcement learning (MARL) has presented impressive performance in various applications. However, physical limitations, budget restrictions, and many other factors usually impose \textit{constraints} on a multi-agent system (MAS), which cannot be handled by traditional MARL frameworks. Specifically, this paper focuses on constrained MASes where agents work \textit{cooperatively} to maximize the expected team-average return under various constraints on expected team-average costs, and develops a \textit{constrained cooperative MARL} framework, named DeCOM, for such MASes. In particular, DeCOM decomposes the policy of each agent into two modules, which empowers information sharing among agents to achieve better cooperation. In addition, with such modularization, the training algorithm of DeCOM separates the original constrained optimization into an unconstrained optimization on reward and a constraints satisfaction problem on costs. DeCOM then iteratively solves these problems in a computationally efficient manner, which makes DeCOM highly scalable. We also provide theoretical guarantees on the convergence of DeCOM's policy update algorithm. Finally, we validate the effectiveness of DeCOM with various types of costs in both toy and large-scale (with 500 agents) environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 265,851 |
1905.10884 | Bayesian Learning of Sum-Product Networks | Sum-product networks (SPNs) are flexible density estimators and have received significant attention due to their attractive inference properties. While parameter learning in SPNs is well developed, structure learning leaves something to be desired: Even though there is a plethora of SPN structure learners, most of them are somewhat ad-hoc and based on intuition rather than a clear learning principle. In this paper, we introduce a well-principled Bayesian framework for SPN structure learning. First, we decompose the problem into i) laying out a computational graph, and ii) learning the so-called scope function over the graph. The first is rather unproblematic and akin to neural network architecture validation. The second represents the effective structure of the SPN and needs to respect the usual structural constraints in SPN, i.e. completeness and decomposability. While representing and learning the scope function is somewhat involved in general, in this paper, we propose a natural parametrisation for an important and widely used special case of SPNs. These structural parameters are incorporated into a Bayesian model, such that simultaneous structure and parameter learning is cast into monolithic Bayesian posterior inference. In various experiments, our Bayesian SPNs often improve test likelihoods over greedy SPN learners. Further, since the Bayesian framework protects against overfitting, we can evaluate hyper-parameters directly on the Bayesian model score, waiving the need for a separate validation set, which is especially beneficial in low data regimes. Bayesian SPNs can be applied to heterogeneous domains and can easily be extended to nonparametric formulations. Moreover, our Bayesian approach is the first, which consistently and robustly learns SPN structures under missing data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 132,234 |
2502.10303 | Reinforcement Learning in Strategy-Based and Atari Games: A Review of
Google DeepMinds Innovations | Reinforcement Learning (RL) has been widely used in many applications, particularly in gaming, which serves as an excellent training ground for AI models. Google DeepMind has pioneered innovations in this field, employing reinforcement learning algorithms, including model-based, model-free, and deep Q-network approaches, to create advanced AI models such as AlphaGo, AlphaGo Zero, and MuZero. AlphaGo, the initial model, integrates supervised learning and reinforcement learning to master the game of Go, surpassing professional human players. AlphaGo Zero refines this approach by eliminating reliance on human gameplay data, instead utilizing self-play for enhanced learning efficiency. MuZero further extends these advancements by learning the underlying dynamics of game environments without explicit knowledge of the rules, achieving adaptability across various games, including complex Atari games. This paper reviews the significance of reinforcement learning applications in Atari and strategy-based games, analyzing these three models, their key innovations, training processes, challenges encountered, and improvements made. Additionally, we discuss advancements in the field of gaming, including MiniZero and multi-agent models, highlighting future directions and emerging AI models from Google DeepMind. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 533,811 |
1612.01749 | FoCUS: Fourier-based Coded Ultrasound | Modern imaging systems typically use single-carrier short pulses for transducer excitation. Coded signals together with pulse compression are successfully used in radar and communication to increase the amount of transmitted energy. Previous research verified significant improvement in SNR and imaging depth for ultrasound imaging with coded signals. Since pulse compression needs to be applied at each transducer element, the implementation of coded excitation (CE) in array imaging is computationally complex. Applying pulse compression on the beamformer output reduces the computational load but also degrades both the axial and lateral point spread function (PSF) compromising image quality. In this work we present an approach for efficient implementation of pulse compression by integrating it into frequency domain beamforming. This method leads to significant reduction in the amount of computations without affecting axial resolution. The lateral resolution is dictated by the factor of savings in computational load. We verify the performance of our method on a Verasonics imaging system and compare the resulting images to time-domain processing. We show that up to 77 fold reduction in computational complexity can be achieved in a typical imaging setups. The efficient implementation makes CE a feasible approach in array imaging paving the way to enhanced SNR as well as improved imaging depth and frame-rate. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,139 |
2309.09374 | Fully Convolutional Generative Machine Learning Method for Accelerating
Non-Equilibrium Greens Function Simulations | This work describes a novel simulation approach that combines machine learning and device modelling simulations. The device simulations are based on the quantum mechanical non-equilibrium Greens function (NEGF) approach and the machine learning method is an extension to a convolutional generative network. We have named our new simulation approach ML-NEGF and we have implemented it in our in-house simulator called NESS (nano-electronics simulations software). The reported results demonstrate the improved convergence speed of the ML-NEGF method in comparison to the standard NEGF approach. The trained ML model effectively learns the underlying physics of nano-sheet transistor behaviour, resulting in faster convergence of the coupled Poisson-NEGF simulations. Quantitatively, our ML- NEGF approach achieves an average convergence acceleration of 60%, substantially reducing the computational time while maintaining the same accuracy. | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 392,581 |
2310.13787 | Enhancing Illicit Activity Detection using XAI: A Multimodal Graph-LLM
Framework | Financial cybercrime prevention is an increasing issue with many organisations and governments. As deep learning models have progressed to identify illicit activity on various financial and social networks, the explainability behind the model decisions has been lacklustre with the investigative analyst at the heart of any deep learning platform. In our paper, we present a state-of-the-art, novel multimodal proactive approach to addressing XAI in financial cybercrime detection. We leverage a triad of deep learning models designed to distill essential representations from transaction sequencing, subgraph connectivity, and narrative generation to significantly streamline the analyst's investigative process. Our narrative generation proposal leverages LLM to ingest transaction details and output contextual narrative for an analyst to understand a transaction and its metadata much further. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 401,574 |
2303.08466 | Mining False Positive Examples for Text-Based Person Re-identification | Text-based person re-identification (ReID) aims to identify images of the targeted person from a large-scale person image database according to a given textual description. However, due to significant inter-modal gaps, text-based person ReID remains a challenging problem. Most existing methods generally rely heavily on the similarity contributed by matched word-region pairs, while neglecting mismatched word-region pairs which may play a decisive role. Accordingly, we propose to mine false positive examples (MFPE) via a jointly optimized multi-branch architecture to handle this problem. MFPE contains three branches including a false positive mining (FPM) branch to highlight the role of mismatched word-region pairs. Besides, MFPE delicately designs a cross-relu loss to increase the gap of similarity scores between matched and mismatched word-region pairs. Extensive experiments on CUHK-PEDES demonstrate the superior effectiveness of MFPE. Our code is released at https://github.com/xx-adeline/MFPE. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,658 |
2112.13237 | CABACE: Injecting Character Sequence Information and Domain Knowledge
for Enhanced Acronym and Long-Form Extraction | Acronyms and long-forms are commonly found in research documents, more so in documents from scientific and legal domains. Many acronyms used in such documents are domain-specific and are very rarely found in normal text corpora. Owing to this, transformer-based NLP models often detect OOV (Out of Vocabulary) for acronym tokens, especially for non-English languages, and their performance suffers while linking acronyms to their long forms during extraction. Moreover, pretrained transformer models like BERT are not specialized to handle scientific and legal documents. With these points being the overarching motivation behind this work, we propose a novel framework CABACE: Character-Aware BERT for ACronym Extraction, which takes into account character sequences in text and is adapted to scientific and legal domains by masked language modelling. We further use an objective with an augmented loss function, adding the max loss and mask loss terms to the standard cross-entropy loss for training CABACE. We further leverage pseudo labelling and adversarial data generation to improve the generalizability of the framework. Experimental results prove the superiority of the proposed framework in comparison to various baselines. Additionally, we show that the proposed framework is better suited than baseline models for zero-shot generalization to non-English languages, thus reinforcing the effectiveness of our approach. Our team BacKGProp secured the highest scores on the French dataset, second-highest on Danish and Vietnamese, and third-highest in the English-Legal dataset on the global leaderboard for the acronym extraction (AE) shared task at SDU AAAI-22. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 273,182 |
1903.10794 | RecSys-DAN: Discriminative Adversarial Networks for Cross-Domain
Recommender Systems | Data sparsity and data imbalance are practical and challenging issues in cross-domain recommender systems. This paper addresses those problems by leveraging the concepts which derive from representation learning, adversarial learning and transfer learning (particularly, domain adaptation). Although various transfer learning methods have shown promising performance in this context, our proposed novel method RecSys-DAN focuses on alleviating the cross-domain and within-domain data sparsity and data imbalance and learns transferable latent representations for users, items and their interactions. Different from existing approaches, the proposed method transfers the latent representations from a source domain to a target domain in an adversarial way. The mapping functions in the target domain are learned by playing a min-max game with an adversarial loss, aiming to generate domain indistinguishable representations for a discriminator. Four neural architectural instances of ResSys-DAN are proposed and explored. Empirical results on real-world Amazon data show that, even without using labeled data (i.e., ratings) in the target domain, RecSys-DAN achieves competitive performance as compared to the state-of-the-art supervised methods. More importantly, RecSys-DAN is highly flexible to both unimodal and multimodal scenarios, and thus it is more robust to the cold-start recommendation which is difficult for previous methods. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 125,365 |
2304.12908 | Direct Collocation Methods for Trajectory Optimization in Constrained
Robotic Systems | Direct collocation methods are powerful tools to solve trajectory optimization problems in robotics. While their resulting trajectories tend to be dynamically accurate, they may also present large kinematic errors in the case of constrained mechanical systems, i.e., those whose state coordinates are subject to holonomic or nonholonomic constraints, like loop-closure or rolling-contact constraints. These constraints confine the robot trajectories to an implicitly-defined manifold, which complicates the computation of accurate solutions. Discretization errors inherent to the transcription of the problem easily make the trajectories drift away from this manifold, which results in physically inconsistent motions that are difficult to track with a controller. This paper reviews existing methods to deal with this problem and proposes new ones to overcome their limitations. Current approaches either disregard the kinematic constraints (which leads to drift accumulation) or modify the system dynamics to keep the trajectory close to the manifold (which adds artificial forces or energy dissipation to the system). The methods we propose, in contrast, achieve full drift elimination on the discrete trajectory, or even along the continuous one, without artificial modifications of the system dynamics. We illustrate and compare the methods using various examples of different complexity. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 360,386 |
2210.13454 | A Novel Block-Wise Index Modulation Scheme for High-Mobility OTFS
Communications | As a promising technique for high-mobility wireless communications, orthogonal time frequency space (OTFS) has been proved to enjoy excellent advantages with respect to traditional orthogonal frequency division multiplexing (OFDM). However, a challenging problem is to design efficient systems to further improve the performance. In this paper, we propose a novel block-wise index modulation (IM) scheme for OTFS systems, named Doppler-IM with OTFS (DoIM-OTFS), where a block of Doppler resource bins are activated simultaneously. For practical implementation, we develop a low complexity customized message passing (CMP) algorithm for our proposed DoIM-OTFS scheme. Simulation results demonstrate our proposed DoIM-OTFS system outperforms traditional OTFS system without IM. The proposed CMP algorithm can achieve desired performance and robustness to the imperfect channel state information (CSI). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 326,171 |
1503.08167 | Normalization of Non-Standard Words in Croatian Texts | This paper presents text normalization, which is an integral part of any text-to-speech synthesis system. Text normalization is a set of methods for writing non-standard words, such as numbers, dates, times, abbreviations, acronyms, and the most common symbols, in their full expanded form. A complete taxonomy for the classification of non-standard words in the Croatian language is proposed, together with rule-based normalization methods combined with a lookup dictionary. The achieved token rate for normalization of Croatian texts is 95%, where 80% of expanded words are in correct morphological form. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 41,554 |
2101.12365 | Sharp Bounds on the Approximation Rates, Metric Entropy, and $n$-widths
of Shallow Neural Networks | In this article, we study approximation properties of the variation spaces corresponding to shallow neural networks with a variety of activation functions. We introduce two main tools for estimating the metric entropy, approximation rates, and $n$-widths of these spaces. First, we introduce the notion of a smoothly parameterized dictionary and give upper bounds on the non-linear approximation rates, metric entropy and $n$-widths of their absolute convex hull. The upper bounds depend upon the order of smoothness of the parameterization. This result is applied to dictionaries of ridge functions corresponding to shallow neural networks, and they improve upon existing results in many cases. Next, we provide a method for lower bounding the metric entropy and $n$-widths of variation spaces which contain certain classes of ridge functions. This result gives sharp lower bounds on the $L^2$-approximation rates, metric entropy, and $n$-widths for variation spaces corresponding to neural networks with a range of important activation functions, including ReLU$^k$ activation functions and sigmoidal activation functions with bounded variation. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 217,542 |
2109.08059 | FOMO: Topics versus documents in legal eDiscovery | In the United States, the parties to a lawsuit are required to search through their electronically stored information to find documents that are relevant to the specific case and produce them to their opposing party. Negotiations over the scope of these searches often reflect a fear that something will be missed (Fear of Missing Out: FOMO). A Recall level of 80%, for example, means that 20% of the relevant documents will be left unproduced. This paper makes the argument that eDiscovery is the process of identifying responsive information, not identifying documents. Documents are the carriers of the information; they are not the direct targets of the process. A given document may contain one or more topics or factoids and a factoid may appear in more than one document. The coupon collector's problem, Heaps law, and other analyses provide ways to model the problem of finding information from among documents. In eDiscovery, however, the parties do not know how many factoids there might be in a collection or their probabilities. This paper describes a simple model that estimates the confidence that a fact will be omitted from the produced set (the identified set), while being contained in the missed set. Two data sets are then analyzed, a small set involving microaggressions and larger set involving classification of web pages. Both show that it is possible to discover at least one example of each available topic within a relatively small number of documents, meaning the further effort will not return additional novel information. The smaller data set is also used to investigate whether the non-random order of searching for responsive documents commonly used in eDiscovery (called continuous active learning) affects the distribution of topics-it does not. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 255,761 |
2409.06745 | Personalized Knowledge Tracing through Student Representation
Reconstruction and Class Imbalance Mitigation | Knowledge tracing is a technique that predicts students' future performance by analyzing their learning process through historical interactions with intelligent educational platforms, enabling a precise evaluation of their knowledge mastery. Recent studies have achieved significant progress by leveraging powerful deep neural networks. These models construct complex input representations using questions, skills, and other auxiliary information but overlook individual student characteristics, which limits the capability for personalized assessment. Additionally, the available datasets in the field exhibit class imbalance issues. Models that simply predict all responses as correct can yield impressive accuracy without substantial effort. In this paper, we propose PKT, a novel approach for personalized knowledge tracing. PKT reconstructs representations from sequences of interactions with a tutoring platform to capture latent information about the students. Moreover, PKT incorporates focal loss to prioritize minority classes, thereby achieving more balanced predictions. Extensive experimental results on four publicly available educational datasets demonstrate the advanced predictive performance of PKT in comparison with 16 state-of-the-art models. To ensure the reproducibility of our research, the code is publicly available at https://anonymous.4open.science/r/PKT. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 487,261
2309.17253 | Secondary Defense Strategies of AC Microgrids Against Generally
Unbounded Attacks | This paper develops fully distributed attack-resilient secondary defense strategies for AC microgrids, addressing a more general class of unbounded attacks on control input channels than those considered in the existing literature. The secondary control of each local inverter includes consensus-based voltage and current regulators utilizing relative information from neighboring inverters. This distributed control approach relies on localized control and a sparse communication network, making it susceptible to malicious cyber-physical attacks that can impair consensus performance and potentially destabilize the overall microgrid. In contrast to existing solutions that are limited to addressing either bounded faults and noise, or unbounded attacks with bounded first-order time derivatives, we aim to surpass these constraints and enhance the defense capability of counteracting cyber-physical attacks by enabling AC microgrids adopting the proposed strategies to withstand a much wider range of unbounded cyber-attack signals. Fully distributed attack-resilient secondary defense strategies are developed for AC microgrids to counteract the detrimental effects of generally unbounded attacks on control input channels. Rigorous proofs using Lyapunov techniques demonstrate that the proposed defense strategies accomplish uniformly ultimately bounded convergence on frequency regulation and simultaneously achieve voltage containment and active power sharing for multi-inverter-based AC microgrids in the face of generally unbounded attacks. The proposed defense strategies are validated on a modified IEEE 34-bus test feeder benchmark system incorporating four inverter-based DERs. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 395,697
1710.10654 | Delivery Time Minimization in Edge Caching: Synergistic Benefits of
Subspace Alignment and Zero Forcing | An emerging trend of next generation communication systems is to provide network edges with additional capabilities such as additional storage resources in the form of caches to reduce file delivery latency. To investigate this aspect, we study the fundamental limits of a cache-aided wireless network consisting of one central base station, $M$ transceivers and $K$ receivers from a latency-centric perspective. We use the normalized delivery time (NDT) to capture the per-bit latency for the worst-case file request pattern at high signal-to-noise ratios (SNR), normalized with respect to a reference interference-free system with unlimited transceiver cache capabilities. For various special cases with $M=\{1,2\}$ and $K=\{1,2,3\}$ that satisfy $M+K\leq 4$, we establish the optimal tradeoff between cache storage and latency. This is facilitated through establishing a novel converse (for arbitrary $M$ and $K$) and an achievability scheme on the NDT. Our achievability scheme is a synergistic combination of multicasting, zero-forcing beamforming and interference alignment. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 83,435 |
2409.20264 | First Order System Least Squares Neural Networks | We introduce a conceptual framework for numerically solving linear elliptic, parabolic, and hyperbolic PDEs on bounded, polytopal domains in Euclidean spaces by deep neural networks. The PDEs are recast as minimization of a least-squares (LSQ for short) residual of an equivalent, well-posed first-order system, over parametric families of deep neural networks. The associated LSQ residual is a) equal or proportional to a weak residual of the PDE, b) additive in terms of contributions from localized subnetworks, indicating locally ``out-of-equilibrium'' of neural networks with respect to the PDE residual, c) serves as numerical loss function for neural network training, and d) constitutes, even with incomplete training, a computable, (quasi-)optimal numerical error estimator in the context of adaptive LSQ finite element methods. In addition, an adaptive neural network growth strategy is proposed which, assuming exact numerical minimization of the LSQ loss functional, yields sequences of neural networks with realizations that converge rate-optimally to the exact solution of the first order system LSQ formulation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 493,062
1511.02992 | Traffic Sign Classification Using Deep Inception Based Convolutional
Networks | In this work, we propose a novel deep network for traffic sign classification that achieves outstanding performance on GTSRB, surpassing all previous methods. Our deep network consists of spatial transformer layers and a modified version of the inception module specifically designed for capturing local and global features together. Adopting these features allows our network to precisely classify intra-class samples even under deformations. Use of the spatial transformer layer makes this network more robust to deformations such as translation, rotation, and scaling of input images. Unlike existing approaches that are developed with hand-crafted features, multiple deep networks with huge numbers of parameters, and data augmentations, our method addresses the concern of exploding parameters and augmentations. We have achieved the state-of-the-art performance of 99.81\% on the GTSRB dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,704
2406.01994 | 3D Imaging of Complex Specular Surfaces by Fusing Polarimetric and
Deflectometric Information | Accurate and fast 3D imaging of specular surfaces still poses major challenges for state-of-the-art optical measurement principles. Frequently used methods, such as phase-measuring deflectometry (PMD) or shape-from-polarization (SfP), rely on strong assumptions about the measured objects, limiting their generalizability in broader application areas like medical imaging, industrial inspection, virtual reality, or cultural heritage analysis. In this paper, we introduce a measurement principle that utilizes a novel technique to effectively encode and decode the information contained in a light field reflected off a specular surface. We combine polarization cues from SfP with geometric information obtained from PMD to resolve all arising ambiguities in the 3D measurement. Moreover, our approach removes the unrealistic orthographic imaging assumption for SfP, which significantly improves the respective results. We showcase our new technique by demonstrating single-shot and multi-shot measurements on complex-shaped specular surfaces, displaying an evaluated accuracy of surface normals below $0.6^\circ$. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 460,564 |
2403.11678 | Exploring 3D-aware Latent Spaces for Efficiently Learning Numerous
Scenes | We present a method enabling the scaling of NeRFs to learn a large number of semantically-similar scenes. We combine two techniques to improve the required training time and memory cost per scene. First, we learn a 3D-aware latent space in which we train Tri-Plane scene representations, hence reducing the resolution at which scenes are learned. Moreover, we present a way to share common information across scenes, hence allowing for a reduction of model complexity to learn a particular scene. Our method reduces effective per-scene memory costs by 44% and per-scene time costs by 86% when training 1000 scenes. Our project page can be found at https://3da-ae.github.io . | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 438,809 |
1311.7080 | Cross-Domain Sparse Coding | Sparse coding has shown its power as an effective data representation method. However, up to now, all the sparse coding approaches are limited within the single domain learning problem. In this paper, we extend the sparse coding to cross domain learning problem, which tries to learn from a source domain to a target domain with significant different distribution. We impose the Maximum Mean Discrepancy (MMD) criterion to reduce the cross-domain distribution difference of sparse codes, and also regularize the sparse codes by the class labels of the samples from both domains to increase the discriminative ability. The encouraging experiment results of the proposed cross-domain sparse coding algorithm on two challenging tasks --- image classification of photograph and oil painting domains, and multiple user spam detection --- show the advantage of the proposed method over other cross-domain data representation methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 28,707 |
2308.11684 | User Identity Linkage in Social Media Using Linguistic and Social
Interaction Features | Social media users often hold several accounts in their effort to multiply the spread of their thoughts, ideas, and viewpoints. In the particular case of objectionable content, users tend to create multiple accounts to bypass the combating measures enforced by social media platforms and thus retain their online identity even if some of their accounts are suspended. User identity linkage aims to reveal social media accounts likely to belong to the same natural person so as to prevent the spread of abusive/illegal activities. To this end, this work proposes a machine learning-based detection model, which uses multiple attributes of users' online activity in order to identify whether two or more virtual identities belong to the same real natural person. The model's efficacy is demonstrated on two cases of abusive and terrorism-related Twitter content. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 387,242
1912.01166 | Different Set Domain Adaptation for Brain-Computer Interfaces: A Label
Alignment Approach | A brain-computer interface (BCI) system usually needs a long calibration session for each new subject/task to adjust its parameters, which impedes its transition from the laboratory to real-world applications. Domain adaptation, which leverages labeled data from auxiliary subjects/tasks (source domains), has demonstrated its effectiveness in reducing such calibration effort. Currently, most domain adaptation approaches require the source domains to have the same feature space and label space as the target domain, which limits their applications, as the auxiliary data may have different feature spaces and/or different label spaces. This paper considers different set domain adaptation for BCIs, i.e., the source and target domains have different label spaces. We introduce a practical setting of different label sets for BCIs, and propose a novel label alignment (LA) approach to align the source label space with the target label space. It has three desirable properties: 1) LA only needs as few as one labeled sample from each class of the target subject; 2) LA can be used as a preprocessing step before different feature extraction and classification algorithms; and, 3) LA can be integrated with other domain adaptation approaches to achieve even better performance. Experiments on two motor imagery datasets demonstrated the effectiveness of LA. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 156,005 |
2307.10225 | First-Order Stable Model Semantics with Intensional Functions | In classical logic, non-Boolean fluents, such as the location of an object, can be naturally described by functions. However, this is not the case in answer set programs, where the values of functions are pre-defined, and nonmonotonicity of the semantics is related to minimizing the extents of predicates but has nothing to do with functions. We extend the first-order stable model semantics by Ferraris, Lee, and Lifschitz to allow intensional functions -- functions that are specified by a logic program just like predicates are specified. We show that many known properties of the stable model semantics are naturally extended to this formalism and compare it with other related approaches to incorporating intensional functions. Furthermore, we use this extension as a basis for defining Answer Set Programming Modulo Theories (ASPMT), analogous to the way that Satisfiability Modulo Theories (SMT) is defined, allowing for SMT-like effective first-order reasoning in the context of ASP. Using SMT solving techniques involving functions, ASPMT can be applied to domains containing real numbers and alleviates the grounding problem. We show that other approaches to integrating ASP and CSP/SMT can be related to special cases of ASPMT in which functions are limited to non-intensional ones. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 380,472
2311.06074 | Two-compartment neuronal spiking model expressing brain-state specific
apical-amplification, -isolation and -drive regimes | Mounting experimental evidence suggests that brain-state-specific neural mechanisms, supported by connectomic architectures, play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence (e.g., from sensory systems). These mechanisms operate across multiple spatial and temporal scales, necessitating dedicated support at the levels of individual neurons and synapses. A notable feature within the neocortex is the structure of large, deep pyramidal neurons, which exhibit a distinctive separation between an apical dendritic compartment and a basal dendritic/perisomatic compartment. This separation is characterized by distinct patterns of incoming connections and brain-state-specific activation mechanisms, namely, apical amplification, isolation, and drive, which are associated with wakefulness, deeper NREM sleep stages, and REM sleep, respectively. The cognitive roles of apical mechanisms have been demonstrated in behaving animals. In contrast, classical models of learning in spiking networks are based on single-compartment neurons, lacking the ability to describe the integration of apical and basal/somatic information. This work aims to provide the computational community with a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning. This model includes a piece-wise linear transfer function (ThetaPlanes) at the highest abstraction level, making it suitable for use in large-scale bio-inspired artificial intelligence systems. A machine learning evolutionary algorithm, guided by a set of fitness functions, selected the parameters that define neurons expressing the desired apical mechanisms. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 406,808 |
1911.01352 | Learning from Explanations with Neural Execution Tree | While deep neural networks have achieved impressive performance on a range of NLP tasks, these data-hungry models heavily rely on labeled data, which restricts their applications in scenarios where data annotation is expensive. Natural language (NL) explanations have been demonstrated very useful additional supervision, which can provide sufficient domain knowledge for generating more labeled data over new instances, while the annotation time only doubles. However, directly applying them for augmenting model learning encounters two challenges: (1) NL explanations are unstructured and inherently compositional, which asks for a modularized model to represent their semantics, (2) NL explanations often have large numbers of linguistic variants, resulting in low recall and limited generalization ability. In this paper, we propose a novel Neural Execution Tree (NExT) framework to augment training data for text classification using NL explanations. After transforming NL explanations into executable logical forms by semantic parsing, NExT generalizes different types of actions specified by the logical forms for labeling data instances, which substantially increases the coverage of each NL explanation. Experiments on two NLP tasks (relation extraction and sentiment analysis) demonstrate its superiority over baseline methods. Its extension to multi-hop question answering achieves performance gain with light annotation effort. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 152,080 |
1910.06832 | Discriminator optimal transport | Within a broad class of generative adversarial networks, we show that the discriminator optimization process increases a lower bound of the dual cost function for the Wasserstein distance between the target distribution $p$ and the generator distribution $p_G$. It implies that the trained discriminator can approximate optimal transport (OT) from $p_G$ to $p$. Based on some experiments and a bit of OT theory, we propose a discriminator optimal transport (DOT) scheme to improve generated images. We show that it improves the inception score and FID for unconditional GANs trained on CIFAR-10 and STL-10, and for a public pre-trained conditional GAN model trained on ImageNet. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 149,460
2106.05727 | Cooperative Multi-Agent Fairness and Equivariant Policies | We study fairness through the lens of cooperative multi-agent learning. Our work is motivated by empirical evidence that naive maximization of team reward yields unfair outcomes for individual team members. To address fairness in multi-agent contexts, we introduce team fairness, a group-based fairness measure for multi-agent learning. We then prove that it is possible to enforce team fairness during policy optimization by transforming the team's joint policy into an equivariant map. We refer to our multi-agent learning strategy as Fairness through Equivariance (Fair-E) and demonstrate its effectiveness empirically. We then introduce Fairness through Equivariance Regularization (Fair-ER) as a soft-constraint version of Fair-E and show that it reaches higher levels of utility than Fair-E and fairer outcomes than non-equivariant policies. Finally, we present novel findings regarding the fairness-utility trade-off in multi-agent settings; showing that the magnitude of the trade-off is dependent on agent skill. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 240,203 |
2411.10702 | Wireless Resource Allocation with Collaborative Distributed and
Centralized DRL under Control Channel Attacks | In this paper, we consider a wireless resource allocation problem in a cyber-physical system (CPS) where the control channel, carrying resource allocation commands, is subjected to denial-of-service (DoS) attacks. We propose a novel concept of collaborative distributed and centralized (CDC) resource allocation to effectively mitigate the impact of these attacks. To optimize the CDC resource allocation policy, we develop a new CDC-deep reinforcement learning (DRL) algorithm, whereas existing DRL frameworks only formulate either centralized or distributed decision-making problems. Simulation results demonstrate that the CDC-DRL algorithm significantly outperforms state-of-the-art DRL benchmarks, showcasing its ability to address resource allocation problems in large-scale CPSs under control channel attacks. | false | false | false | false | false | false | true | false | false | true | true | false | false | false | false | false | false | false | 508,755 |
1912.03048 | Document Network Embedding: Coping for Missing Content and Missing Links | Searching through networks of documents is an important task. A promising path to improve the performance of information retrieval systems in this context is to leverage dense node and content representations learned with embedding techniques. However, these techniques cannot learn representations for documents that are either isolated or whose content is missing. To tackle this issue, assuming that the topology of the network and the content of the documents correlate, we propose to estimate the missing node representations from the available content representations, and conversely. Inspired by recent advances in machine translation, we detail in this paper how to learn a linear transformation from a set of aligned content and node representations. The projection matrix is efficiently calculated in terms of the singular value decomposition. The usefulness of the proposed method is highlighted by the improved ability to predict the neighborhood of nodes whose links are unobserved based on the projected content representations, and to retrieve similar documents when content is missing, based on the projected node representations. | false | false | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 156,504 |
2210.03594 | Label Propagation with Weak Supervision | Semi-supervised learning and weakly supervised learning are important paradigms that aim to reduce the growing demand for labeled data in current machine learning applications. In this paper, we introduce a novel analysis of the classical label propagation algorithm (LPA) (Zhu & Ghahramani, 2002) that moreover takes advantage of useful prior information, specifically probabilistic hypothesized labels on the unlabeled data. We provide an error bound that exploits both the local geometric properties of the underlying graph and the quality of the prior information. We also propose a framework to incorporate multiple sources of noisy information. In particular, we consider the setting of weak supervision, where our sources of information are weak labelers. We demonstrate the ability of our approach on multiple benchmark weakly supervised classification tasks, showing improvements upon existing semi-supervised and weakly supervised methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 322,102 |
cs/0204056 | Trading Agents for Roaming Users | Some roaming users need services to manipulate autonomous processes. Trading agents running on agent trade servers are used as a case in point. We present a solution that provides agent owners with the means to keep up their desktop environment and maintain their agent trade server processes via a briefcase service. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 537,565
2209.05707 | Robin: A Novel Online Suicidal Text Corpus of Substantial Breadth and
Scale | Suicide is a major public health crisis. With more than 20,000,000 suicide attempts each year, the early detection of suicidal intent has the potential to save hundreds of thousands of lives. Traditional mental health screening methods are time-consuming, costly, and often inaccessible to disadvantaged populations; online detection of suicidal intent using machine learning offers a viable alternative. Here we present Robin, the largest non-keyword generated suicidal corpus to date, consisting of over 1.1 million online forum postings. In addition to its unprecedented size, Robin is specially constructed to include various categories of suicidal text, such as suicide bereavement and flippant references, better enabling models trained on Robin to learn the subtle nuances of text expressing suicidal ideation. Experimental results achieve state-of-the-art performance for the classification of suicidal text, both with traditional methods like logistic regression (F1=0.85), as well as with large-scale pre-trained language models like BERT (F1=0.92). Finally, we release the Robin dataset publicly as a machine learning resource with the potential to drive the next generation of suicidal sentiment research. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 317,176 |
2009.06368 | Searching for a Search Method: Benchmarking Search Algorithms for
Generating NLP Adversarial Examples | We study the behavior of several black-box search algorithms used for generating adversarial examples for natural language processing (NLP) tasks. We perform a fine-grained analysis of three elements relevant to search: search algorithm, search space, and search budget. When new search algorithms are proposed in past work, the attack search space is often modified alongside the search algorithm. Without ablation studies benchmarking the search algorithm change with the search space held constant, one cannot tell if an increase in attack success rate is a result of an improved search algorithm or a less restrictive search space. Additionally, many previous studies fail to properly consider the search algorithms' run-time cost, which is essential for downstream tasks like adversarial training. Our experiments provide a reproducible benchmark of search algorithms across a variety of search spaces and query budgets to guide future research in adversarial NLP. Based on our experiments, we recommend greedy attacks with word importance ranking when under a time constraint or attacking long inputs, and either beam search or particle swarm optimization otherwise. Code implementation shared via https://github.com/QData/TextAttack-Search-Benchmark | false | false | false | false | true | false | true | false | true | false | false | false | true | false | false | false | false | false | 195,614 |
1705.06884 | A Unified Framework for Stochastic Matrix Factorization via Variance
Reduction | We propose a unified framework to speed up the existing stochastic matrix factorization (SMF) algorithms via variance reduction. Our framework is general and it subsumes several well-known SMF formulations in the literature. We perform a non-asymptotic convergence analysis of our framework and derive computational and sample complexities for our algorithm to converge to an $\epsilon$-stationary point in expectation. In addition, extensive experiments for a wide class of SMF formulations demonstrate that our framework consistently yields faster convergence and a more accurate output dictionary vis-\`a-vis state-of-the-art frameworks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 73,698 |
2402.12945 | Stochastic Approximation Approach to Federated Machine Learning | This paper examines Federated Learning (FL) in a Stochastic Approximation (SA) framework. FL is a collaborative way to train neural network models across various participants or clients without centralizing their data. Each client trains a model on its respective data and periodically sends the weights to the server for aggregation. The server aggregates these weights, which are then used by the clients to re-initialize their neural networks and continue training. SA is an iterative algorithm that uses approximate sample gradients and a tapering step size to locate a minimizer of a cost function. In this paper, each client uses a stochastic approximation iterate to update the weights of its neural network. It is shown that the aggregated weights track an autonomous ODE. Numerical simulations are performed and the results are compared with standard algorithms such as FedAvg and FedProx. The proposed algorithm is observed to be robust and to give more reliable estimates of the weights, in particular when the clients' data are not identically distributed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 431,055
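The aggregation loop described above (clients run SA iterates with tapering step sizes, the server averages the returned weights, clients re-initialize from the average) can be sketched as follows. This is a toy illustration, not the paper's algorithm; the function names and the quadratic client losses are assumptions:

```python
def sa_client_update(w, grad_fn, round_idx, local_steps=10):
    """One client's SA pass: w <- w - a_k * g(w) with a tapering step a_k."""
    for k in range(local_steps):
        step = 1.0 / (round_idx * local_steps + k + 1)  # tapering step size
        w = [wi - step * gi for wi, gi in zip(w, grad_fn(w))]
    return w

def server_aggregate(client_weights):
    """FedAvg-style aggregation: plain average of client weight vectors."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

# Toy demo: two clients with quadratic losses centred at different points,
# mimicking non-identically distributed client data.
def make_grad(center):
    return lambda w: [wi - ci for wi, ci in zip(w, center)]

w = [0.0, 0.0]
for r in range(1, 200):
    updates = [sa_client_update(list(w), make_grad(c), r)
               for c in ([1.0, 0.0], [0.0, 1.0])]
    w = server_aggregate(updates)  # clients re-initialize from this average
```

Because the harmonic step sizes sum to infinity while shrinking to zero, the averaged weights settle near [0.5, 0.5], the minimizer of the sum of the two toy losses.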
2402.18281 | Towards Better Understanding of Contrastive Sentence Representation
Learning: A Unified Paradigm for Gradient | Sentence Representation Learning (SRL) is a crucial task in Natural Language Processing (NLP), where contrastive Self-Supervised Learning (SSL) is currently a mainstream approach. However, the reasons behind its remarkable effectiveness remain unclear. Specifically, many studies have investigated the similarities between contrastive and non-contrastive SSL from a theoretical perspective. Such similarities can be verified in classification tasks, where the two approaches achieve comparable performance. But in ranking tasks (i.e., Semantic Textual Similarity (STS) in SRL), contrastive SSL significantly outperforms non-contrastive SSL. Therefore, two questions arise: First, *what commonalities enable various contrastive losses to achieve superior performance in STS?* Second, *how can we make non-contrastive SSL also effective in STS?* To address these questions, we start from the perspective of gradients and discover that four effective contrastive losses can be integrated into a unified paradigm, which depends on three components: the **Gradient Dissipation**, the **Weight**, and the **Ratio**. Then, we conduct an in-depth analysis of the roles these components play in optimization and experimentally demonstrate their significance for model performance. Finally, by adjusting these components, we enable non-contrastive SSL to achieve outstanding performance in STS. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 433,367 |
2009.08088 | Code-switching pre-training for neural machine translation | This paper proposes a new pre-training method, called Code-Switching Pre-training (CSP for short), for Neural Machine Translation (NMT). Unlike traditional pre-training methods, which randomly mask some fragments of the input sentence, the proposed CSP randomly replaces some words in the source sentence with their translation words in the target language. Specifically, we first perform lexicon induction with unsupervised word embedding mapping between the source and target languages, and then randomly replace some words in the input sentence with their translation words according to the extracted translation lexicons. CSP adopts the encoder-decoder framework: its encoder takes the code-mixed sentence as input, and its decoder predicts the replaced fragment of the input sentence. In this way, CSP is able to pre-train the NMT model by explicitly making the most of the cross-lingual alignment information extracted from the source and target monolingual corpora. Additionally, we relieve the pretrain-finetune discrepancy caused by artificial symbols like [mask]. To verify the effectiveness of the proposed method, we conduct extensive experiments on unsupervised and supervised NMT. Experimental results show that CSP achieves significant improvements over baselines without pre-training or with other pre-training methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 196,128
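The replacement step CSP performs can be sketched as below. This is a toy illustration under assumed names; the lexicon and replacement rate are placeholders, not the paper's actual induced lexicon or setup:

```python
import random

def code_switch(tokens, lexicon, rate=0.3, rng=None):
    """Randomly swap source words for their lexicon translations.

    Returns the code-mixed sentence (encoder input) and the list of
    replaced source words (the fragment the decoder must predict).
    """
    rng = rng or random.Random(0)
    mixed, replaced = [], []
    for tok in tokens:
        if tok in lexicon and rng.random() < rate:
            mixed.append(lexicon[tok])   # target-language translation
            replaced.append(tok)         # decoder's prediction target
        else:
            mixed.append(tok)
    return mixed, replaced
```

For example, with a toy English-French lexicon and `rate=1.0`, every lexicon word is swapped: `code_switch(["the", "cat", "sleeps"], {"cat": "chat"}, rate=1.0)` returns `(["the", "chat", "sleeps"], ["cat"])`.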
2410.19609 | OpenWebVoyager: Building Multimodal Web Agents via Iterative Real-World
Exploration, Feedback and Optimization | The rapid development of large language and multimodal models has sparked significant interest in using proprietary models, such as GPT-4o, to develop autonomous agents capable of handling real-world scenarios like web navigation. Although recent open-source efforts have tried to equip agents with the ability to explore environments and continuously improve over time, they build text-only agents in synthetic environments where the reward signals are clearly defined. Such agents struggle to generalize to realistic settings that require multimodal perception abilities and lack ground-truth signals. In this paper, we introduce an open-source framework designed to facilitate the development of multimodal web agents that can autonomously conduct real-world exploration and improve themselves. We first train the base model with imitation learning to gain basic abilities. We then let the agent explore the open web and collect feedback on its trajectories. After that, it further improves its policy by learning from well-performing trajectories judged by another general-purpose model. This exploration-feedback-optimization cycle can continue for several iterations. Experimental results show that our web agent successfully improves itself after each iteration, demonstrating strong performance across multiple test sets. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 502,386
2411.13981 | On the Fairness, Diversity and Reliability of Text-to-Image Generative
Models | The widespread availability of multimodal generative models has sparked critical discussions on their fairness, reliability, and potential for misuse. While text-to-image models can produce high-fidelity, user-guided images, they also exhibit unpredictable behavior and vulnerabilities, which can be exploited to manipulate class or concept representations. To address this, we propose an evaluation framework designed to assess model reliability through their responses to globally- and locally-applied `semantic' perturbations in the embedding space, pinpointing inputs that trigger unreliable behavior. Our approach offers deeper insights into two essential aspects: (i) generative diversity, evaluating the breadth of visual representations for learned concepts, and (ii) generative fairness, examining how removing concepts from input prompts affects semantic guidance. Beyond these evaluations, our method lays the groundwork for detecting unreliable, bias-injected models and retrieval of bias provenance. We will release our code. Keywords: Fairness, Reliability, AI Ethics, Bias, Text-to-Image Models | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 509,996 |
2211.14752 | Differentiable Meta Multigraph Search with Partial Message Propagation
on Heterogeneous Information Networks | Heterogeneous information networks (HINs) are widely employed for describing real-world data with intricate entities and relationships. To automatically utilize their semantic information, graph neural architecture search has recently been developed for various tasks on HINs. However, existing works suffer from instability and inflexibility. To address these issues, we propose a novel method called Partial Message Meta Multigraph search (PMMM) to automatically optimize the neural architecture design on HINs. Specifically, to learn how graph neural networks (GNNs) propagate messages along various types of edges, PMMM adopts an efficient differentiable framework to search for a meaningful meta multigraph, which can capture more flexible and complex semantic relations than a meta graph. The differentiable search typically suffers from performance instability, so we further propose a stable algorithm called partial message search to ensure that the searched meta multigraph consistently surpasses the manually designed meta-structures, i.e., meta-paths. Extensive experiments on six benchmark datasets over two representative tasks, including node classification and recommendation, demonstrate the effectiveness of the proposed method. Our approach outperforms the state-of-the-art heterogeneous GNNs, finds out meaningful meta multigraphs, and is significantly more stable. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 332,980
2201.01615 | Lawin Transformer: Improving Semantic Segmentation Transformer with
Multi-Scale Representations via Large Window Attention | Multi-scale representations are crucial for semantic segmentation. The community has witnessed the flourishing of semantic segmentation convolutional neural networks (CNNs) exploiting multi-scale contextual information. Motivated by the power of the vision transformer (ViT) in image classification, several semantic segmentation ViTs have recently been proposed, most of them attaining impressive results but at the cost of computational efficiency. In this paper, we succeed in introducing multi-scale representations into semantic segmentation ViTs via a window attention mechanism, further improving both performance and efficiency. To this end, we introduce large window attention, which allows the local window to query a larger area of the context window at only a small computational overhead. By regulating the ratio of the context area to the query area, we enable the $\textit{large window attention}$ to capture contextual information at multiple scales. Moreover, the framework of spatial pyramid pooling is adopted to collaborate with $\textit{the large window attention}$, which yields a novel decoder named $\textbf{la}$rge $\textbf{win}$dow attention spatial pyramid pooling (LawinASPP) for semantic segmentation ViTs. Our resulting ViT, Lawin Transformer, is composed of an efficient hierarchical vision transformer (HVT) as the encoder and a LawinASPP as the decoder. The empirical results demonstrate that Lawin Transformer offers improved efficiency compared to existing methods. Lawin Transformer further sets new state-of-the-art performance on the Cityscapes (84.4% mIoU), ADE20K (56.2% mIoU) and COCO-Stuff datasets. The code will be released at https://github.com/yan-hao-tian/lawin | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 274,303
2101.11948 | Choice modelling in the age of machine learning -- discussion paper | Since its inception, the choice modelling field has been dominated by theory-driven modelling approaches. Machine learning offers an alternative data-driven approach for modelling choice behaviour and is increasingly drawing interest in our field. Cross-pollination of machine learning models, techniques and practices could help overcome problems and limitations encountered in the current theory-driven modelling paradigm, such as subjective labour-intensive search processes for model selection, and the inability to work with text and image data. However, despite the potential benefits of using the advances of machine learning to improve choice modelling practices, the choice modelling field has been hesitant to embrace machine learning. This discussion paper aims to consolidate knowledge on the use of machine learning models, techniques and practices for choice modelling, and discuss their potential. Thereby, we hope not only to make the case that further integration of machine learning in choice modelling is beneficial, but also to further facilitate it. To this end, we clarify the similarities and differences between the two modelling paradigms; we review the use of machine learning for choice modelling; and we explore areas of opportunities for embracing machine learning models and techniques to improve our practices. To conclude this discussion paper, we put forward a set of research questions which must be addressed to better understand if and how machine learning can benefit choice modelling. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 217,436 |
2403.04602 | Minimum-Time Planar Paths with up to Two Constant Acceleration Inputs
and $L_2$ Velocity and Acceleration Constraints | Given starting and ending positions and velocities, $L_2$ bounds on the acceleration and velocity, and the restriction to no more than two constant control inputs, this paper provides routines to compute the minimal-time path. Closed form solutions are provided for reaching a position in minimum time with and without a velocity bound, and for stopping at the goal position. A numeric solver is used to reach a goal position and velocity with no more than two constant control inputs. If a cruising phase at the terminal velocity is needed, this requires solving a non-linear equation with a single parameter. Code is provided on GitHub at https://github.com/RoboticSwarmControl/MinTimeL2pathsConstraints. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 435,659 |
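A closed-form flavour of such computations appears already in the simpler 1D rest-to-rest special case (scalar bounds rather than the paper's planar $L_2$ constraints): the minimum-time profile is bang-bang, triangular when the velocity bound is never reached and trapezoidal otherwise. A sketch, with the function name being an assumption:

```python
import math

def min_time_rest_to_rest(d, a_max, v_max):
    """Minimum time for a double integrator to travel distance d >= 0,
    starting and ending at rest, with |a| <= a_max and |v| <= v_max."""
    d_tri = v_max * v_max / a_max  # distance covered if v_max is just reached
    if d <= d_tri:
        # Triangular profile: accelerate half-way, then decelerate.
        return 2.0 * math.sqrt(d / a_max)
    # Trapezoidal profile: accelerate to v_max, cruise, decelerate.
    return v_max / a_max + d / v_max
```

For example, `min_time_rest_to_rest(1.0, 1.0, 10.0)` gives the triangular time `2.0`, while `min_time_rest_to_rest(10.0, 1.0, 1.0)` gives the trapezoidal time `11.0`.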
2401.09036 | IRS-Enhanced Anti-Jamming Precoding Against DISCO Physical Layer Jamming
Attacks | Illegitimate intelligent reflective surfaces (IRSs) can pose significant physical layer security risks to multi-user multiple-input single-output (MU-MISO) systems. Recently, a DISCO approach has been proposed in which an illegitimate IRS with random and time-varying reflection coefficients, referred to as a "disco" IRS (DIRS), is used to launch such attacks. Such a DIRS can attack MU-MISO systems without relying on either jamming power or channel state information (CSI), and classical anti-jamming techniques are ineffective against DIRS-based fully-passive jammers (DIRS-based FPJs). In this paper, we propose an IRS-enhanced anti-jamming precoder against DIRS-based FPJs that requires only statistical rather than instantaneous CSI of the DIRS-jammed channels. Specifically, a legitimate IRS is introduced to reduce the strength of the DIRS-based jamming relative to the transmit signals at a legitimate user (LU). In addition, the active beamforming at the legitimate access point (AP) is designed to maximize the signal-to-jamming-plus-noise ratios (SJNRs). Numerical results are presented to evaluate the effectiveness of the proposed IRS-enhanced anti-jamming precoder against DIRS-based FPJs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 422,124
1207.0337 | The DoF of the K-user Interference Channel with a Cognitive Relay | It was shown recently that the 2-user interference channel with a cognitive relay (IC-CR) has full degrees of freedom (DoF) almost surely, that is, 2 DoF. The purpose of this work is to check whether the DoF of the $K$-user IC-CR, consisting of $K$ user pairs and a cognitive relay, follow as a straightforward extension of the 2-user case. As it turns out, this is not the case. The $K$-user IC-CR is shown to have $2K/3$ DoF if $K>2$ when the channel is time varying, achievable using interference alignment. Thus, while the basic $K$-user IC with time-varying channel coefficients has 1/2 DoF per user for all $K$, the $K$-user IC-CR with time-varying channels has 1 DoF per user if $K=2$ and 2/3 DoF per user if $K>2$. Furthermore, the DoF region of the 3-user IC-CR with constant channels is characterized using interference neutralization, and a new upper bound on the sum-capacity of the 2-user IC-CR is given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 17,162
1907.07836 | Multi-year Long-term Load Forecast for Area Distribution Feeders based
on Selective Sequence Learning | Long-term load forecast (LTLF) for area distribution feeders is one of the most critical tasks frequently performed in electric distribution utility companies. For a specific planning area, cost-effective system upgrades can only be planned out based on accurate feeder LTLF results. In our previous research, we established a unique sequence prediction method which has the tremendous advantage of combining area top-down, feeder bottom-up and multi-year historical data all together for forecasting, achieving superior performance over various traditional methods in real-world tests. However, the previous method only focused on forecasting the next year. In our current work, we significantly improved this method: the forecast can now be extended to a multi-year window in the future; unsupervised learning techniques are used to group feeders by their load composition features to improve accuracy; we also propose a novel selective sequence learning mechanism which uses a Gated Recurrent Unit network to not only learn how to predict sequence values but also learn to select the best-performing sequential configuration for each individual feeder. The proposed method was tested on an actual urban distribution system in West Canada. It was compared with traditional methods and our previous sequence prediction method. It demonstrates the best forecasting performance as well as the possibility of using sequence prediction models for multi-year component-level load forecast. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 138,979
2104.05828 | Evidence-based Prescriptive Analytics, CAUSAL Digital Twin and a
Learning Estimation Algorithm | Evidence-based Prescriptive Analytics (EbPA) is necessary to determine optimal operational set-points that will improve business productivity. EbPA results from what-if analysis and counterfactual experimentation on CAUSAL Digital Twins (CDTs) that quantify cause-effect relationships in the DYNAMICS of a system of connected assets. We describe the basics of Causality and Causal Graphs and develop a Learning Causal Digital Twin (LCDT) solution; our algorithm uses a simple recurrent neural network with some innovative modifications incorporating Causal Graph simulation. Since LCDT is a learning digital twin where parameters are learned online in real-time with minimal pre-configuration, the work of deploying digital twins will be significantly simplified. A proof-of-principle of LCDT was conducted using real vibration data from a system of bearings; results of causal factor estimation, what-if analysis study and counterfactual experiment are very encouraging. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 229,860 |
1910.11932 | Exploring Author Context for Detecting Intended vs Perceived Sarcasm | We investigate the impact of using author context on textual sarcasm detection. We define author context as the embedded representation of their historical posts on Twitter and suggest neural models that extract these representations. We experiment with two tweet datasets, one labelled manually for sarcasm, and the other via tag-based distant supervision. We achieve state-of-the-art performance on the second dataset, but not on the one labelled manually, indicating a difference between intended sarcasm, captured by distant supervision, and perceived sarcasm, captured by manual labelling. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 150,911 |
2012.03709 | Reference Knowledgeable Network for Machine Reading Comprehension | Multi-choice Machine Reading Comprehension (MRC) is a challenging task that requires models to select the most appropriate answer from a set of candidates given a passage and question. Most existing research focuses on the modeling of specific tasks or complex networks, without explicitly referring to relevant and credible external knowledge sources, which could greatly compensate for the deficiencies of the given passage. Thus we propose a novel reference-based knowledge enhancement model called Reference Knowledgeable Network (RekNet), which simulates human reading strategies to refine critical information from the passage and quote explicit knowledge when necessary. In detail, RekNet refines fine-grained critical information and defines it as Reference Span, then quotes explicit knowledge quadruples using the co-occurrence information of the Reference Span and candidates. The proposed RekNet is evaluated on three multi-choice MRC benchmarks: RACE, DREAM and Cosmos QA, obtaining consistent and remarkable performance improvement with an observable statistical significance level over strong baselines. Our code is available at https://github.com/Yilin1111/RekNet. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 210,220
2204.02121 | MetaAudio: A Few-Shot Audio Classification Benchmark | Currently available benchmarks for few-shot learning (machine learning with few training examples) are limited in the domains they cover, primarily focusing on image classification. This work aims to alleviate this reliance on image-based benchmarks by offering the first comprehensive, public and fully reproducible audio based alternative, covering a variety of sound domains and experimental settings. We compare the few-shot classification performance of a variety of techniques on seven audio datasets (spanning environmental sounds to human-speech). Extending this, we carry out in-depth analyses of joint training (where all datasets are used during training) and cross-dataset adaptation protocols, establishing the possibility of a generalised audio few-shot classification algorithm. Our experimentation shows gradient-based meta-learning methods such as MAML and Meta-Curvature consistently outperform both metric and baseline methods. We also demonstrate that the joint training routine helps overall generalisation for the environmental sound databases included, as well as being a somewhat-effective method of tackling the cross-dataset/domain setting. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 289,834 |
2312.15526 | Aspect category learning and sentimental analysis using weakly
supervised learning | The surge of e-commerce reviews has presented a challenge in manually annotating the vast volume of reviews to comprehend their underlying aspects and sentiments. This research focuses on leveraging weakly supervised learning (WSL) to tackle aspect category learning and the sentiment classification of reviews. Our approach involves the generation of labels for both aspects and sentiments, employing the Snorkel WSL framework, which incorporates aspect terms, review sentiment scores, and review ratings as sources of weak signals. This strategy significantly reduces the laborious labeling effort required for processing such extensive datasets. In this study, we deployed hybrid models, namely BiLSTM, CNN-BiLSTM, and CNN-LSTM, which harness multiple inputs, including review text, aspect terms, and ratings. Our proposed model employs two distinct loss functions: binary cross entropy with sigmoid activation for multi-label classification, enabling us to learn aspect labels such as Quality, Usability, Service, Size, and Price, and categorical cross entropy with softmax activation for multi-class classification. Subsequently, we evaluate the performance metrics of the three implemented models, including macro F1 score and macro precision. The CNN-BiLSTM model attained F1 scores of 0.78 and 0.79 on aspect and sentiment identification, respectively. The outcomes of this research are poised to make a substantial contribution to e-commerce platforms, offering an efficient and automated means to label and analyze vast troves of user reviews. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 418,040
2103.02937 | Visual Question Answering: which investigated applications? | Visual Question Answering (VQA) is an extremely stimulating and challenging research area where Computer Vision (CV) and Natural Language Processing (NLP) have recently met. In image captioning and video summarization, the semantic information is completely contained in still images or video dynamics, and it only has to be mined and expressed in a human-consistent way. In contrast, in VQA the semantic information in the same media must be compared with the semantics implied by a question expressed in natural language, doubling the artificial-intelligence-related effort. Some recent surveys about VQA approaches have focused on methods underlying either the image-related processing or the verbal-related one, or on the way to consistently fuse the conveyed information. Possible applications are only suggested, and, in fact, most cited works rely on general-purpose datasets that are used to assess the building blocks of a VQA system. This paper instead considers proposals that focus on real-world applications, possibly using as benchmarks suitable data bound to the application domain. The paper also reports on some recent challenges in VQA research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,120
1006.0448 | Emergence of Complex-Like Cells in a Temporal Product Network with Local
Receptive Fields | We introduce a new neural architecture and an unsupervised algorithm for learning invariant representations from temporal sequence of images. The system uses two groups of complex cells whose outputs are combined multiplicatively: one that represents the content of the image, constrained to be constant over several consecutive frames, and one that represents the precise location of features, which is allowed to vary over time but constrained to be sparse. The architecture uses an encoder to extract features, and a decoder to reconstruct the input from the features. The method was applied to patches extracted from consecutive movie frames and produces orientation and frequency selective units analogous to the complex cells in V1. An extension of the method is proposed to train a network composed of units with local receptive field spread over a large image of arbitrary size. A layer of complex cells, subject to sparsity constraints, pool feature units over overlapping local neighborhoods, which causes the feature units to organize themselves into pinwheel patterns of orientation-selective receptive fields, similar to those observed in the mammalian visual cortex. A feed-forward encoder efficiently computes the feature representation of full images. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 6,653 |
2311.08386 | Capacity of Summation over a Symmetric Quantum Erasure MAC with
Partially Replicated Inputs | The optimal quantum communication cost of computing a classical sum of distributed sources is studied over a quantum erasure multiple access channel (QEMAC). K classical messages comprised of finite-field symbols are distributed across $S$ servers, who also share quantum entanglement in advance. Each server $s\in[S]$ manipulates its quantum subsystem $\mathcal{Q}_s$ according to its own available classical messages and sends $\mathcal{Q}_s$ to the receiver who then computes the sum of the messages based on a joint quantum measurement. The download cost from Server $s\in [S]$ is the logarithm of the dimension of $\mathcal{Q}_s$. The rate $R$ is defined as the number of instances of the sum computed at the receiver, divided by the total download cost from all the servers. The main focus is on the symmetric setting with $K= {S \choose \alpha} $ messages where each message is replicated among a unique subset of $\alpha$ servers, and the answers from any $\beta$ servers may be erased. If no entanglement is initially available to the receiver, then we show that the capacity (maximal rate) is precisely $C= \max\left\{ \min \left\{ \frac{2(\alpha-\beta)}{S}, \frac{S-2\beta}{S} \right\}, \frac{\alpha-\beta}{S} \right\}$. The capacity with arbitrary levels of prior entanglement $(\Delta_0)$ between the $S$ data-servers and the receiver is also characterized, by including an auxiliary server (Server $0$) that has no classical data, so that the communication cost from Server $0$ is a proxy for the amount of receiver-side entanglement that is available in advance. The challenge on the converse side resides in the optimal application of the weak monotonicity property, while the achievability combines ideas from classical network coding and treating qudits as classical dits, as well as new constructions based on the $N$-sum box abstraction that rely on absolutely maximally entangled quantum states. 
| false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 407,706 |
2012.06815 | Alpha-Refine: Boosting Tracking Performance by Precise Bounding Box
Estimation | Visual object tracking aims to precisely estimate the bounding box for the given target, which is a challenging problem due to factors such as deformation and occlusion. Many recent trackers adopt a multiple-stage tracking strategy to improve the quality of bounding box estimation. These methods first coarsely locate the target and then refine the initial prediction in the following stages. However, existing approaches still suffer from limited precision, and the coupling of different stages severely restricts the method's transferability. This work proposes a novel, flexible, and accurate refinement module called Alpha-Refine (AR), which can significantly improve the base trackers' box estimation quality. By exploring a series of design options, we conclude that the key to successful refinement is extracting and maintaining detailed spatial information as much as possible. Following this principle, Alpha-Refine adopts a pixel-wise correlation, a corner prediction head, and an auxiliary mask head as its core components. Comprehensive experiments on the TrackingNet, LaSOT, GOT-10K, and VOT2020 benchmarks with multiple base trackers show that our approach significantly improves the base trackers' performance with little extra latency. The proposed Alpha-Refine method leads to a series of strengthened trackers, among which the ARSiamRPN (AR-strengthened SiamRPNpp) and the ARDiMP50 (AR-strengthened DiMP50) achieve a good efficiency-precision trade-off, while the ARDiMPsuper (AR-strengthened DiMP-super) achieves very competitive performance at real-time speed. Code and pretrained models are available at https://github.com/MasterBin-IIAU/AlphaRefine. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 211,235
2009.02798 | CSI-Based Multi-Antenna and Multi-Point Indoor Positioning Using
Probability Fusion | Channel state information (CSI)-based fingerprinting via neural networks (NNs) is a promising approach to enable accurate indoor and outdoor positioning of user equipments (UEs), even under challenging propagation conditions. In this paper, we propose a positioning pipeline for wireless LAN MIMO-OFDM systems which uses uplink CSI measurements obtained from one or more unsynchronized access points (APs). For each AP receiver, novel features are first extracted from the CSI that are robust to system impairments arising in real-world transceivers. These features are the inputs to a NN that extracts a probability map indicating the likelihood of a UE being at a given grid point. The NN output is then fused across multiple APs to provide a final position estimate. We provide experimental results with real-world indoor measurements under line-of-sight (LoS) and non-LoS propagation conditions for an 80MHz bandwidth IEEE 802.11ac system using a two-antenna transmit UE and two AP receivers each with four antennas. Our approach is shown to achieve centimeter-level median distance error, an order of magnitude improvement over a conventional baseline. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 194,664 |
1805.09409 | Non-Gaussian Hyperplane Tessellations and Robust One-Bit Compressed
Sensing | We show that a tessellation generated by a small number of random affine hyperplanes can be used to approximate Euclidean distances between any two points in an arbitrary bounded set $T$, where the random hyperplanes are generated by subgaussian or heavy-tailed normal vectors and uniformly distributed shifts. We derive quantitative bounds on the number of hyperplanes needed for constructing such tessellations in terms of natural metric complexity measures of $T$ and the desired approximation error. Our work extends significantly prior results in this direction, which were restricted to Gaussian hyperplane tessellations of subsets of the Euclidean unit sphere. As an application, we obtain new reconstruction results in memoryless one-bit compressed sensing with non-Gaussian measurement matrices. We show that by quantizing at uniformly distributed thresholds, it is possible to accurately reconstruct low-complexity signals from a small number of one-bit quantized measurements, even if the measurement vectors are drawn from a heavy-tailed distribution. Our reconstruction results are uniform in nature and robust in the presence of pre-quantization noise on the analog measurements as well as adversarial bit corruptions in the quantization process. Moreover we show that if the measurement matrix is subgaussian then accurate recovery can be achieved via a convex program. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 98,413 |
2011.11896 | A Data-Fusion-Assisted Telemetry Layer for Autonomous Optical Networks | For further improving the capacity and reliability of optical networks, a closed-loop autonomous architecture is preferred. Considering a large number of optical components in an optical network and many digital signal processing modules in each optical transceiver, massive real-time data can be collected. However, for a traditional monitoring structure, collecting, storing and processing a large size of data are challenging tasks. Moreover, strong correlations and similarities between data from different sources and regions are not properly considered, which may limit function extension and accuracy improvement. To address abovementioned issues, a data-fusion-assisted telemetry layer between the physical layer and control layer is proposed in this paper. The data fusion methodologies are elaborated on three different levels: Source Level, Space Level and Model Level. For each level, various data fusion algorithms are introduced and relevant works are reviewed. In addition, proof-of-concept use cases for each level are provided through simulations, where the benefits of the data-fusion-assisted telemetry layer are shown. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 207,984 |
2402.02986 | A Safety-Adapted Loss for Pedestrian Detection in Automated Driving | In safety-critical domains like automated driving (AD), errors by the object detector may endanger pedestrians and other vulnerable road users (VRU). As common evaluation metrics are not an adequate safety indicator, recent works employ approaches to identify safety-critical VRU and back-annotate the risk to the object detector. However, those approaches do not consider the safety factor in the deep neural network (DNN) training process. Thus, state-of-the-art DNN penalizes all misdetections equally irrespective of their criticality. Subsequently, to mitigate the occurrence of critical failure cases, i.e., false negatives, a safety-aware training strategy might be required to enhance the detection performance for critical pedestrians. In this paper, we propose a novel safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training. We exploit the reachability set-based time-to-collision (TTC-RSB) metric from the motion domain along with distance information to account for the worst-case threat quantifying the criticality. Our evaluation results using RetinaNet and FCOS on the nuScenes dataset demonstrate that training the models with our safety-aware loss function mitigates the misdetection of critical pedestrians without sacrificing performance for the general case, i.e., pedestrians outside the safety-critical zone. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 426,814 |
2111.04198 | TaCL: Improving BERT Pre-training with Token-aware Contrastive Learning | Masked language models (MLMs) such as BERT and RoBERTa have revolutionized the field of Natural Language Understanding in the past few years. However, existing pre-trained MLMs often output an anisotropic distribution of token representations that occupies a narrow subset of the entire representation space. Such token representations are not ideal, especially for tasks that demand discriminative semantic meanings of distinct tokens. In this work, we propose TaCL (Token-aware Contrastive Learning), a novel continual pre-training approach that encourages BERT to learn an isotropic and discriminative distribution of token representations. TaCL is fully unsupervised and requires no additional data. We extensively test our approach on a wide range of English and Chinese benchmarks. The results show that TaCL brings consistent and notable improvements over the original BERT model. Furthermore, we conduct detailed analysis to reveal the merits and inner-workings of our approach. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 265,417 |
2104.10273 | Disentangled Face Identity Representations for joint 3D Face Recognition
and Expression Neutralisation | In this paper, we propose a new deep learning-based approach for disentangling face identity representations from expressive 3D faces. Given a 3D face, our approach not only extracts a disentangled identity representation but also generates a realistic 3D face with a neutral expression while predicting its identity. The proposed network consists of three components; (1) a Graph Convolutional Autoencoder (GCA) to encode the 3D faces into latent representations, (2) a Generative Adversarial Network (GAN) that translates the latent representations of expressive faces into those of neutral faces, (3) and an identity recognition sub-network taking advantage of the neutralized latent representations for 3D face recognition. The whole network is trained in an end-to-end manner. Experiments are conducted on three publicly available datasets showing the effectiveness of the proposed approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 231,516 |
2409.15546 | A Novel Framework for the Automated Characterization of Gram-Stained
Blood Culture Slides Using a Large-Scale Vision Transformer | This study introduces a new framework for the artificial intelligence-assisted characterization of Gram-stained whole-slide images (WSIs). As a test for the diagnosis of bloodstream infections, Gram stains provide critical early data to inform patient treatment. Rapid and reliable analysis of Gram stains has been shown to be positively associated with better clinical outcomes, underscoring the need for improved tools to automate Gram stain analysis. In this work, we developed a novel transformer-based model for Gram-stained WSI classification, which is more scalable to large datasets than previous convolutional neural network (CNN) -based methods as it does not require patch-level manual annotations. We also introduce a large Gram stain dataset from Dartmouth-Hitchcock Medical Center (Lebanon, New Hampshire, USA) to evaluate our model, exploring the classification of five major categories of Gram-stained WSIs: Gram-positive cocci in clusters, Gram-positive cocci in pairs/chains, Gram-positive rods, Gram-negative rods, and slides with no bacteria. Our model achieves a classification accuracy of 0.858 (95% CI: 0.805, 0.905) and an AUC of 0.952 (95% CI: 0.922, 0.976) using five-fold nested cross-validation on our 475-slide dataset, demonstrating the potential of large-scale transformer models for Gram stain classification. We further demonstrate the generalizability of our trained model, which achieves strong performance on external datasets without additional fine-tuning. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 490,952 |
1910.14492 | Structured exploration in the finite horizon linear quadratic dual
control problem | This paper presents a novel approach to synthesize dual controllers for unknown linear time-invariant systems with the tasks of optimizing a quadratic cost while reducing the uncertainty. To this end, a synthesis problem is defined where the feedback law has to simultaneously gain knowledge of the system and robustly optimize the cost. By framing the problem in a finite horizon setting, the trade-offs arising when the tasks include both identification and control are formally captured in the optimization problem. Results show that efficient exploration strategies are achieved when the structure of the problem is exploited. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 151,664 |
2111.13244 | Going Grayscale: The Road to Understanding and Improving Unlearnable
Examples | Recent work has shown that imperceptible perturbations can be applied to craft unlearnable examples (ULEs), i.e. images whose content cannot be used to improve a classifier during training. In this paper, we reveal the road that researchers should follow for understanding ULEs and improving ULEs as they were originally formulated (ULEOs). The paper makes four contributions. First, we show that ULEOs exploit color and, consequently, their effects can be mitigated by simple grayscale pre-filtering, without resorting to adversarial training. Second, we propose an extension to ULEOs, which is called ULEO-GrayAugs, that forces the generated ULEs away from channel-wise color perturbations by making use of grayscale knowledge and data augmentations during optimization. Third, we show that ULEOs generated using Multi-Layer Perceptrons (MLPs) are effective in the case of complex Convolutional Neural Network (CNN) classifiers, suggesting that CNNs suffer specific vulnerability to ULEs. Fourth, we demonstrate that when a classifier is trained on ULEOs, adversarial training will prevent a drop in accuracy measured both on clean images and on adversarial images. Taken together, our contributions represent a substantial advance in the state of art of unlearnable examples, but also reveal important characteristics of their behavior that must be better understood in order to achieve further improvements. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 268,235 |