id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1802.03608 | MOEA/D with Angle-based Constrained Dominance Principle for Constrained Multi-objective Optimization Problems | This paper proposes a novel constraint-handling mechanism named the angle-based constrained dominance principle (ACDP), embedded in a decomposition-based multi-objective evolutionary algorithm (MOEA/D), to solve constrained multi-objective optimization problems (CMOPs). To maintain the diversity of the working population, ACDP uses the angle information of solutions to adjust their dominance relation during the evolutionary process. This paper uses 14 benchmark instances to evaluate the performance of MOEA/D with ACDP (MOEA/D-ACDP). Additionally, an engineering optimization problem (the I-beam optimization problem) is solved. The proposed MOEA/D-ACDP and four other decomposition-based CMOEAs, including C-MOEA/D, MOEA/D-CDP, MOEA/D-Epsilon and MOEA/D-SR, are tested on the above benchmarks and the engineering application. The experimental results show that MOEA/D-ACDP is significantly better than the other four CMOEAs on these test instances and the real-world case, which indicates that ACDP is more effective for solving CMOPs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 90,021 |
2409.16253 | Learning To Help: Training Models to Assist Legacy Devices | Machine learning models implemented in hardware on physical devices may be deployed for a long time. The computational abilities of the device may be limited and become outdated with respect to newer improvements. Because of the size of ML models, offloading some computation (e.g. to an edge cloud) can help such legacy devices. We cast this problem in the framework of learning with abstention (LWA) in which the expert (edge) must be trained to assist the client (device). Prior work on LWA trains the client assuming the edge is either an oracle or a human expert. In this work, we formalize the reverse problem of training the expert for a fixed (legacy) client. As in LWA, the client uses a rejection rule to decide when to offload inference to the expert (at a cost). We find the Bayes-optimal rule, prove a generalization bound, and find a consistent surrogate loss function. Empirical results show that our framework outperforms confidence-based rejection rules. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 491,266 |
2005.06728 | OD-SGD: One-step Delay Stochastic Gradient Descent for Distributed Training | The training of modern deep neural networks calls for large amounts of computation, which is often provided by GPUs or other specific accelerators. To scale out and achieve faster training, two update algorithms are mainly applied in the distributed training process: the Synchronous SGD algorithm (SSGD) and the Asynchronous SGD algorithm (ASGD). SSGD obtains a good convergence point, but its training speed is slowed down by the synchronous barrier. ASGD has faster training speed, but its convergence point is lower than SSGD's. To combine the strengths of SSGD and ASGD, we propose a novel technique named One-step Delay SGD (OD-SGD), which aims to achieve a convergence point similar to SSGD's at a training speed similar to ASGD's. To the best of our knowledge, we make the first attempt to combine the features of SSGD and ASGD to improve distributed training performance. Each iteration of OD-SGD contains a global update in the parameter server node and local updates in the worker nodes; the local update is introduced to update and compensate for the delayed local weights. We evaluate the proposed algorithm on the MNIST, CIFAR-10 and ImageNet datasets. Experimental results show that OD-SGD obtains similar or even slightly better accuracy than SSGD, while its training speed is much faster and even exceeds that of ASGD. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 177,100 |
1511.04901 | Coarse-to-fine Face Alignment with Multi-Scale Local Patch Regression | Facial landmark localization plays an important role in face recognition and analysis applications. In this paper, we give a brief introduction to a coarse-to-fine pipeline with neural networks and sequential regression. First, a global convolutional network is applied to the holistic facial image to give an initial landmark prediction. A pyramid of multi-scale local image patches is then cropped and fed to a new network for each landmark to refine the prediction. As the refinement network outputs a more accurate position estimate than its input, this procedure can be repeated several times until the estimate converges. We evaluate our system on the 300-W dataset [11], where it outperforms recent state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 48,965 |
2311.11564 | KBioXLM: A Knowledge-anchored Biomedical Multilingual Pretrained Language Model | Most biomedical pretrained language models are monolingual and cannot handle the growing cross-lingual requirements. The scarcity of non-English domain corpora, not to mention parallel data, poses a significant hurdle in training multilingual biomedical models. Since knowledge forms the core of domain-specific corpora and can be translated into various languages accurately, we propose a model called KBioXLM, which transforms the multilingual pretrained model XLM-R into the biomedical domain using a knowledge-anchored approach. We construct a biomedical multilingual corpus by incorporating knowledge alignments at three granularities (entity, fact, and passage levels) into monolingual corpora. Then we design three corresponding training tasks (entity masking, relation masking, and passage relation prediction) and continue training on top of the XLM-R model to enhance its cross-lingual ability in the domain. To validate the effectiveness of our model, we translate the English benchmarks of multiple tasks into Chinese. Experimental results demonstrate that our model significantly outperforms monolingual and multilingual pretrained models in cross-lingual zero-shot and few-shot scenarios, achieving improvements of up to 10+ points. Our code is publicly available at https://github.com/ngwlh-gl/KBioXLM. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 408,994 |
2410.11829 | MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding | Despite significant advancements in Multimodal Large Language Models (MLLMs) for understanding complex human intentions through cross-modal interactions, capturing intricate image details remains challenging. Previous methods integrating multiple vision encoders to enhance visual detail introduce redundancy and computational overhead. We observe that most MLLMs utilize only the last-layer feature map of the vision encoder for visual representation, neglecting the rich fine-grained information in shallow feature maps. To address this issue, we propose MMFuser, a simple yet effective multi-layer feature fuser that efficiently integrates deep and shallow features from Vision Transformers (ViTs). Specifically, it leverages semantically aligned deep features as queries to dynamically extract missing details from shallow features, thus preserving semantic alignment while enriching the representation with fine-grained information. Applied to the LLaVA-1.5 model, MMFuser achieves significant improvements in visual representation and benchmark performance, providing a more flexible and lightweight solution compared to multi-encoder ensemble methods. The code and model have been released at https://github.com/yuecao0119/MMFuser. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 498,734 |
1102.0918 | Incentive Compatible Influence Maximization in Social Networks and Application to Viral Marketing | Information diffusion and influence maximization are important and extensively studied problems in social networks. Various models and algorithms have been proposed in the literature in the context of the influence maximization problem. A crucial assumption in all these studies is that the influence probabilities are known to the social planner. This assumption is unrealistic since the influence probabilities are usually private information of the individual agents and strategic agents may not reveal them truthfully. Moreover, the influence probabilities could vary significantly with the type of the information flowing in the network and the time at which the information is propagating in the network. In this paper, we use a mechanism design approach to elicit influence probabilities truthfully from the agents. We first work with a simple model, the influencer model, where we assume that each user knows the level of influence she has on her neighbors but this is private information. In the second model, the influencer-influencee model, which is more realistic, we determine influence probabilities by combining the probability values reported by the influencers and influencees. In the context of the first model, we present how VCG (Vickrey-Clarke-Groves) mechanisms could be used for truthfully eliciting the influence probabilities. Our main contribution is to design a scoring rule based mechanism in the context of the influencer-influencee model. In particular, we show the incentive compatibility of the mechanisms when the scoring rules are proper and propose a reverse weighted scoring rule based mechanism as an appropriate mechanism to use. We also discuss briefly the implementation of such a mechanism in viral marketing applications. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 9,030 |
2305.18089 | Inverse Protein Folding Using Deep Bayesian Optimization | Inverse protein folding -- the task of predicting a protein sequence from its backbone atom coordinates -- has surfaced as an important problem in the "top down", de novo design of proteins. Contemporary approaches have cast this problem as a conditional generative modelling problem, where a large generative model over protein sequences is conditioned on the backbone. While these generative models very rapidly produce promising sequences, independent draws from generative models may fail to produce sequences that reliably fold to the correct backbone. Furthermore, it is challenging to adapt pure generative approaches to other settings, e.g., when constraints exist. In this paper, we cast the problem of improving generated inverse folds as an optimization problem that we solve using recent advances in "deep" or "latent space" Bayesian optimization. Our approach consistently produces protein sequences with greatly reduced structural error to the target backbone structure as measured by TM score and RMSD while using fewer computational resources. Additionally, we demonstrate other advantages of an optimization-based approach to the problem, such as the ability to handle constraints. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 368,854 |
2309.12482 | State2Explanation: Concept-Based Explanations to Benefit Agent Learning and User Understanding | As more non-AI experts use complex AI systems for daily tasks, there has been an increasing effort to develop methods that produce explanations of AI decision making that are understandable by non-AI experts. Towards this effort, leveraging higher-level concepts and producing concept-based explanations have become a popular method. Most concept-based explanations have been developed for classification techniques, and we posit that the few existing methods for sequential decision making are limited in scope. In this work, we first contribute desiderata for defining concepts in sequential decision making settings. Additionally, inspired by the Protégé Effect, which states that explaining knowledge to others often reinforces one's own learning, we explore how concept-based explanations of an RL agent's decision making can in turn improve the agent's learning rate, as well as improve end-user understanding of the agent's decision making. To this end, we contribute a unified framework, State2Explanation (S2E), that involves learning a joint embedding model between state-action pairs and concept-based explanations, and leveraging such a learned model to both (1) inform reward shaping during an agent's training, and (2) provide explanations to end-users at deployment for improved task performance. Our experimental validations, in Connect 4 and Lunar Lander, demonstrate the success of S2E in providing a dual benefit: successfully informing reward shaping and improving the agent's learning rate, as well as significantly improving end-user task performance at deployment time. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 393,810 |
1209.5779 | Chance Constrained Optimal Power Flow: Risk-Aware Network Control under Uncertainty | When uncontrollable resources fluctuate, Optimal Power Flow (OPF), routinely used by the electric power industry to re-dispatch hourly controllable generation (coal, gas and hydro plants) over control areas of transmission networks, can result in grid instability and, potentially, cascading outages. This risk arises because OPF dispatch is computed without awareness of major uncertainty, in particular fluctuations in renewable output. As a result, grid operation under OPF with renewable variability can lead to frequent conditions where power line flow ratings are significantly exceeded. Such a condition, which is borne out by simulations of real grids, would likely result in automatic line tripping to protect lines from thermal stress, a risky and undesirable outcome which compromises stability. Smart grid goals include a commitment to large penetration of highly fluctuating renewables, thus calling to reconsider current practices, in particular the use of standard OPF. Our Chance Constrained (CC) OPF corrects the problem and mitigates dangerous renewable fluctuations with minimal changes in the current operational procedure. Assuming availability of a reliable wind forecast parameterizing the distribution function of the uncertain generation, our CC-OPF satisfies all the constraints with high probability while simultaneously minimizing the cost of economic re-dispatch. CC-OPF allows efficient implementation, e.g. solving a typical instance over the 2746-bus Polish network in 20 seconds on a standard laptop. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 18,762 |
1506.03134 | Pointer Networks | We introduce a new neural architecture to learn the conditional probability of an output sequence with elements that are discrete tokens corresponding to positions in an input sequence. Such problems cannot be trivially addressed by existing approaches such as sequence-to-sequence and Neural Turing Machines, because the number of target classes in each step of the output depends on the length of the input, which is variable. Problems such as sorting variable sized sequences, and various combinatorial optimization problems belong to this class. Our model solves the problem of variable size output dictionaries using a recently proposed mechanism of neural attention. It differs from the previous attention attempts in that, instead of using attention to blend hidden units of an encoder to a context vector at each decoder step, it uses attention as a pointer to select a member of the input sequence as the output. We call this architecture a Pointer Net (Ptr-Net). We show Ptr-Nets can be used to learn approximate solutions to three challenging geometric problems -- finding planar convex hulls, computing Delaunay triangulations, and the planar Travelling Salesman Problem -- using training examples alone. Ptr-Nets not only improve over sequence-to-sequence with input attention, but also allow us to generalize to variable size output dictionaries. We show that the learnt models generalize beyond the maximum lengths they were trained on. We hope our results on these tasks will encourage a broader exploration of neural learning for discrete problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 44,009 |
2006.02064 | Hybrid Scheme of Kinematic Analysis and Lagrangian Koopman Operator Analysis for Short-term Precipitation Forecasting | With the accumulation of meteorological big data, data-driven models for short-term precipitation forecasting have shown increasing promise. We focus on Koopman operator analysis, which is a data-driven scheme to discover governing laws in observed data. We propose a method to apply this scheme to phenomena accompanying advection currents such as precipitation. The proposed method decomposes time evolutions of the phenomena between advection currents under a velocity field and changes in physical quantities under Lagrangian coordinates. The advection currents are estimated by kinematic analysis, and the changes in physical quantities are estimated by Koopman operator analysis. The proposed method is applied to actual precipitation distribution data, and the results show that the development and decay of precipitation are properly captured relative to conventional methods and that stable predictions over long periods are possible. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 179,947 |
1902.00528 | Competitive Experience Replay | Deep learning has achieved remarkable successes in solving challenging reinforcement learning (RL) problems when a dense reward function is provided. However, in sparse-reward environments it still often suffers from the need to carefully shape the reward function to guide policy optimization. This limits the applicability of RL in the real world since both reinforcement learning and domain-specific knowledge are required. It is therefore of great practical importance to develop algorithms which can learn from a binary signal indicating successful task completion or other unshaped, sparse reward signals. We propose a novel method called competitive experience replay, which efficiently supplements a sparse reward by placing learning in the context of an exploration competition between a pair of agents. Our method complements the recently proposed hindsight experience replay (HER) by inducing an automatic exploratory curriculum. We evaluate our approach on the tasks of reaching various goal locations in an ant maze and manipulating objects with a robotic arm. Each task provides only binary rewards indicating whether or not the goal is achieved. Our method asymmetrically augments these sparse rewards for a pair of agents each learning the same task, creating a competitive game designed to drive exploration. Extensive experiments demonstrate that this method leads to faster convergence and improved task performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,420 |
2411.14446 | Rising Rested Bandits: Lower Bounds and Efficient Algorithms | This paper is in the field of stochastic Multi-Armed Bandits (MABs), i.e. those sequential selection techniques able to learn online using only the feedback given by the chosen option (a.k.a. $arm$). We study a particular case of the rested bandits in which the arms' expected reward is monotonically non-decreasing and concave. We study the inherent sample complexity of the regret minimization problem by deriving suitable regret lower bounds. Then, we design an algorithm for the rested case, $\textit{R-ed-UCB}$, providing a regret bound depending on the properties of the instance and, under certain circumstances, of $\widetilde{\mathcal{O}}(T^{\frac{2}{3}})$. We empirically compare our algorithm with state-of-the-art methods for non-stationary MABs over several synthetically generated tasks and an online model selection problem for a real-world dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 510,164 |
2201.05119 | Pushing the limits of self-supervised ResNets: Can we outperform supervised learning without labels on ImageNet? | Despite recent progress made by self-supervised methods in representation learning with residual networks, they still underperform supervised learning on the ImageNet classification benchmark, limiting their applicability in performance-critical settings. Building on prior theoretical insights from ReLIC [Mitrovic et al., 2021], we include additional inductive biases into self-supervised learning. We propose a new self-supervised representation learning method, ReLICv2, which combines an explicit invariance loss with a contrastive objective over a varied set of appropriately constructed data views to avoid learning spurious correlations and obtain more informative representations. ReLICv2 achieves $77.1\%$ top-$1$ accuracy on ImageNet under linear evaluation on a ResNet50, thus improving the previous state-of-the-art by absolute $+1.5\%$; on larger ResNet models, ReLICv2 achieves up to $80.6\%$ outperforming previous self-supervised approaches with margins up to $+2.3\%$. Most notably, ReLICv2 is the first unsupervised representation learning method to consistently outperform the supervised baseline in a like-for-like comparison over a range of ResNet architectures. Using ReLICv2, we also learn more robust and transferable representations that generalize better out-of-distribution than previous work, both on image classification and semantic segmentation. Finally, we show that despite using ResNet encoders, ReLICv2 is comparable to state-of-the-art self-supervised vision transformers. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 275,287 |
2009.10791 | Using the Hammer Only on Nails: A Hybrid Method for Evidence Retrieval for Question Answering | Evidence retrieval is a key component of explainable question answering (QA). We argue that, despite recent progress, transformer network-based approaches such as universal sentence encoder (USE-QA) do not always outperform traditional information retrieval (IR) methods such as BM25 for evidence retrieval for QA. We introduce a lexical probing task that validates this observation: we demonstrate that neural IR methods have the capacity to capture lexical differences between questions and answers, but miss obvious lexical overlap signal. Learning from this probing analysis, we introduce a hybrid approach for evidence retrieval that combines the advantages of both IR directions. Our approach uses a routing classifier that learns when to direct incoming questions to BM25 vs. USE-QA for evidence retrieval using very simple statistics, which can be efficiently extracted from the top candidate evidence sentences produced by a BM25 model. We demonstrate that this hybrid evidence retrieval generally performs better than either individual retrieval strategy on three QA datasets: OpenBookQA, ReQA SQuAD, and ReQA NQ. Furthermore, we show that the proposed routing strategy is considerably faster than neural methods, with a runtime that is up to 5 times faster than USE-QA. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 196,981 |
1903.09033 | Equivariant Entity-Relationship Networks | The relational model is a ubiquitous representation of big-data, in part due to its extensive use in databases. In this paper, we propose the Equivariant Entity-Relationship Network (EERN), which is a Multilayer Perceptron equivariant to the symmetry transformations of the Entity-Relationship model. To this end, we identify the most expressive family of linear maps that are exactly equivariant to entity relationship symmetries, and further show that they subsume recently introduced equivariant maps for sets, exchangeable tensors, and graphs. The proposed feed-forward layer has linear complexity in the data and can be used for both inductive and transductive reasoning about relational databases, including database embedding, and the prediction of missing records. This provides a principled theoretical foundation for the application of deep learning to one of the most abundant forms of data. Empirically, EERN outperforms different variants of coupled matrix tensor factorization in both synthetic and real-data experiments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,962 |
2207.12575 | Simulation-based Probabilistic Risk Assessment | Simulation-based probabilistic risk assessment (SPRA) is a systematic and comprehensive methodology that has been used and refined over the past few decades to evaluate the risks associated with complex systems. SPRA models are well established for cases with considerable data and system behavior information available. In this regard, multiple statistical and probabilistic tools can be used to provide a valuable assessment of dynamic probabilistic risk levels in different applications. This tutorial presents a comprehensive review of SPRA methodologies. Based on the reviewed literature, SPRA methods can be classified into three categories: dynamic probabilistic logic methods, dynamic stochastic analytical models, and hybrid discrete dynamic event and system simulation models. In this tutorial, the key strengths and weaknesses of available SPRA methods are presented and discussed, along with suggestions on ways to address their real and perceived shortcomings. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 310,046 |
2012.10187 | Regularized Attentive Capsule Network for Overlapped Relation Extraction | Distantly supervised relation extraction has been widely applied in knowledge base construction due to its less requirement of human efforts. However, the automatically established training datasets in distant supervision contain low-quality instances with noisy words and overlapped relations, introducing great challenges to the accurate extraction of relations. To address this problem, we propose a novel Regularized Attentive Capsule Network (RA-CapNet) to better identify highly overlapped relations in each informal sentence. To discover multiple relation features in an instance, we embed multi-head attention into the capsule network as the low-level capsules, where the subtraction of two entities acts as a new form of relation query to select salient features regardless of their positions. To further discriminate overlapped relation features, we devise disagreement regularization to explicitly encourage the diversity among both multiple attention heads and low-level capsules. Extensive experiments conducted on widely used datasets show that our model achieves significant improvements in relation extraction. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 212,280 |
2009.14809 | A Tale of Two Linkings: Dynamically Gating between Schema Linking and Structural Linking for Text-to-SQL Parsing | In Text-to-SQL semantic parsing, selecting the correct entities (tables and columns) for the generated SQL query is both crucial and challenging; the parser is required to connect the natural language (NL) question and the SQL query to the structured knowledge in the database. We formulate two linking processes to address this challenge: schema linking which links explicit NL mentions to the database and structural linking which links the entities in the output SQL with their structural relationships in the database schema. Intuitively, the effectiveness of these two linking processes changes based on the entity being generated, thus we propose to dynamically choose between them using a gating mechanism. Integrating the proposed method with two graph neural network-based semantic parsers together with BERT representations demonstrates substantial gains in parsing accuracy on the challenging Spider dataset. Analyses show that our proposed method helps to enhance the structure of the model output when generating complicated SQL queries and offers more explainable predictions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 198,147 |
1910.04233 | Kernel-Based Approaches for Sequence Modeling: Connections to Neural Methods | We investigate time-dependent data analysis from the perspective of recurrent kernel machines, from which models with hidden units and gated memory cells arise naturally. By considering dynamic gating of the memory cell, a model closely related to the long short-term memory (LSTM) recurrent neural network is derived. Extending this setup to $n$-gram filters, the convolutional neural network (CNN), Gated CNN, and recurrent additive network (RAN) are also recovered as special cases. Our analysis provides a new perspective on the LSTM, while also extending it to $n$-gram convolutional filters. Experiments are performed on natural language processing tasks and on analysis of local field potentials (neuroscience). We demonstrate that the variants we derive from kernels perform on par or even better than traditional neural methods. For the neuroscience application, the new models demonstrate significant improvements relative to the prior state of the art. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 148,705 |
2008.01487 | Autoencoder Image Interpolation by Shaping the Latent Space | Autoencoders represent an effective approach for computing the underlying factors characterizing datasets of different types. The latent representations of autoencoders have been studied in the context of enabling interpolation between data points by decoding convex combinations of latent vectors. This interpolation, however, often leads to artifacts or produces unrealistic results during reconstruction. We argue that these incongruities are due to the structure of the latent space and because such naively interpolated latent vectors deviate from the data manifold. In this paper, we propose a regularization technique that shapes the latent representation to follow a manifold that is consistent with the training images and that drives the manifold to be smooth and locally convex. This regularization not only enables faithful interpolation between data points, as we show herein, but can also be used as a general regularization technique to avoid overfitting or to produce new samples for data augmentation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 190,333 |
2206.04184 | Abstraction not Memory: BERT and the English Article System | Article prediction is a task that has long defied accurate linguistic description. As such, this task is ideally suited to evaluate models on their ability to emulate native-speaker intuition. To this end, we compare the performance of native English speakers and pre-trained models on the task of article prediction set up as a three-way choice (a/an, the, zero). Our experiments with BERT show that BERT outperforms humans on this task across all articles. In particular, BERT is far superior to humans at detecting the zero article, possibly because we insert them using rules that the deep neural model can easily pick up. More interestingly, we find that BERT tends to agree more with annotators than with the corpus when inter-annotator agreement is high but switches to agreeing more with the corpus as inter-annotator agreement drops. We contend that this alignment with annotators, despite being trained on the corpus, suggests that BERT is not memorising article use, but captures a high-level generalisation of article use akin to human intuition. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 301,530
2210.02237 | Dimensional Data KNN-Based Imputation | Data Warehouses (DWs) are core components of Business Intelligence (BI). Missing data in DWs have a great impact on data analyses. Therefore, missing data need to be completed. Unlike other existing data imputation methods mainly adapted for facts, we propose a new imputation method for dimensions. This method contains two steps: 1) a hierarchical imputation and 2) a k-nearest neighbors (KNN) based imputation. Our solution has the advantage of taking into account the DW structure and dependency constraints. Experimental assessments validate our method in terms of effectiveness and efficiency. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 321,579 |
2004.00472 | Learning to Cache and Caching to Learn: Regret Analysis of Caching
Algorithms | Crucial performance metrics of a caching algorithm include its ability to quickly and accurately learn a popularity distribution of requests. However, a majority of work on analytical performance analysis focuses on hit probability after an asymptotically large time has elapsed. We consider an online learning viewpoint, and characterize the "regret" in terms of the finite time difference between the hits achieved by a candidate caching algorithm with respect to a genie-aided scheme that places the most popular items in the cache. We first consider the Full Observation regime wherein all requests are seen by the cache. We show that the Least Frequently Used (LFU) algorithm is able to achieve order optimal regret, which is matched by an efficient counting algorithm design that we call LFU-Lite. We then consider the Partial Observation regime wherein only requests for items currently cached are seen by the cache, making it similar to an online learning problem related to the multi-armed bandit problem. We show how approaching this "caching bandit" using traditional approaches yields either high complexity or regret, but a simple algorithm design that exploits the structure of the distribution can ensure order optimal regret. We conclude by illustrating our insights using numerical simulations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 170,646 |
2211.01724 | Learning Control by Iterative Inversion | We propose $\textit{iterative inversion}$ -- an algorithm for learning an inverse function without input-output pairs, but only with samples from the desired output distribution and access to the forward function. The key challenge is a $\textit{distribution shift}$ between the desired outputs and the outputs of an initial random guess, and we prove that iterative inversion can steer the learning correctly, under rather strict conditions on the function. We apply iterative inversion to learn control. Our input is a set of demonstrations of desired behavior, given as video embeddings of trajectories (without actions), and our method iteratively learns to imitate trajectories generated by the current policy, perturbed by random exploration noise. Our approach does not require rewards, and only employs supervised learning, which can be easily scaled to use state-of-the-art trajectory embedding techniques and policy representations. Indeed, with a VQ-VAE embedding, and a transformer-based policy, we demonstrate non-trivial continuous control on several tasks. Further, we report an improved performance on imitating diverse behaviors compared to reward based methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 328,330 |
2209.15109 | ConceptNet infused DialoGPT for Underlying Commonsense Understanding and
Reasoning in Dialogue Response Generation | Pre-trained conversational models still fail to capture the implicit commonsense (CS) knowledge hidden in the dialogue interaction, even though they were pre-trained with an enormous dataset. In order to build a dialogue agent with CS capability, we firstly inject external knowledge into a pre-trained conversational model to establish basic commonsense through efficient Adapter tuning (Section 4). Secondly, we propose the ``two-way learning'' method to enable the bidirectional relationship between CS knowledge and sentence pairs, so that the model can generate a sentence given the CS triplets and also generate the underlying CS knowledge given a sentence (Section 5). Finally, we leverage this integrated CS capability to improve open-domain dialogue response generation, so that the dialogue agent is capable of understanding the CS knowledge hidden in the dialogue history on top of inferring other related knowledge to further guide response generation (Section 6). The experiment results demonstrate that CS\_Adapter fusion helps DialoGPT generate a series of CS knowledge. Moreover, the DialoGPT+CS\_Adapter response model adapted from CommonGen training can generate underlying CS triplets that fit the dialogue context better. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 320,465
2211.13424 | Deepfake Detection via Joint Unsupervised Reconstruction and Supervised
Classification | Deep learning has enabled realistic face manipulation (i.e., deepfake), which poses significant concerns over the integrity of the media in circulation. Most existing deep learning techniques for deepfake detection can achieve promising performance in the intra-dataset evaluation setting (i.e., training and testing on the same dataset), but are unable to perform satisfactorily in the inter-dataset evaluation setting (i.e., training on one dataset and testing on another). Most of the previous methods use the backbone network to extract global features for making predictions and only employ binary supervision (i.e., indicating whether the training instances are fake or authentic) to train the network. Classification merely based on the learning of global features often leads to weak generalizability to unseen manipulation methods. In addition, the reconstruction task can improve the learned representations. In this paper, we introduce a novel approach for deepfake detection, which considers the reconstruction and classification tasks simultaneously to address these problems. This method shares the information learned by one task with the other, which focuses on a different aspect that other existing works rarely consider and hence boosts the overall performance. In particular, we design a two-branch Convolutional AutoEncoder (CAE), in which the Convolutional Encoder used to compress the feature map into the latent representation is shared by both branches. Then the latent representation of the input data is fed to a simple classifier and the unsupervised reconstruction component simultaneously. Our network is trained end-to-end. Experiments demonstrate that our method achieves state-of-the-art performance on three commonly-used datasets, particularly in the cross-dataset evaluation setting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 332,466
2211.06688 | Partial Visual-Semantic Embedding: Fashion Intelligence System with
Sensitive Part-by-Part Learning | In this study, we propose a technology called the Fashion Intelligence System based on the visual-semantic embedding (VSE) model to quantify abstract and complex expressions unique to fashion, such as ''casual,'' ''adult-casual,'' and ''office-casual,'' and to support users' understanding of fashion. However, the existing VSE model does not support the situations in which the image is composed of multiple parts such as hair, tops, pants, skirts, and shoes. We propose partial VSE, which enables sensitive learning for each part of the fashion coordinates. The proposed model partially learns embedded representations. This helps retain the various existing practical functionalities and enables image-retrieval tasks in which changes are made only to the specified parts and image reordering tasks that focus on the specified parts. This was not possible with conventional models. Based on both the qualitative and quantitative evaluation experiments, we show that the proposed model is superior to conventional models without increasing the computational complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 329,984 |
2303.16646 | Structured Epipolar Matcher for Local Feature Matching | Local feature matching is challenging due to textureless and repetitive patterns. Existing methods focus on using appearance features and global interaction and matching, while the importance of geometry priors in local feature matching has not been fully exploited. Different from these methods, in this paper, we delve into the importance of geometry prior and propose Structured Epipolar Matcher (SEM) for local feature matching, which can leverage the geometric information in an iterative matching way. The proposed model enjoys several merits. First, our proposed Structured Feature Extractor can model the relative positional relationship between pixels and high-confidence anchor points. Second, our proposed Epipolar Attention and Matching can filter out irrelevant areas by utilizing the epipolar constraint. Extensive experimental results on five standard benchmarks demonstrate the superior performance of our SEM compared to state-of-the-art methods. Project page: https://sem2023.github.io. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 354,937 |
1612.00625 | Recognition of Text Image Using Multilayer Perceptron | The biggest challenge in the field of image processing is to recognize documents in both printed and handwritten format. Optical Character Recognition (OCR) is a type of document image analysis in which a scanned digital image containing either machine-printed or handwritten script is input into an OCR software engine and translated into an editable, machine-readable digital text format. A neural network is designed to model the way in which the brain performs a particular task or function of interest; the neural network is simulated in software on a digital computer. Character recognition refers to the process of converting printed text documents into translated Unicode text. The printed documents available in the form of books, papers, magazines, etc. are scanned using standard scanners, which produce an image of the scanned document. Lines are identified by an algorithm that locates the top and bottom of each line. Character boundaries within each line are then calculated; using these calculations, characters are isolated from the image, and each character is classified by basic back propagation. Each character image comprises 30*20 pixels. We have used a back-propagation neural network for efficient recognition, where errors are corrected through back propagation and rectified neuron values are transmitted by the feed-forward method in a multilayer neural network. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 64,924
1304.4633 | PAC Quasi-automatizability of Resolution over Restricted Distributions | We consider principled alternatives to unsupervised learning in data mining by situating the learning task in the context of the subsequent analysis task. Specifically, we consider a query-answering (hypothesis-testing) task: In the combined task, we decide whether an input query formula is satisfied over a background distribution by using input examples directly, rather than invoking a two-stage process in which (i) rules over the distribution are learned by an unsupervised learning algorithm and (ii) a reasoning algorithm decides whether or not the query formula follows from the learned rules. In a previous work (2013), we observed that the learning task could satisfy numerous desirable criteria in this combined context -- effectively matching what could be achieved by agnostic learning of CNFs from partial information -- that are not known to be achievable directly. In this work, we show that likewise, there are reasoning tasks that are achievable in such a combined context that are not known to be achievable directly (and indeed, have been seriously conjectured to be impossible, cf. (Alekhnovich and Razborov, 2008)). Namely, we test for a resolution proof of the query formula of a given size in quasipolynomial time (that is, "quasi-automatizing" resolution). The learning setting we consider is a partial-information, restricted-distribution setting that generalizes learning parities over the uniform distribution from partial information, another task that is known not to be achievable directly in various models (cf. (Ben-David and Dichterman, 1998) and (Michael, 2010)). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 24,018 |
2409.16630 | Stochastic Subsampling With Average Pooling | Regularization of deep neural networks has been an important issue to achieve higher generalization performance without overfitting problems. Although the popular method of Dropout provides a regularization effect, it causes inconsistent properties in the output, which may degrade the performance of deep neural networks. In this study, we propose a new module called stochastic average pooling, which incorporates Dropout-like stochasticity in pooling. We describe the properties of stochastic subsampling and average pooling and leverage them to design a module without any inconsistency problem. The stochastic average pooling achieves a regularization effect without any potential performance degradation due to the inconsistency issue and can easily be plugged into existing architectures of deep neural networks. Experiments demonstrate that replacing existing average pooling with stochastic average pooling yields consistent improvements across a variety of tasks, datasets, and models. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 491,427 |
2104.01885 | Conformal testing in a binary model situation | Conformal testing is a way of testing the IID assumption based on conformal prediction. The topic of this note is computational evaluation of the performance of conformal testing in a model situation in which IID binary observations generated from a Bernoulli distribution are followed by IID binary observations generated from another Bernoulli distribution, with the parameters of the distributions and changepoint unknown. Existing conformal test martingales can be used for this task and work well in simple cases, but their efficiency can be improved greatly. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 228,514 |
1412.2424 | On the Mean-Square Performance of the Constrained LMS Algorithm | The so-called constrained least mean-square algorithm is one of the most commonly used linear-equality-constrained adaptive filtering algorithms. Its main advantages are adaptability and relative simplicity. In order to gain analytical insights into the performance of this algorithm, we examine its mean-square performance and derive theoretical expressions for its transient and steady-state mean-square deviation. Our methodology is inspired by the principle of energy conservation in adaptive filters. Simulation results corroborate the accuracy of the derived formula. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 38,205 |
1701.01156 | Adaptive Real-Time Software Defined MIMO Visible Light Communications
using Spatial Multiplexing and Spatial Diversity | In this paper, we experimentally demonstrate a real-time software defined multiple input multiple output (MIMO) visible light communication (VLC) system employing link adaptation of spatial multiplexing and spatial diversity. Real-time MIMO signal processing is implemented by using the Field Programmable Gate Array (FPGA) based Universal Software Radio Peripheral (USRP) devices. Software defined implementation of MIMO VLC can assist in enabling an adaptive and reconfigurable communication system without hardware changes. We measured the error vector magnitude (EVM), bit error rate (BER) and spectral efficiency performance for single carrier M-QAM MIMO VLC using spatial diversity and spatial multiplexing. Results show that spatial diversity MIMO VLC improves error performance at the cost of spectral efficiency that spatial multiplexing should enhance. We propose an adaptive MIMO solution in which both the modulation scheme and the MIMO scheme are dynamically adapted to the changing channel conditions for enhancing the error performance and spectral efficiency. The average error-free spectral efficiency of adaptive 2x2 MIMO VLC achieved 12 b/s/Hz over 2 meters indoor dynamic transmission. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 66,363
2008.11822 | Indirect Object-to-Robot Pose Estimation from an External Monocular RGB
Camera | We present a robotic grasping system that uses a single external monocular RGB camera as input. The object-to-robot pose is computed indirectly by combining the output of two neural networks: one that estimates the object-to-camera pose, and another that estimates the robot-to-camera pose. Both networks are trained entirely on synthetic data, relying on domain randomization to bridge the sim-to-real gap. Because the latter network performs online camera calibration, the camera can be moved freely during execution without affecting the quality of the grasp. Experimental results analyze the effect of camera placement, image resolution, and pose refinement in the context of grasping several household objects. We also present results on a new set of 28 textured household toy grocery objects, which have been selected to be accessible to other researchers. To aid reproducibility of the research, we offer 3D scanned textured models, along with pre-trained weights for pose estimation. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 193,384 |
2107.12136 | The Role of Functional Programming in Management and Orchestration of
Virtualized Network Resources Part I. System structure for Complex Systems
and Design Principles | This is part I of the follow-up lecture notes of the lectures given by the authors at the Three "CO" (Composability, Comprehensibility, Correctness) Winter School held in Ko\v{s}ice, Slovakia, in January 2018, and the Summer School held in Budapest, Hungary, in June 2019. In this part we explain the role of the functional programming paradigm in the management of complex software systems, and how functional programming concepts play an important role in designing such systems. A key prerequisite for implementing functional programming concepts is a properly designed system structure following well-defined design principles and rules. The main goal of this lecture is to introduce students to proper system modeling. Furthermore, we also explain how new emerging technologies are designed in such a way that they enforce the development of systems that comply with the design rules inspired by functional programming. This is extremely important in view of the current network evolution and virtualization concepts, which will require many functional programming concepts in the network services and functions, as will be discussed in part II of these lecture notes. These notes provide an introduction to the subject, with the goal of explaining the problems and the principles, methods and techniques used for their solution. The worked examples and exercises serve students as the teaching material, from which they can learn how to use design principles to model effective system structures. Here we focus on students' understanding of the importance of effective system structures for coordination of development and management processes that are driven by business goals and further evolution. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 247,813
2212.13495 | Truncate-Split-Contrast: A Framework for Learning from Mislabeled Videos | Learning with noisy label (LNL) is a classic problem that has been extensively studied for image tasks, but much less for video in the literature. A straightforward migration from images to videos without considering the properties of videos, such as computational cost and redundant information, is not a sound choice. In this paper, we propose two new strategies for video analysis with noisy labels: 1) A lightweight channel selection method dubbed Channel Truncation for feature-based label noise detection. This method selects the most discriminative channels to split clean and noisy instances in each category; 2) A novel contrastive strategy dubbed Noise Contrastive Learning, which constructs the relationship between clean and noisy instances to regularize model training. Experiments on three well-known benchmark datasets for video classification show that our proposed truNcatE-split-contrAsT (NEAT) significantly outperforms the existing baselines. By reducing the dimension to 10\% of it, our method achieves over 0.4 noise detection F1-score and 5\% classification accuracy improvement on Mini-Kinetics dataset under severe noise (symmetric-80\%). Thanks to Noise Contrastive Learning, the average classification accuracy improvement on Mini-Kinetics and Sth-Sth-V1 is over 1.6\%. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 338,321
2012.00932 | Extended T: Learning with Mixed Closed-set and Open-set Noisy Labels | The label noise transition matrix $T$, reflecting the probabilities that true labels flip into noisy ones, is of vital importance to model label noise and design statistically consistent classifiers. The traditional transition matrix is limited to modeling closed-set label noise, where noisy training data has true class labels within the noisy label set. It is unfit to employ such a transition matrix to model open-set label noise, where some true class labels are outside the noisy label set. Thus, when considering a more realistic situation, i.e., when both closed-set and open-set label noise occur, existing methods will undesirably give biased solutions. Besides, the traditional transition matrix is limited to modeling instance-independent label noise, which may not perform well in practice. In this paper, we focus on learning under the mixed closed-set and open-set label noise. We address the aforementioned issues by extending the traditional transition matrix to be able to model mixed label noise, and further to the cluster-dependent transition matrix to better approximate the instance-dependent label noise in real-world applications. We term the proposed transition matrix as the cluster-dependent extended transition matrix. An unbiased estimator (i.e., extended $T$-estimator) has been designed to estimate the cluster-dependent extended transition matrix by only exploiting the noisy data. Comprehensive synthetic and real experiments validate that our method can better model the mixed label noise, as shown by its more robust performance compared with the prior state-of-the-art label-noise learning methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 209,276
1711.10352 | Learning Face Age Progression: A Pyramid Architecture of GANs | The two underlying requirements of face age progression, i.e. aging accuracy and identity permanence, are not well studied in the literature. In this paper, we present a novel generative adversarial network based approach. It separately models the constraints for the intrinsic subject-specific characteristics and the age-specific facial changes with respect to the elapsed time, ensuring that the generated faces present desired aging effects while simultaneously keeping personalized properties stable. Further, to generate more lifelike facial details, high-level age-specific features conveyed by the synthesized face are estimated by a pyramidal adversarial discriminator at multiple scales, which simulates the aging effects in a finer manner. The proposed method is applicable to diverse face samples in the presence of variations in pose, expression, makeup, etc., and remarkably vivid aging effects are achieved. Both visual fidelity and quantitative evaluations show that the approach advances the state-of-the-art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,579 |
2206.14135 | Towards Explainable Metaheuristic: Mining Surrogate Fitness Models for
Importance of Variables | Metaheuristic search algorithms look for solutions that either maximise or minimise a set of objectives, such as cost or performance. However, most real-world optimisation problems are nonlinear, with complex constraints and conflicting objectives. The process by which a genetic algorithm (GA) arrives at a solution remains largely unexplained to the end-user. A poorly understood solution will dent the confidence a user has in the arrived-at solution. We propose that investigation of the variables that strongly influence solution quality and their relationship would be a step toward providing an explanation of the near-optimal solution presented by a metaheuristic. Through the use of four benchmark problems we use the population data generated by a GA to train a surrogate model, and investigate the learning of the search space by the surrogate model. We compare what the surrogate has learned after being trained on population data generated after the first generation and contrast this with a surrogate model trained on the population data from all generations. We show that the surrogate model picks out key characteristics of the problem as it is trained on population data from each generation. Through mining the surrogate model we can build a picture of the learning process of a GA, and thus an explanation of the solution presented by the GA. The aim is to build trust and confidence among end-users in the solution presented by the GA, and to encourage adoption of the model. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 305,185
0910.0902 | Reduced-Rank Hidden Markov Models | We introduce the Reduced-Rank Hidden Markov Model (RR-HMM), a generalization of HMMs that can model smooth state evolution as in Linear Dynamical Systems (LDSs) as well as non-log-concave predictive distributions as in continuous-observation HMMs. RR-HMMs assume an m-dimensional latent state and n discrete observations, with a transition matrix of rank k <= m. This implies the dynamics evolve in a k-dimensional subspace, while the shape of the set of predictive distributions is determined by m. Latent state belief is represented with a k-dimensional state vector and inference is carried out entirely in R^k, making RR-HMMs as computationally efficient as k-state HMMs yet more expressive. To learn RR-HMMs, we relax the assumptions of a recently proposed spectral learning algorithm for HMMs (Hsu, Kakade and Zhang 2009) and apply it to learn k-dimensional observable representations of rank-k RR-HMMs. The algorithm is consistent and free of local optima, and we extend its performance guarantees to cover the RR-HMM case. We show how this algorithm can be used in conjunction with a kernel density estimator to efficiently model high-dimensional multivariate continuous data. We also relax the assumption that single observations are sufficient to disambiguate state, and extend the algorithm accordingly. Experiments on synthetic data and a toy video, as well as on a difficult robot vision modeling problem, yield accurate models that compare favorably with standard alternatives in simulation quality and prediction capability. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 4,639 |
2310.08027 | Exploring Large Language Models for Multi-Modal Out-of-Distribution
Detection | Out-of-distribution (OOD) detection is essential for reliable and trustworthy machine learning. Recent multi-modal OOD detection leverages textual information from in-distribution (ID) class names for visual OOD detection, yet it currently neglects the rich contextual information of ID classes. Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class. Indiscriminately using such knowledge causes catastrophic damage to OOD detection due to LLMs' hallucinations, as is observed by our analysis. In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs. Specifically, we introduce a consistency-based uncertainty calibration method to estimate the confidence score of each generation. We further extract visual objects from each image to fully capitalize on the aforementioned world knowledge. Extensive experiments demonstrate that our method consistently outperforms the state-of-the-art. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 399,223 |
1204.1678 | A New Approach for Arabic Handwritten Postal Addresses Recognition | In this paper, we propose an automatic analysis system for the Arabic handwriting postal addresses recognition, by using the beta elliptical model. Our system is divided into different steps: analysis, pre-processing and classification. The first operation is the filtering of image. In the second, we remove the border print, stamps and graphics. After locating the address on the envelope, the address segmentation allows the extraction of postal code and city name separately. The pre-processing system and the modeling approach are based on two basic steps. The first step is the extraction of the temporal order in the image of the handwritten trajectory. The second step is based on the use of Beta-Elliptical model for the representation of handwritten script. The recognition system is based on Graph-matching algorithm. Our modeling and recognition approaches were validated by using the postal code and city names extracted from the Tunisian postal envelopes data. The recognition rate obtained is about 98%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 15,342 |
2006.16493 | Hierarchical Temporal and Spatial Clustering of Uncertain and Time-varying Load Models | Load modeling is difficult due to its uncertain and time-varying properties. Through the recently proposed ambient signals load modeling approach, these properties can be more frequently tracked. However, the large dataset of load modeling results becomes a new problem. In this paper, a hierarchical temporal and spatial clustering method of load models is proposed, after which the large size load model dataset can be represented by several representative load models (RLMs). In the temporal clustering stage, the RLMs of one load bus are picked up through clustering to represent all the load models of the load bus at different time. In the spatial clustering stage, the RLMs of all the load buses form a new set and the RLMs of the system are picked up through spatial clustering. In this way, the large sets of load models are represented by a small number of RLMs, through which the storage space of the load models is significantly reduced. The validation results in IEEE 39 bus system have shown that the simulation accuracy can still be maintained after replacing the load models with the RLMs. In this way, the effectiveness of the proposed hierarchical clustering framework is validated. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 184,823
2402.16132 | LSTPrompt: Large Language Models as Zero-Shot Time Series Forecasters by Long-Short-Term Prompting | Time-series forecasting (TSF) finds broad applications in real-world scenarios. Prompting off-the-shelf Large Language Models (LLMs) demonstrates strong zero-shot TSF capabilities while preserving computational efficiency. However, existing prompting methods oversimplify TSF as language next-token predictions, overlooking its dynamic nature and lack of integration with state-of-the-art prompt strategies such as Chain-of-Thought. Thus, we propose LSTPrompt, a novel approach for prompting LLMs in zero-shot TSF tasks. LSTPrompt decomposes TSF into short-term and long-term forecasting sub-tasks, tailoring prompts to each. LSTPrompt guides LLMs to regularly reassess forecasting mechanisms to enhance adaptability. Extensive evaluations demonstrate consistently better performance of LSTPrompt than existing prompting methods, and competitive results compared to foundation TSF models. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 432,438
2301.05217 | Progress measures for grokking via mechanistic interpretability | Neural networks often exhibit emergent behavior, where qualitatively new capabilities arise from scaling up the amount of parameters, training data, or training steps. One approach to understanding emergence is to find continuous \textit{progress measures} that underlie the seemingly discontinuous qualitative changes. We argue that progress measures can be found via mechanistic interpretability: reverse-engineering learned behaviors into their individual components. As a case study, we investigate the recently-discovered phenomenon of ``grokking'' exhibited by small transformers trained on modular addition tasks. We fully reverse engineer the algorithm learned by these networks, which uses discrete Fourier transforms and trigonometric identities to convert addition to rotation about a circle. We confirm the algorithm by analyzing the activations and weights and by performing ablations in Fourier space. Based on this understanding, we define progress measures that allow us to study the dynamics of training and split training into three continuous phases: memorization, circuit formation, and cleanup. Our results show that grokking, rather than being a sudden shift, arises from the gradual amplification of structured mechanisms encoded in the weights, followed by the later removal of memorizing components. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 340,289 |
2004.08207 | Simulation of Covid-19 epidemic evolution: are compartmental models really predictive? | Computational models for the simulation of the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) epidemic evolution would be extremely useful to support authorities in designing healthcare policies and lockdown measures to contain its impact on public health and economy. In Italy, the devised forecasts have been mostly based on a pure data-driven approach, by fitting and extrapolating open data on the epidemic evolution collected by the Italian Civil Protection Center. In this respect, SIR epidemiological models, which start from the description of the nonlinear interactions between population compartments, would be a much more desirable approach to understand and predict the collective emergent response. The present contribution addresses the fundamental question whether a SIR epidemiological model, suitably enriched with asymptomatic and dead individual compartments, could be able to provide reliable predictions on the epidemic evolution. To this aim, a machine learning approach based on particle swarm optimization (PSO) is proposed to automatically identify the model parameters based on a training set of data of progressive increasing size, considering Lombardy in Italy as a case study. The analysis of the scatter in the forecasts shows that model predictions are quite sensitive to the size of the dataset used for training, and that further data are still required to achieve convergent -- and therefore reliable -- predictions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,004
2108.11916 | HAN: Higher-order Attention Network for Spoken Language Understanding | Spoken Language Understanding (SLU), including intent detection and slot filling, is a core component in human-computer interaction. The natural attributes of the relationship between the two subtasks place higher requirements on fine-grained feature interaction, i.e., the token-level intent features and slot features. Previous works mainly focus on jointly modeling the relationship between the two subtasks with attention-based models, while ignoring the exploration of attention order. In this paper, we propose to replace the conventional attention with our proposed Bilinear attention block and show that the introduced Higher-order Attention Network (HAN) brings improvement for the SLU task. Importantly, we conduct extensive analysis to explore the effectiveness brought by the higher-order attention. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 252,336
1312.6918 | Data Offloading in Load Coupled Networks: A Utility Maximization Framework | We provide a general framework for the problem of data offloading in a heterogeneous wireless network, where some demand of cellular users is served by a complementary network. The complementary network is either a small-cell network that shares the same resources as the cellular network, or a WiFi network that uses orthogonal resources. For a given demand served in a cellular network, the load, or the level of resource usage, of each cell depends in a non-linear manner on the load of other cells due to the mutual coupling of interference seen by one another. With load coupling, we optimize the demand to be served in the cellular or the complementary networks, so as to maximize a utility function. We consider three representative utility functions that balance, to varying degrees, the revenue from serving the users vs the user fairness. We establish conditions for which the optimization problem has a feasible solution and is convex, and hence tractable to numerical computations. Finally, we propose a strategy with theoretical justification to constrain the load to some maximum value, as required for practical implementation. Numerical studies are conducted for both under-loaded and over-loaded networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 29,416
2411.03113 | Minimum Radiative Heat and Propellant Aerocapture Guidance with Attitude Kinematics Constraints | To maximize the payload mass, an aerocapture trajectory should be flown in such a way that both the final {\Delta}V and the total heat load are minimized. For some aerocapture missions, the heating due to radiation of high temperature gases in the shock-layer is so much larger than the heat due to convection, that the latter is negligible. This paper provides analytical proof and numerical validation that radiative heat is minimized by the same trajectory that minimizes the final {\Delta}V: a single switch bang-bang trajectory, starting with full lift-up, full lift-down commands. Further, a novel guidance that plans a bang-bang trajectory with constraints in the attitude kinematics is introduced. While achieving similar performance as the current state-of-the-art, the inclusion of constraints in attitude kinematics allows for much less tuning. Finally, a lateral guidance that makes use of information on the final inclination of the predicted trajectory is introduced. Such guidance allows for very high accuracy in the inclination requirements with only two reversals, by requiring a single parameter to be tuned. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 505,793
2109.02336 | Towards an Approach to Contextual Detection of Multi-Stage Cyber Attacks in Smart Grids | Electric power grids are at risk of being compromised by high-impact cyber-security threats such as coordinated, timed attacks. Navigating this new threat landscape requires a deep understanding of the potential risks and complex attack processes in energy information systems, which in turn demands an unmanageable manual effort to timely process a large amount of cross-domain information. To provide an adequate basis to contextually assess and understand the situation of smart grids in case of coordinated cyber-attacks, we need a systematic and coherent approach to identify cyber incidents. In this paper, we present an approach that collects and correlates cross-domain cyber threat information to detect multi-stage cyber-attacks in energy information systems. We investigate the applicability and performance of the presented correlation approach and discuss the results to highlight challenges in domain-specific detection mechanisms. | false | false | false | false | false | false | false | false | false | false | true | false | true | false | false | false | false | true | 253,711
1611.03941 | Anomaly Detection in Bitcoin Network Using Unsupervised Learning Methods | The problem of anomaly detection has been studied for a long time. In short, anomalies are abnormal or unlikely things. In financial networks, thieves and illegal activities are often anomalous in nature. Members of a network want to detect anomalies as soon as possible to prevent them from harming the network's community and integrity. Many Machine Learning techniques have been proposed to deal with this problem; some results appear to be quite promising but there is no obvious superior method. In this paper, we consider anomaly detection particular to the Bitcoin transaction network. Our goal is to detect which users and transactions are the most suspicious; in this case, anomalous behavior is a proxy for suspicious behavior. To this end, we use three unsupervised learning methods including k-means clustering, Mahalanobis distance, and Unsupervised Support Vector Machine (SVM) on two graphs generated by the Bitcoin transaction network: one graph has users as nodes, and the other has transactions as nodes. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 63,761 |
2406.12608 | Bridging Local Details and Global Context in Text-Attributed Graphs | Representation learning on text-attributed graphs (TAGs) is vital for real-world applications, as they combine semantic textual and contextual structural information. Research in this field generally consists of two main perspectives: local-level encoding and global-level aggregating, which respectively refer to textual node information unification (e.g., using Language Models) and structure-augmented modeling (e.g., using Graph Neural Networks). Most existing works focus on combining different information levels but overlook the interconnections, i.e., the contextual textual information among nodes, which provides semantic insights to bridge local and global levels. In this paper, we propose GraphBridge, a multi-granularity integration framework that bridges local and global perspectives by leveraging contextual textual information, enhancing fine-grained understanding of TAGs. Besides, to tackle scalability and efficiency challenges, we introduce a graph-aware token reduction module. Extensive experiments across various models and datasets show that our method achieves state-of-the-art performance, while our graph-aware token reduction module significantly enhances efficiency and solves scalability issues. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 465,485
2012.07499 | A learning perspective on the emergence of abstractions: the curious case of phonemes | In the present paper we use a range of modeling techniques to investigate whether an abstract phone could emerge from exposure to speech sounds. In effect, the study represents an attempt to operationalize a theoretical device of Usage-based Linguistics: the emergence of an abstraction from language use. Our quest focuses on the simplest of such hypothesized abstractions. We test two opposing principles regarding the development of language knowledge in linguistically untrained language users: Memory-Based Learning (MBL) and Error-Correction Learning (ECL). A process of generalization underlies the abstractions linguists operate with, and we probed whether MBL and ECL could give rise to a type of language knowledge that resembles linguistic abstractions. Each model was presented with a significant amount of pre-processed speech produced by one speaker. We assessed the consistency or stability of what these simple models have learned and their ability to give rise to abstract categories. Both types of models fare differently with regard to these tests. We show that ECL models can learn abstractions and that at least part of the phone inventory and grouping into traditional types can be reliably identified from the input. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 211,477
2305.08457 | MolHF: A Hierarchical Normalizing Flow for Molecular Graph Generation | Molecular de novo design is a critical yet challenging task in scientific fields, aiming to design novel molecular structures with desired property profiles. Significant progress has been made by resorting to generative models for graphs. However, limited attention is paid to hierarchical generative models, which can exploit the inherent hierarchical structure (with rich semantic information) of the molecular graphs and generate complex molecules of larger size that we shall demonstrate to be difficult for most existing models. The primary challenge to hierarchical generation is the non-differentiable issue caused by the generation of intermediate discrete coarsened graph structures. To sidestep this issue, we cast the tricky hierarchical generation problem over discrete spaces as the reverse process of hierarchical representation learning and propose MolHF, a new hierarchical flow-based model that generates molecular graphs in a coarse-to-fine manner. Specifically, MolHF first generates bonds through a multi-scale architecture, then generates atoms based on the coarsened graph structure at each scale. We demonstrate that MolHF achieves state-of-the-art performance in random generation and property optimization, implying its high capacity to model data distribution. Furthermore, MolHF is the first flow-based model that can be applied to model larger molecules (polymer) with more than 100 heavy atoms. The code and models are available at https://github.com/violet-sto/MolHF. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 364,288 |
2305.11981 | "Sch\"one neue Lieferkettenwelt": Workers' Voice und Arbeitsstandards in Zeiten algorithmischer Vorhersage | The complexity and increasingly tight coupling of supply chains poses a major logistical challenge for leading companies. Another challenge is that leading companies -- under pressure from consumers, a critical public and legislative measures such as supply chain laws -- have to take more responsibility than before for their suppliers' labour standards. In this paper, we discuss a new approach that leading companies are using to try to address these challenges: algorithmic prediction of business risks, but also environmental and social risks. We describe the technical and cultural conditions for algorithmic prediction and explain how -- from the perspective of leading companies -- it helps to address both challenges. We then develop scenarios on how and with what kind of social consequences algorithmic prediction can be used by leading companies. From the scenarios, we derive policy options for different stakeholder groups to help develop algorithmic prediction towards improving labour standards and worker voice. -- Die Komplexit\"at und zunehmend enge Kopplung vieler Lieferketten stellt eine gro{\ss}e logistische Herausforderung f\"ur Leitunternehmen dar. Eine weitere Herausforderung besteht darin, dass Leitunternehmen -- gedr\"angt durch Konsument:innen, eine kritische \"Offentlichkeit und gesetzgeberische Ma{\ss}nahmen wie die Lieferkettengesetze -- st\"arker als bisher Verantwortung f\"ur Arbeitsstandards in ihren Zulieferbetrieben \"ubernehmen m\"ussen. In diesem Beitrag diskutieren wir einen neuen Ansatz, mit dem Leitunternehmen versuchen, diese Herausforderungen zu bearbeiten: die algorithmische Vorhersage von betriebswirtschaftlichen, aber auch \"okologischen und sozialen Risiken. Wir beschreiben die technischen und kulturellen Bedingungen f\"ur algorithmische Vorhersage und erkl\"aren, wie diese -- aus Perspektive von Leitunternehmen -- bei der Bearbeitung beider Herausforderungen hilft. Anschlie{\ss}end entwickeln wir Szenarien, wie und mit welchen sozialen Konsequenzen algorithmische Vorhersage durch Leitunternehmen eingesetzt werden kann. Aus den Szenarien leiten wir Handlungsoptionen f\"ur verschiedene Stakeholder-Gruppen ab, die dabei helfen sollen, algorithmische Vorhersage im Sinne einer Verbesserung von Arbeitsstandards und Workers' Voice weiterzuentwickeln. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 365,781
2401.14949 | Renewable energy exporting consumption-oriented transfer limit switching control: An unsupervised learning-based method | A method for generating unsupervised conditional mapping rules for multi-inter-corridor transfer limits and their integration into unit commitment through banding-switching is proposed in this paper. The method starts by using Ant colony clustering (ACC) to identify different operating modes with renewable energy penetration. For each sub-pattern, coupling inter-corridors are determined using correlation coefficients. An algorithm for constructing coupled inter-corridors' limits boundaries, employing grid partitioning, is proposed to establish conditional mappings from sub-patterns to multi-inter-corridor limits. Additionally, a banding matching model is proposed, incorporating distance criteria and the Big-M method. It also includes a limit-switching method based on Lagrange multipliers. Case studies on the IEEE 39-node system illustrate the effectiveness of this method in increasing consumption of renewable energy and reducing operational costs while adhering to stability verification requirements. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 424,272
2108.11325 | A multimodal solution approach for mitigating the impact of planned maintenance on metro rail attractiveness | The possible unavailability of urban rail-based transport services due to planned maintenance activities may have significant consequences on the perceived quality of service, thus affecting railway attractiveness. To cope with the mitigation of planned service interruptions and to guarantee a seamless journey and a good travel experience for passengers, it is possible to exploit the existing services differently and/or provide additional on-demand services, such as temporary supplemental bus lines. In this context, this paper aims to develop a mathematical programming model for planning service interruptions due to maintenance considering passenger transport demand dynamics. In particular, the proposed approach deals with service interruptions characterized by a long duration for which timetable adaption strategies are not applicable, suggesting mitigation actions that exploit the already existing services and/or the activation of additional ones, with the aim of minimizing users' inconvenience. In doing so, the planned infrastructure status (i.e., available or under maintenance), as well as the forecasted transport demand, are taken into account to adapt the service accordingly by offering a multimodal transport solution to passengers. To find the best solution, a decomposition solution approach is proposed in combination with a multistage cooperative framework with feedback that models the negotiation process between the involved actors. Finally, the applicability of the proposed approach to real case studies is discussed based on some performance indicators. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 252,153
1511.06523 | WIDER FACE: A Face Detection Benchmark | Face detection is one of the most studied topics in the computer vision community. Much of the progress has been made possible by the availability of face detection benchmark datasets. We show that there is a gap between current face detection performance and the real world requirements. To facilitate future face detection research, we introduce the WIDER FACE dataset, which is 10 times larger than existing datasets. The dataset contains rich annotations, including occlusions, poses, event categories, and face bounding boxes. Faces in the proposed dataset are extremely challenging due to large variations in scale, pose and occlusion, as shown in Fig. 1. Furthermore, we show that WIDER FACE dataset is an effective training source for face detection. We benchmark several representative detection systems, providing an overview of state-of-the-art performance and propose a solution to deal with large scale variation. Finally, we discuss common failure cases that are worth further investigation. Dataset can be downloaded at: mmlab.ie.cuhk.edu.hk/projects/WIDERFace | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 49,278
2210.14670 | Boosting Semi-Supervised Semantic Segmentation with Probabilistic Representations | Recent breakthroughs in semi-supervised semantic segmentation have been developed through contrastive learning. In prevalent pixel-wise contrastive learning solutions, the model maps pixels to deterministic representations and regularizes them in the latent space. However, there exist inaccurate pseudo-labels which map the ambiguous representations of pixels to the wrong classes due to the limited cognitive ability of the model. In this paper, we define pixel-wise representations from a new perspective of probability theory and propose a Probabilistic Representation Contrastive Learning (PRCL) framework that improves representation quality by taking its probability into consideration. Through modelling the mapping from pixels to representations as the probability via multivariate Gaussian distributions, we can tune the contribution of the ambiguous representations to tolerate the risk of inaccurate pseudo-labels. Furthermore, we define prototypes in the form of distributions, which indicates the confidence of a class, while the point prototype cannot. Moreover, we propose to regularize the distribution variance to enhance the reliability of representations. Taking advantage of these benefits, high-quality feature representations can be derived in the latent space, thereby the performance of semantic segmentation can be further improved. We conduct sufficient experiment to evaluate PRCL on Pascal VOC and CityScapes to demonstrate its superiority. The code is available at https://github.com/Haoyu-Xie/PRCL. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 326,642
2310.06470 | Focus on Local Regions for Query-based Object Detection | Query-based methods have garnered significant attention in object detection since the advent of DETR, the pioneering query-based detector. However, these methods face challenges like slow convergence and suboptimal performance. Notably, self-attention in object detection often hampers convergence due to its global focus. To address these issues, we propose FoLR, a transformer-like architecture with only decoders. We improve the self-attention by isolating connections between irrelevant objects that makes it focus on local regions but not global regions. We also design the adaptive sampling method to extract effective features based on queries' local regions from feature maps. Additionally, we employ a look-back strategy for decoders to retain previous information, followed by the Feature Mixer module to fuse features and queries. Experimental results demonstrate FoLR's state-of-the-art performance in query-based detectors, excelling in convergence speed and computational efficiency. Index Terms: Local regions, Attention mechanism, Object detection | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 398,594 |
2411.15111 | Learnable Activation Functions in Physics-Informed Neural Networks for Solving Partial Differential Equations | We investigate the use of learnable activation functions in Physics-Informed Neural Networks (PINNs) for solving Partial Differential Equations (PDEs). Specifically, we compare the efficacy of traditional Multilayer Perceptrons (MLPs) with fixed and learnable activations against Kolmogorov-Arnold Networks (KANs), which employ learnable basis functions. Physics-informed neural networks (PINNs) have emerged as an effective method for directly incorporating physical laws into the learning process, offering a data-efficient solution for both the forward and inverse problems associated with PDEs. However, challenges such as effective training and spectral bias, where low-frequency components are learned more effectively, often limit their applicability to problems characterized by rapid oscillations or sharp transitions. By employing different activation or basis functions on MLP and KAN, we assess their impact on convergence behavior and spectral bias mitigation, and the accurate approximation of PDEs. The findings offer insights into the design of neural network architectures that balance training efficiency, convergence speed, and test accuracy for PDE solvers. By evaluating the influence of activation or basis function choices, this work provides guidelines for developing more robust and accurate PINN models. The source code and pre-trained models used in this study are made publicly available to facilitate reproducibility and future exploration. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 510,443
1401.2468 | N2Sky - Neural Networks as Services in the Clouds | We present the N2Sky system, which provides a framework for the exchange of neural network specific knowledge, as neural network paradigms and objects, by a virtual organization environment. It follows the sky computing paradigm delivering ample resources by the usage of federated Clouds. N2Sky is a novel Cloud-based neural network simulation environment, which follows a pure service oriented approach. The system implements a transparent environment aiming to enable both novice and experienced users to do neural network research easily and comfortably. N2Sky is built using the RAVO reference architecture of virtual organizations which allows itself naturally integrating into the Cloud service stack (SaaS, PaaS, and IaaS) of service oriented architectures. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 29,743 |
2106.15258 | SRF-Net: Selective Receptive Field Network for Anchor-Free Temporal Action Detection | Temporal action detection (TAD) is a challenging task which aims to temporally localize and recognize the human action in untrimmed videos. Current mainstream one-stage TAD approaches localize and classify action proposals relying on pre-defined anchors, where the location and scale for action instances are set by designers. Obviously, such an anchor-based TAD method limits its generalization capability and will lead to performance degradation when videos contain rich action variation. In this study, we explore to remove the requirement of pre-defined anchors for TAD methods. A novel TAD model termed as Selective Receptive Field Network (SRF-Net) is developed, in which the location offsets and classification scores at each temporal location can be directly estimated in the feature map and SRF-Net is trained in an end-to-end manner. Innovatively, a building block called Selective Receptive Field Convolution (SRFC) is dedicatedly designed which is able to adaptively adjust its receptive field size according to multiple scales of input information at each temporal location in the feature map. Extensive experiments are conducted on the THUMOS14 dataset, and superior results are reported comparing to state-of-the-art TAD approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 243,673
2502.06678 | Quantile Multi-Armed Bandits with 1-bit Feedback | In this paper, we study a variant of best-arm identification involving elements of risk sensitivity and communication constraints. Specifically, the goal of the learner is to identify the arm with the highest quantile reward, while the communication from an agent (who observes rewards) and the learner (who chooses actions) is restricted to only one bit of feedback per arm pull. We propose an algorithm that utilizes noisy binary search as a subroutine, allowing the learner to estimate quantile rewards through 1-bit feedback. We derive an instance-dependent upper bound on the sample complexity of our algorithm and provide an algorithm-independent lower bound for specific instances, with the two matching to within logarithmic factors under mild conditions, or even to within constant factors in certain low error probability scaling regimes. The lower bound is applicable even in the absence of communication constraints, and thus we conclude that restricting to 1-bit feedback has a minimal impact on the scaling of the sample complexity. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 532,183 |
2005.04473 | Visually Impaired Aid using Convolutional Neural Networks, Transfer Learning, and Particle Competition and Cooperation | Navigation and mobility are some of the major problems faced by visually impaired people in their daily lives. Advances in computer vision led to the proposal of some navigation systems. However, most of them require expensive and/or heavy hardware. In this paper we propose the use of convolutional neural networks (CNN), transfer learning, and semi-supervised learning (SSL) to build a framework aimed at the visually impaired aid. It has low computational costs and, therefore, may be implemented on current smartphones, without relying on any additional equipment. The smartphone camera can be used to automatically take pictures of the path ahead. Then, they will be immediately classified, providing almost instantaneous feedback to the user. We also propose a dataset to train the classifiers, including indoor and outdoor situations with different types of light, floor, and obstacles. Many different CNN architectures are evaluated as feature extractors and classifiers, by fine-tuning weights pre-trained on a much larger dataset. The graph-based SSL method, known as particle competition and cooperation, is also used for classification, allowing feedback from the user to be incorporated without retraining the underlying network. 92\% and 80\% classification accuracy is achieved in the proposed dataset in the best supervised and SSL scenarios, respectively. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 176,476
2004.06231 | Einsum Networks: Fast and Scalable Learning of Tractable Probabilistic Circuits | Probabilistic circuits (PCs) are a promising avenue for probabilistic modeling, as they permit a wide range of exact and efficient inference routines. Recent ``deep-learning-style'' implementations of PCs strive for better scalability, but are still difficult to train on real-world data, due to their sparsely connected computational graphs. In this paper, we propose Einsum Networks (EiNets), a novel implementation design for PCs, improving prior art in several regards. At their core, EiNets combine a large number of arithmetic operations in a single monolithic einsum-operation, leading to speedups and memory savings of up to two orders of magnitude, in comparison to previous implementations. As an algorithmic contribution, we show that the implementation of Expectation-Maximization (EM) can be simplified for PCs, by leveraging automatic differentiation. Furthermore, we demonstrate that EiNets scale well to datasets which were previously out of reach, such as SVHN and CelebA, and that they can be used as faithful generative image models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 172,449
2502.01117 | Learning to Learn Weight Generation via Trajectory Diffusion | Diffusion-based algorithms have emerged as promising techniques for weight generation, particularly in scenarios like multi-task learning that require frequent weight updates. However, existing solutions suffer from limited cross-task transferability. In addition, they only utilize optimal weights as training samples, ignoring the value of other weights in the optimization process. To address these issues, we propose Lt-Di, which integrates the diffusion algorithm with meta-learning to generate weights for unseen tasks. Furthermore, we extend the vanilla diffusion algorithm into a trajectory diffusion algorithm to utilize other weights along the optimization trajectory. Trajectory diffusion decomposes the entire diffusion chain into multiple shorter ones, improving training and inference efficiency. We analyze the convergence properties of the weight generation paradigm and improve convergence efficiency without additional time overhead. Our experiments demonstrate Lt-Di's higher accuracy while reducing computational overhead across various tasks, including zero-shot and few-shot learning, multi-domain generalization, and large-scale language model fine-tuning. Our code is released at https://github.com/tuantuange/Lt-Di. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 529,705
2106.10438 | ML and MAP Device Activity Detections for Grant-Free Massive Access in Multi-Cell Networks | Device activity detection is one main challenge in grant-free massive access, which was recently proposed to support massive machine-type communications (mMTC). Existing solutions for device activity detection fail to consider inter-cell interference generated by massive IoT devices or important prior information on device activities and inter-cell interference. In this paper, given different numbers of observations and network parameters, we consider both non-cooperative device activity detection and cooperative device activity detection in a multi-cell network, consisting of many access points (APs) and IoT devices. Under each activity detection mechanism, we consider the joint maximum likelihood (ML) estimation and joint maximum a posteriori probability (MAP) estimation of both device activities and interference powers, utilizing tools from probability, stochastic geometry, and optimization. Each estimation problem is a challenging non-convex problem, and a coordinate descent algorithm is proposed to obtain a stationary point. Each proposed joint ML estimation extends the existing one for a single-cell network by considering the estimation of interference powers, together with the estimation of device activities. Each proposed joint MAP estimation further enhances the corresponding joint ML estimation by exploiting prior distributions of device activities and interference powers. The proposed joint ML estimation and joint MAP estimation under cooperative detection outperform the respective ones under non-cooperative detection at the costs of increasing backhaul burden, knowledge of network parameters, and computational complexities. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 242,017
1503.05937 | Controllability of spin-boson systems | In this paper we study the so-called spin-boson system, namely a two-level system in interaction with a distinguished mode of a quantized bosonic field. We give a brief description of the controlled Rabi and Jaynes--Cummings models and we discuss their appearance in the mathematics and physics literature. We then study the controllability of the Rabi model when the control is an external field acting on the bosonic part. Applying geometric control techniques to the Galerkin approximation and using perturbation theory to guarantee non-resonance of the spectrum of the drift operator, we prove approximate controllability of the system, for almost every value of the interaction parameter. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 41,297
2007.06786 | Meta-rPPG: Remote Heart Rate Estimation Using a Transductive Meta-Learner | Remote heart rate estimation is the measurement of heart rate without any physical contact with the subject and is accomplished using remote photoplethysmography (rPPG) in this work. rPPG signals are usually collected using a video camera with the limitation of being sensitive to multiple contributing factors, e.g. variation in skin tone, lighting condition and facial structure. An end-to-end supervised learning approach performs well when training data is abundant, covering a distribution that doesn't deviate too much from the distribution of testing data or during deployment. To cope with the unforeseeable distributional changes during deployment, we propose a transductive meta-learner that takes unlabeled samples during testing (deployment) for a self-supervised weight adjustment (also known as transductive inference), providing fast adaptation to the distributional changes. Using this approach, we achieve state-of-the-art performance on MAHNOB-HCI and UBFC-rPPG. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 187,125
1607.04765 | Design and implementation of audio communication system for social-humanoid robot Lumen as an exhibition guide in Electrical Engineering Days 2015 | Social Robot Lumen is a humanoid robot created to act like a human and to be a friend to humans. In this study, the Lumen scenario is limited to Lumen acting as an exhibition guide at Electrical Engineering Days 2015, a seminar and exhibition of electrical engineering undergraduate and graduate students of Bandung Institute of Technology. To be an exhibition guide, Lumen is equipped with a Nao robot, a server, and processing applications. The audio communication system is one of the processing applications. The purpose of the system is to create verbal communication that allows Lumen to receive human voice and respond naturally to it. To be able to communicate like a human, the audio communication system is built with a speech recognition module to transform speech data into text, a speech synthesizer module to transform text data into speech, and a gender identification module to distinguish adult female and male voices. The speech recognition module is implemented using the Google Speech Recognition API, the speech synthesizer module is implemented using the Acapela engine, and the gender identification module is implemented by utilizing speech signal features extracted using the Fast Fourier Transform algorithm. The hardware used for implementation comprises a Nao robot, a computer, and a wireless modem. ----- Lumen is a social humanoid robot created to behave like a human and to be a friend to humans. The audio communication system is one of the processing applications; its purpose is to let Lumen receive human speech and respond to it naturally, the way a human responds to another human. To communicate like a human, the audio communication system is equipped with three modules: speech recognition to convert speech data into text, a speech synthesizer to convert text data into speech, and gender identification to distinguish female and male voices. | true | false | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 58,658
2004.08227 | MPLP++: Fast, Parallel Dual Block-Coordinate Ascent for Dense Graphical Models | Dense, discrete Graphical Models with pairwise potentials are a powerful class of models which are employed in state-of-the-art computer vision and bio-imaging applications. This work introduces a new MAP-solver, based on the popular Dual Block-Coordinate Ascent principle. Surprisingly, by making a small change to the low-performing solver, the Max Product Linear Programming (MPLP) algorithm, we derive the new solver MPLP++ that significantly outperforms all existing solvers by a large margin, including the state-of-the-art solver Tree-Reweighted Sequential (TRWS) message-passing algorithm. Additionally, our solver is highly parallel, in contrast to TRWS, which gives a further boost in performance with the proposed GPU and multi-thread CPU implementations. We verify the superiority of our algorithm on dense problems from publicly available benchmarks, as well as on a new benchmark for 6D Object Pose estimation. We also provide an ablation study with respect to graph density. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 173,009
2307.12672 | Global k-Space Interpolation for Dynamic MRI Reconstruction using Masked Image Modeling | In dynamic Magnetic Resonance Imaging (MRI), k-space is typically undersampled due to limited scan time, resulting in aliasing artifacts in the image domain. Hence, dynamic MR reconstruction requires not only modeling spatial frequency components in the x and y directions of k-space but also considering temporal redundancy. Most previous works rely on image-domain regularizers (priors) to conduct MR reconstruction. In contrast, we focus on interpolating the undersampled k-space before obtaining images with the Fourier transform. In this work, we connect masked image modeling with k-space interpolation and propose a novel Transformer-based k-space Global Interpolation Network, termed k-GIN. Our k-GIN learns global dependencies among low- and high-frequency components of 2D+t k-space and uses them to interpolate unsampled data. Further, we propose a novel k-space Iterative Refinement Module (k-IRM) to enhance learning of the high-frequency components. We evaluate our approach on 92 in-house 2D+t cardiac MR subjects and compare it to MR reconstruction methods with image-domain regularizers. Experiments show that our proposed k-space interpolation method quantitatively and qualitatively outperforms baseline methods. Importantly, the proposed approach achieves substantially higher robustness and generalizability in cases of highly-undersampled MR data. For video presentation, poster, GIF results and code please check our project page: https://jzpeterpan.github.io/k-gin.github.io/. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 381,342
2001.04139 | Représentations lexicales pour la détection non supervisée d'événements dans un flux de tweets : étude sur des corpus français et anglais | In this work, we evaluate the performance of recent text embeddings for the automatic detection of events in a stream of tweets. We model this task as a dynamic clustering problem. Our experiments are conducted on a publicly available corpus of tweets in English and on a similar dataset in French annotated by our team. We show that recent techniques based on deep neural networks (ELMo, Universal Sentence Encoder, BERT, SBERT), although promising on many applications, are not very suitable for this task. We also experiment with different types of fine-tuning to improve these results on French data. Finally, we propose a detailed analysis of the results obtained, showing the superiority of tf-idf approaches for this task. | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 160,163
1305.0395 | Tensor Decompositions: A New Concept in Brain Data Analysis? | Matrix factorizations and their extensions to tensor factorizations and decompositions have become prominent techniques for linear and multilinear blind source separation (BSS), especially multiway Independent Component Analysis (ICA), Nonnegative Matrix and Tensor Factorization (NMF/NTF), Smooth Component Analysis (SmoCA) and Sparse Component Analysis (SCA). Moreover, tensor decompositions have many other potential applications beyond multilinear BSS, especially feature extraction, classification, dimensionality reduction and multiway clustering. In this paper, we briefly overview new and emerging models and approaches for tensor decompositions in applications to group and linked multiway BSS/ICA, feature extraction, classification and Multiway Partial Least Squares (MPLS) regression problems. Keywords: Multilinear BSS, linked multiway BSS/ICA, tensor factorizations and decompositions, constrained Tucker and CP models, Penalized Tensor Decompositions (PTD), feature extraction, classification, multiway PLS and CCA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 24,345
1212.4522 | A Multi-View Embedding Space for Modeling Internet Images, Tags, and their Semantics | This paper investigates the problem of modeling Internet images and associated text or tags for tasks such as image-to-image search, tag-to-image search, and image-to-tag search (image annotation). We start with canonical correlation analysis (CCA), a popular and successful approach for mapping visual and textual features to the same latent space, and incorporate a third view capturing high-level image semantics, represented either by a single category or multiple non-mutually-exclusive concepts. We present two ways to train the three-view embedding: supervised, with the third view coming from ground-truth labels or search keywords; and unsupervised, with semantic themes automatically obtained by clustering the tags. To ensure high accuracy for retrieval tasks while keeping the learning process scalable, we combine multiple strong visual features and use explicit nonlinear kernel mappings to efficiently approximate kernel CCA. To perform retrieval, we use a specially designed similarity function in the embedded space, which substantially outperforms the Euclidean distance. The resulting system produces compelling qualitative results and outperforms a number of two-view baselines on retrieval tasks on three large-scale Internet image datasets. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | true | 20,468
2212.10228 | Automated Configuration and Usage of Strategy Portfolios for Bargaining | Bargaining can be used to resolve mixed-motive games in multi-agent systems. Although there is an abundance of negotiation strategies implemented in automated negotiating agents, most agents are based on single fixed strategies, while it is widely acknowledged that there is no single best-performing strategy for all negotiation settings. In this paper, we focus on bargaining settings where opponents are repeatedly encountered, but the bargaining problems change. We introduce a novel method that automatically creates and deploys a portfolio of complementary negotiation strategies using a training set and optimise pay-off in never-before-seen bargaining settings through per-setting strategy selection. Our method relies on the following contributions. We introduce a feature representation that captures characteristics for both the opponent and the bargaining problem. We model the behaviour of an opponent during a negotiation based on its actions, which is indicative of its negotiation strategy, in order to be more effective in future encounters. Our combination of feature-based methods generalises to new negotiation settings, as in practice, over time, it selects effective counter strategies in future encounters. Our approach is tested in an ANAC-like tournament, and we show that we are capable of winning such a tournament with a 5.6% increase in pay-off compared to the runner-up agent. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 337,379 |
2408.00513 | VecAug: Unveiling Camouflaged Frauds with Cohort Augmentation for Enhanced Detection | Fraud detection presents a challenging task characterized by ever-evolving fraud patterns and scarce labeled data. Existing methods predominantly rely on graph-based or sequence-based approaches. While graph-based approaches connect users through shared entities to capture structural information, they remain vulnerable to fraudsters who can disrupt or manipulate these connections. In contrast, sequence-based approaches analyze users' behavioral patterns, offering robustness against tampering but overlooking the interactions between similar users. Inspired by cohort analysis in retention and healthcare, this paper introduces VecAug, a novel cohort-augmented learning framework that addresses these challenges by enhancing the representation learning of target users with personalized cohort information. To this end, we first propose a vector burn-in technique for automatic cohort identification, which retrieves a task-specific cohort for each target user. Then, to fully exploit the cohort information, we introduce an attentive cohort aggregation technique for augmenting target user representations. To improve the robustness of such cohort augmentation, we also propose a novel label-aware cohort neighbor separation mechanism to distance negative cohort neighbors and calibrate the aggregated cohort information. By integrating this cohort information with target user representations, VecAug enhances the modeling capacity and generalization capabilities of the model to be augmented. Our framework is flexible and can be seamlessly integrated with existing fraud detection models. We deploy our framework on e-commerce platforms and evaluate it on three fraud detection datasets, and results show that VecAug improves the detection performance of base models by up to 2.48\% in AUC and 22.5\% in R@P$_{0.9}$, outperforming state-of-the-art methods significantly. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 477,869
2311.01956 | Architecture of Smart Certificates for Web3 Applications Against Cyberthreats in Financial Industry | This study addresses the security challenges associated with the current internet transformations, specifically focusing on emerging technologies such as blockchain and decentralized storage. It also investigates the role of Web3 applications in shaping the future of the internet. The primary objective is to propose a novel design for 'smart certificates,' which are digital certificates that can be programmatically enforced. Utilizing such certificates, an enterprise can better protect itself from cyberattacks and ensure the security of its data and systems. Recent Web3 security solutions from companies and projects like Certik, Forta, Slither, and Securify are the equivalent of code-scanning tools originally developed for Web1 and Web2 applications, and are certainly not certificates that help enterprises feel safe against cyberthreats. We aim to improve the resilience of enterprises' digital infrastructure by building on top of Web3 applications and putting methodologies in place for vulnerability analysis and attack correlation, focusing on the architecture of different layers (Wallet/Client, Application, and Smart Contract), where specific components are provided to identify and predict threats and risks. Furthermore, Certificate Transparency is used for enhancing the security, trustworthiness and decentralized management of the certificates, and detecting misuses, compromises, and malfeasances. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 405,242
2404.05692 | Evaluating Mathematical Reasoning Beyond Accuracy | The leaderboard of Large Language Models (LLMs) in mathematical tasks has been continuously updated. However, the majority of evaluations focus solely on the final results, neglecting the quality of the intermediate steps. This oversight can mask underlying problems, such as logical errors or unnecessary steps in the reasoning process. To measure reasoning beyond final-answer accuracy, we introduce ReasonEval, a new methodology for evaluating the quality of reasoning steps. ReasonEval employs validity and redundancy to characterize the reasoning quality, as well as accompanying LLMs to assess them automatically. We explore different design options for the LLM-based evaluators and empirically demonstrate that ReasonEval, when instantiated with base models possessing strong mathematical knowledge and trained with high-quality labeled data, consistently outperforms baseline methods in the meta-evaluation datasets. We also highlight the strong generalization capabilities of ReasonEval. By utilizing ReasonEval to evaluate LLMs specialized in math, we find that an increase in final-answer accuracy does not necessarily guarantee an improvement in the overall quality of the reasoning steps for challenging mathematical problems. Additionally, we observe that ReasonEval can play a significant role in data selection. We open-source the best-performing model, meta-evaluation script, and all evaluation results to facilitate future research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 445,176 |
2203.12668 | Pseudo Label Is Better Than Human Label | State-of-the-art automatic speech recognition (ASR) systems are trained with tens of thousands of hours of labeled speech data. Human transcription is expensive and time consuming. Factors such as the quality and consistency of the transcription can greatly affect the performance of the ASR models trained with these data. In this paper, we show that we can train a strong teacher model to produce high quality pseudo labels by utilizing recent self-supervised and semi-supervised learning techniques. Specifically, we use JUST (Joint Unsupervised/Supervised Training) and iterative noisy student teacher training to train a 600 million parameter bi-directional teacher model. This model achieved 4.0% word error rate (WER) on a voice search task, 11.1% relatively better than a baseline. We further show that by using this strong teacher model to generate high-quality pseudo labels for training, we can achieve 13.6% relative WER reduction (5.9% to 5.1%) for a streaming model compared to using human labels. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 287,350 |
2407.11216 | Finding Meaning in Points: Weakly Supervised Semantic Segmentation for Event Cameras | Event cameras excel in capturing high-contrast scenes and dynamic objects, offering a significant advantage over traditional frame-based cameras. Despite active research into leveraging event cameras for semantic segmentation, generating pixel-wise dense semantic maps for such challenging scenarios remains labor-intensive. As a remedy, we present EV-WSSS: a novel weakly supervised approach for event-based semantic segmentation that utilizes sparse point annotations. To fully leverage the temporal characteristics of event data, the proposed framework performs asymmetric dual-student learning between 1) the original forward event data and 2) the longer reversed event data, which contain complementary information from the past and the future, respectively. Besides, to mitigate the challenges posed by sparse supervision, we propose feature-level contrastive learning based on class-wise prototypes, carefully aggregated at both spatial region and sample levels. Additionally, we further excavate the potential of our dual-student learning model by exchanging prototypes between the two learning paths, thereby harnessing their complementary strengths. With extensive experiments on various datasets, including DSEC Night-Point with sparse point annotations newly provided by this paper, the proposed method achieves substantial segmentation results even without relying on pixel-level dense ground truths. The code and dataset are available at https://github.com/Chohoonhee/EV-WSSS. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,348
2012.08850 | Consistency of Distributionally Robust Risk- and Chance-Constrained Optimization under Wasserstein Ambiguity Sets | We study stochastic optimization problems with chance and risk constraints, where in the latter, risk is quantified in terms of the conditional value-at-risk (CVaR). We consider the distributionally robust versions of these problems, where the constraints are required to hold for a family of distributions constructed from the observed realizations of the uncertainty via the Wasserstein distance. Our main results establish that if the samples are drawn independently from an underlying distribution and the problems satisfy suitable technical assumptions, then the optimal value and optimizers of the distributionally robust versions of these problems converge to the respective quantities of the original problems, as the sample size increases. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 211,887
1711.05356 | An Efficient Construction of Rate-Compatible Punctured Polar (RCPP) Codes Using Hierarchical Puncturing | In this paper, we present an efficient method to construct a good rate-compatible punctured polar (RCPP) code. One of the major challenges in the construction of a RCPP code is to design a common information set which is good for all the codes in the family. In the proposed construction, a common information set is simply optimized for the highest-rate punctured polar code in the family and then, this set is updated for each other code by satisfying the condition that information bits are unchanged during retransmissions. This is enabled by presenting a novel hierarchical puncturing and information-copy technique. To be specific, some information bits are copied to frozen-bit channels, which yields an information-dependent frozen vector. Then, the updated information sets are obtained by appropriately combining the common information set and an information-dependent frozen vector. Moreover, the impact of unknown frozen bits is resolved using the proposed hierarchical puncturing. Simulation results demonstrate that the proposed RCPP code attains a significant performance gain (about 2dB) over a benchmark RCPP code where both codes use the same puncturing patterns but the latter uses the conventional all-zero frozen vector. Therefore, the proposed method would be crucial to construct a good RCPP code. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 84,545
2012.13168 | Tunnel Facility-based Vehicle Localization in Highway Tunnel using 3D LIDAR | Vehicle localization in highway tunnels is a challenging issue for autonomous vehicle navigation. Since GPS signals from satellites cannot be received inside a highway tunnel, map-aided localization is essential. However, the environment around the tunnel is composed mostly of an elliptical wall, so unique feature points for map matching are few, unlike outdoors. As a result, vehicle navigation in the tunnel is very difficult with existing map-aided localization. In this paper, we propose tunnel facility-based precise vehicle localization in highway tunnels using 3D LIDAR. For vehicle localization in a highway tunnel, a point landmark map that stores the center points of tunnel facilities and a probability distribution map that stores the probability distributions of the lane markings are used. Point landmark-based localization is possible regardless of the number of feature points, provided that representative points of an object can be extracted. Therefore, it is a suitable localization method for highway tunnels where the feature points are few. The tunnel facility points were extracted using 3D LIDAR. Position estimation is conducted using an EKF-based navigation filter. The proposed localization algorithm is verified through experiments using actual highway driving data. The experimental results verify that the tunnel facility-based vehicle localization yields precise results in real time. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 213,136
2007.07993 | Bitcoin Transaction Forecasting with Deep Network Representation Learning | Bitcoin and its decentralized computing paradigm for digital currency trading are one of the most disruptive technologies in the 21st century. This paper presents a novel approach to developing a Bitcoin transaction forecast model, DLForecast, by leveraging deep neural networks for learning Bitcoin transaction network representations. DLForecast makes three original contributions. First, we explore three interesting properties between Bitcoin transaction accounts: topological connectivity pattern of Bitcoin accounts, transaction amount pattern, and transaction dynamics. Second, we construct a time-decaying reachability graph and a time-decaying transaction pattern graph, aiming at capturing different types of spatial-temporal Bitcoin transaction patterns. Third, we employ node embedding on both graphs and develop a Bitcoin transaction forecasting system between user accounts based on historical transactions with built-in time-decaying factor. To maintain an effective transaction forecasting performance, we leverage the multiplicative model update (MMU) ensemble to combine prediction models built on different transaction features extracted from each corresponding Bitcoin transaction graph. Evaluated on real-world Bitcoin transaction data, we show that our spatial-temporal forecasting model is efficient with fast runtime and effective with forecasting accuracy over 60\% and improves the prediction performance by 50\% when compared to forecasting model built on the static graph baseline. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 187,483
2002.00580 | Super-resolution of multispectral satellite images using convolutional
neural networks | Super-resolution aims at increasing image resolution by algorithmic means and has progressed over the recent years due to advances in the fields of computer vision and deep learning. Convolutional Neural Networks based on a variety of architectures have been applied to the problem, e.g. autoencoders and residual networks. While most research focuses on the processing of photographs consisting only of RGB color channels, little work can be found concentrating on multi-band, analytic satellite imagery. Satellite images often include a panchromatic band, which has higher spatial resolution but lower spectral resolution than the other bands. In the field of remote sensing, there is a long tradition of applying pan-sharpening to satellite images, i.e. bringing the multispectral bands to the higher spatial resolution by merging them with the panchromatic band. To our knowledge there are so far no approaches to super-resolution which take advantage of the panchromatic band. In this paper we propose a method to train state-of-the-art CNNs using pairs of lower-resolution multispectral and high-resolution pan-sharpened image tiles in order to create super-resolved analytic images. The derived quality metrics show that the method improves information content of the processed images. We compare the results created by four CNN architectures, with RedNet30 performing best. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 162,401 |
2305.12936 | Entropy bounds for invariant measure perturbations in stochastic systems
with uncertain noise | This paper is concerned with stochastic systems whose state is a diffusion process governed by an Ito stochastic differential equation (SDE). In the framework of a nominal white-noise model, the SDE is driven by a standard Wiener process. For a scenario of statistical uncertainty, where the driving noise acquires a state-dependent drift and thus deviates from its idealised model, we consider the perturbation of the invariant probability density function (PDF) as a steady-state solution of the Fokker-Planck-Kolmogorov equation. We discuss an upper bound on a logarithmic Dirichlet form for the ratio of the invariant PDF to its nominal counterpart in terms of the Kullback-Leibler relative entropy rate of the actual noise distribution with respect to the Wiener measure. This bound is shown to be achievable, provided the PDF ratio is preserved by the nominal steady-state probability flux. The logarithmic Dirichlet form bound is used in order to obtain an upper bound on the relative entropy of the perturbed invariant PDF in terms of quadratic-exponential moments of the noise drift in the uniform ellipticity case. These results are illustrated for perturbations of Gaussian invariant measures in linear stochastic systems involving linear noise drifts. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 366,261
2209.07272 | Socially Enhanced Situation Awareness from Microblogs using Artificial
Intelligence: A Survey | The rise of social media platforms provides an unbounded, infinitely rich source of aggregate knowledge of the world around us, both historic and real-time, from a human perspective. The greatest challenge we face is how to process and understand this raw and unstructured data, go beyond individual observations and see the "big picture"--the domain of Situation Awareness. We provide an extensive survey of Artificial Intelligence research, focusing on microblog social media data with applications to Situation Awareness, that gives the seminal work and state-of-the-art approaches across six thematic areas: Crime, Disasters, Finance, Physical Environment, Politics, and Health and Population. We provide a novel, unified methodological perspective, identify key results and challenges, and present ongoing research directions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 317,687 |
2106.15017 | Early Mobility Recognition for Intensive Care Unit Patients Using
Accelerometers | With the development of the Internet of Things (IoT) and Artificial Intelligence (AI) technologies, human activity recognition has enabled various applications, such as smart homes and assisted living. In this paper, we target a new healthcare application of human activity recognition, early mobility recognition for Intensive Care Unit (ICU) patients. Early mobility is essential for ICU patients who suffer from long-time immobilization. Our system includes accelerometer-based data collection from ICU patients and an AI model to recognize patients' early mobility. To improve the model accuracy and stability, we identify features that are insensitive to sensor orientations and propose a segment voting process that leverages a majority voting strategy to recognize each segment's activity. Our results show that our system improves model accuracy from 77.78\% to 81.86\% and reduces the model instability (standard deviation) from 16.69\% to 6.92\%, compared to the same AI model without our feature engineering and segment voting process. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | 243,584
2110.07744 | Constrained Covariance Steering Based Tube-MPPI | In this paper, we present a new trajectory optimization algorithm for stochastic linear systems which combines Model Predictive Path Integral (MPPI) control with Constrained Covariance Steering (CCS) to achieve high performance with safety guarantees (robustness). Although MPPI can be used to solve complex nonlinear trajectory optimization problems, it may not always handle constraints effectively and its performance may degrade in the presence of unmodeled disturbances. By contrast, CCS can handle probabilistic state and/or input constraints (e.g., chance constraints) and also steer the state covariance of the system to a desired positive definite matrix (control of uncertainty), which both imply that CCS can provide robustness against stochastic disturbances. CCS, however, suffers from scalability issues and cannot handle complex cost functions in general. We argue that the combination of the two methods yields a class of trajectory optimization algorithms that can achieve high performance (a feature of MPPI) while ensuring safety with high probability (a feature of CCS). The efficacy of our algorithm is demonstrated in an obstacle avoidance problem and a circular track path generation problem. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 261,110
1808.01405 | Teacher Guided Architecture Search | Much of the recent improvement in neural networks for computer vision has resulted from discovery of new networks architectures. Most prior work has used the performance of candidate models following limited training to automatically guide the search in a feasible way. Could further gains in computational efficiency be achieved by guiding the search via measurements of a high performing network with unknown detailed architecture (e.g. the primate visual system)? As one step toward this goal, we use representational similarity analysis to evaluate the similarity of internal activations of candidate networks with those of a (fixed, high performing) teacher network. We show that adopting this evaluation metric could produce up to an order of magnitude in search efficiency over performance-guided methods. Our approach finds a convolutional cell structure with similar performance as was previously found using other methods but at a total computational cost that is two orders of magnitude lower than Neural Architecture Search (NAS) and more than four times lower than progressive neural architecture search (PNAS). We further show that measurements from only ~300 neurons from primate visual system provides enough signal to find a network with an Imagenet top-1 error that is significantly lower than that achieved by performance-guided architecture search alone. These results suggest that representational matching can be used to accelerate network architecture search in cases where one has access to some or all of the internal representations of a teacher network of interest, such as the brain's sensory processing networks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,558 |
2302.12368 | Power System Recovery Coordinated with (Non-)Black-Start Generators | Power restoration is an urgent task after a black-out, and recovery efficiency is critical when quantifying system resilience. Multiple elements should be considered to restore the power system quickly and safely. This paper proposes a recovery model to solve a direct-current optimal power flow (DCOPF) based on mixed-integer linear programming (MILP). Since most of the generators cannot start independently, the interaction between black-start (BS) and non-black-start (NBS) generators must be modeled appropriately. The energization status of the NBS is coordinated with the recovery status of transmission lines, and both of them are modeled as binary variables. Also, only after an NBS unit receives cranking power through connected transmission lines, will it be allowed to participate in the following system dispatch. The amount of cranking power is estimated as a fixed proportion of the maximum generation capacity. The proposed model is validated on several test systems, as well as a 1393-bus representation system of the Puerto Rican electric power grid. Test results demonstrate how the recovery of NBS units and damaged transmission lines can be optimized, resulting in an efficient and well-coordinated recovery procedure. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 347,535 |
2008.08068 | Energy-Optimal Control of a Submarine-Launched Cruise Missile | A typical mission profile of submarine-launched cruise missiles begins with the launch phase which covers the motion of the missile from the launch to the water-exit and continues with the boost phase which lasts from the water-exit to the beginning of the cruise phase. In order to achieve the desired range of the launch and boost phases, efficient utilization of available energy which carries the missile to the beginning of the cruise phase is necessary. For this purpose, this study presents a new approach for energy-optimal control of the underwater and air motion of a submarine-launched cruise missile. In this approach, the aforementioned problem is modeled and solved as a minimum-effort optimal control problem. Then, the effects of initial and final conditions on energy need are investigated, and the optimal conditions that result with the minimum energy need are determined. Prior to the guidance and control design steps, six degrees of freedom (6 DOF) motion equations are derived and the hydrodynamic and aerodynamic parameters are retrieved. The nonlinear 6 DOF motion model is simplified and linearized before minimum-effort optimal control design part. Results of the designed guidance and control strategies are presented through the nonlinear 6 DOF simulations. Finally, some comments are made and future studies are mentioned based on theoretical and simulation studies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 192,311 |
2312.02690 | A Comprehensive Study on Modelling and Control of Autonomous Underwater
Vehicle | Autonomous underwater vehicles (AUV) have become the de facto vehicle for remote operations involving oceanography, inspection, and monitoring tasks. These vehicles operate in different and often challenging environments; hence, the design and development of the AUV involving hydrodynamics and control systems need to be designed in detail. This book chapter presents a study on the modelling and robust control of a research vehicle in the presence of uncertainties. The vehicle's dynamic behaviour is modelled using a 6-degree-of-freedom approach, considering the effect of ocean currents. The level flight requirements for different speeds are derived, and the resulting model is decomposed into horizontal and vertical subsystems for linear analysis. The simulation results presented focus on the efficacy of linear controllers within three key subsystems: depth, yaw, and speed. Moreover, level-flight outcomes are demonstrated for a speed of 4 knots. The nonlinear control strategies employed in this study encompass conventional and sliding-mode control (SMC) methodologies. To ensure accurate tracking performance, the controller design considers the vehicle's dynamics with various uncertainties such as ocean currents, parameter uncertainty, CG (Center of Gravity) deviation and buoyancy variation. Both conventional and nonlinear SMC controllers' outcomes are showcased with a lawn-mowing manoeuvre scenario. A systematic comparison is drawn between the robustness of SMC against disturbances and parameter fluctuations in contrast to conventional controllers. Importantly, these results underscore the trade-off that accompanies SMC's robustness, as it necessitates a higher level of complexity in terms of controller design, intricate implementation intricacies, and the management of chattering phenomena. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 412,974 |
2104.11952 | Supervised Anomaly Detection via Conditional Generative Adversarial
Network and Ensemble Active Learning | Anomaly detection has wide applications in machine intelligence but is still a difficult unsolved problem. Major challenges include the rarity of labeled anomalies and the highly imbalanced class distribution. Traditional unsupervised anomaly detectors are suboptimal while supervised models can easily make biased predictions towards normal data. In this paper, we present a new supervised anomaly detector through introducing the novel Ensemble Active Learning Generative Adversarial Network (EAL-GAN). EAL-GAN is a conditional GAN having a unique one generator vs. multiple discriminators architecture where anomaly detection is implemented by an auxiliary classifier of the discriminator. In addition to using the conditional GAN to generate class-balanced supplementary training data, an innovative ensemble learning loss function ensuring each discriminator makes up for the deficiencies of the others is designed to overcome the class imbalance problem, and an active learning algorithm is introduced to significantly reduce the cost of labeling real-world data. We present extensive experimental results to demonstrate that the new anomaly detector consistently outperforms a variety of SOTA methods by significant margins. The codes are available on Github. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 232,071
2502.00780 | Constructing Fundamentals for the Theory of Proportions and Symbolic
Allusions Applied Interdisciplinarily | The Theory of Proportions and Symbolic Allusions applied Interdisciplinary (TPASAI) is a framework that integrates mathematics, linguistics, psychology, and game theory to uncover hidden patterns and proportions in reality. Its central idea is that numerical encoding of symbols, dates, and language can reveal recurring structures and connections that reflect universal principles. By applying fractal analysis, the theory identifies patterns across different scales, offering a unifying perspective on the structure of the world. One key aspect of TPASAI is symbolic analysis, which allows for the reinterpretation of traumatic experiences in psychotherapy. For example, assigning numerical values to elements like fingers, dates, or words can help individuals uncover meaningful associations between personal experiences and collective symbols. This approach encourages cognitive flexibility and provides a therapeutic avenue for recontextualizing emotions. The theory also incorporates principles of game theory, which frame reality as a system of symbolic "codes" governed by rules that can be understood and strategically used. This perspective is especially useful for psychological conditions like obsessive-compulsive disorder (OCD), enabling patients to approach their obsessions as decipherable patterns rather than rigid constraints. TPASAI has practical applications in psychology, education, and technology. In education, it aids in teaching mathematical and linguistic concepts by exploring connections between symbolic representations and real-world events. In technology, the methodology can be employed in ciphering and natural language processing. The innovation of TPASAI lies in its ability to merge the structured rigor of mathematics with the interpretative flexibility of symbolic analysis, offering a deeper understanding of events and relationships. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 529,544