Schema (column name: type, value range):

id: string, 9 to 16 chars
title: string, 4 to 278 chars
abstract: string, 3 to 4.08k chars
cs.HC, cs.SD, cs.CE, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each (binary category labels, one column per arXiv category)
__index_level_0__: int64, 0 to 541k
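The schema above describes a multi-label arXiv subject-classification dataset: each record carries one boolean column per category, so a record's label set is simply the set of boolean columns that are true. A minimal sketch of collapsing those columns into a label list (the plain row-dict shape here is an assumption for illustration, not any particular dataset library's API):

```python
# Boolean label columns, in the order listed in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def labels_of(row: dict) -> list:
    """Collapse the boolean one-hot columns of a record into a label list."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# First record shown below: only cs.CV is set.
row = {"id": "2212.14143", "cs.CV": True}
print(labels_of(row))  # → ['cs.CV']
```

The returned list preserves the schema's column order, so label lists from different records compare consistently.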
2212.14143
Multimodal Wildland Fire Smoke Detection
Research has shown that climate change creates warmer temperatures and drier conditions, leading to longer wildfire seasons and increased wildfire risks in the United States. These factors have in turn led to increases in the frequency, extent, and severity of wildfires in recent years. Given the danger posed by wildland fires to people, property, wildlife, and the environment, there is an urgency to provide tools for effective wildfire management. Early detection of wildfires is essential to minimizing potentially catastrophic destruction. In this paper, we present our work on integrating multiple data sources in SmokeyNet, a deep learning model using spatio-temporal information to detect smoke from wildland fires. Camera image data is integrated with weather sensor measurements and processed by SmokeyNet to create a multimodal wildland fire smoke detection system. We present our results comparing performance in terms of both accuracy and time-to-detection for multimodal data vs. a single data source. With a time-to-detection of only a few minutes, SmokeyNet can serve as an automated early notification system, providing a useful tool in the fight against destructive wildfires.
labels: cs.CV
__index_level_0__: 338,518
2412.13104
Intermediate Relation Size Bounds for Select-Project-Join Query Plans: Asymptotically Tight Characterizations
We study the problem of statically optimizing select-project-join (SPJ) plans where unary key constraints are allowed. A natural measure of a plan, which we call the output degree and which has been studied previously, is the minimum degree of a polynomial bounding the plan's output relation, as a function of the input database's maximum relation size. This measure is, by definition, invariant under passing from a plan to another plan that is semantically equivalent to the first. In this article, we consider a plan measure which we call the intermediate degree; this measure is defined to be the minimum degree bounding the size of all intermediate relations computed during a plan's execution -- again, as a function of the input database's maximum relation size. We present an algorithm that, given an SPJ plan $q$ and a set $\Sigma$ of unary keys, computes an SPJ plan $q'$ that is semantically equivalent to $q$ (over databases satisfying $\Sigma$) and that has the minimum intermediate degree over all such semantically equivalent plans. For the types of plans considered, we thus obtain a complete and effective understanding of intermediate degree.
labels: cs.DB
__index_level_0__: 518,163
2005.03221
Deep Learning Framework for Detecting Ground Deformation in the Built Environment using Satellite InSAR data
The large volumes of Sentinel-1 data produced over Europe are being used to develop pan-national ground motion services. However, simple analysis techniques like thresholding cannot detect and classify complex deformation signals reliably, which makes it challenging to provide usable information to a broad range of non-expert stakeholders. Here we explore the applicability of deep learning approaches by adapting a pre-trained convolutional neural network (CNN) to detect deformation in a national-scale velocity field. For our proof-of-concept, we focus on the UK, where previously identified deformation is associated with coal mining, groundwater withdrawal, landslides and tunnelling. The sparsity of measurement points and the presence of spike noise make this a challenging application for deep learning networks, which involve calculations of the spatial convolution between images. Moreover, insufficient ground truth data exists to construct a balanced training data set, and the deformation signals are slower and more localised than in previous applications. We propose three enhancement methods to tackle these problems: i) spatial interpolation with modified matrix completion, ii) a synthetic training dataset based on the characteristics of the real UK velocity map, and iii) enhanced over-wrapping techniques. Using velocity maps spanning 2015-2019, our framework detects several areas of coal mining subsidence, uplift due to dewatering, slate quarries, landslides and tunnel engineering works. The results demonstrate the potential applicability of the proposed framework to the development of automated ground motion analysis systems.
labels: cs.CV
__index_level_0__: 176,092
2404.01537
Are Doppler Velocity Measurements Useful for Spinning Radar Odometry?
Spinning, frequency-modulated continuous-wave (FMCW) radars with 360 degree coverage have been gaining popularity for autonomous-vehicle navigation. However, unlike `fixed' automotive radar, commercially available spinning radar systems typically do not produce radial velocities due to the lack of repeated measurements in the same direction and the fundamental hardware setup. To make these radial velocities observable, we modified the firmware of a commercial spinning radar to use triangular frequency modulation. In this paper, we develop a novel way to use this modulation to extract radial Doppler velocity measurements from consecutive azimuths of a radar intensity scan, without any data association. We show that these noisy, error-prone measurements contain enough information to provide good ego-velocity estimates, and incorporate these estimates into different modern odometry pipelines. We extensively evaluate the pipelines on over 110 km of driving data in progressively more geometrically challenging autonomous-driving environments. We show that Doppler velocity measurements improve odometry in well-defined geometric conditions and enable it to continue functioning even in severely geometrically degenerate environments, such as long tunnels.
labels: cs.RO
__index_level_0__: 443,457
2501.07154
Privacy-Preserving Data Quality Assessment for Time-Series IoT Sensors
Data from Internet of Things (IoT) sensors has emerged as a key contributor to decision-making processes in various domains. However, the quality of the data is crucial to the effectiveness of applications built on it, and assessment of the data quality is heavily context-dependent. Further, preserving the privacy of the data during quality assessment is critical in domains where sensitive data is prevalent. This paper proposes a novel framework for automated, objective, and privacy-preserving data quality assessment of time-series data from IoT sensors deployed in smart cities. We leverage custom, autonomously computable metrics that parameterise the temporal performance and adherence to a declarative schema document to achieve objectivity. Additionally, we utilise a trusted execution environment to create a "data-blind" model that ensures individual privacy, eliminates assessee bias, and enhances adaptability across data types. This paper describes this data quality assessment methodology for IoT sensors, emphasising its relevance within the smart-city context while addressing the growing need for privacy in the face of extensive data collection practices.
labels: cs.IT
__index_level_0__: 524,284
1607.02737
Transition Forests: Learning Discriminative Temporal Transitions for Action Recognition and Detection
A human action can be seen as transitions between one's body poses over time, where each transition depicts a temporal relation between two poses. Recognizing actions thus involves learning a classifier sensitive to these pose transitions as well as to static poses. In this paper, we introduce a novel method called transition forests, an ensemble of decision trees that learn to discriminate both static poses and transitions between pairs of independent frames. During training, node splitting is driven by alternating two criteria: the standard classification objective that maximizes the discrimination power in individual frames, and the proposed one on pairwise frame transitions. Growing the trees tends to group frames that have similar associated transitions and share the same action label, incorporating temporal information that was not available otherwise. Unlike conventional decision trees, where the best split in a node is determined independently of other nodes, the transition forests try to find the best split of nodes jointly (within a layer) to incorporate distant node transitions. When inferring the class label of a new frame, it is passed down the trees and the prediction is made based on previous frame predictions and the current one in an efficient and online manner. We apply our method to varied skeleton action recognition and online detection datasets, showing its suitability over several baselines and state-of-the-art approaches.
labels: cs.CV
__index_level_0__: 58,406
2102.00692
Despeckling Sentinel-1 GRD images by deep learning and application to narrow river segmentation
This paper presents a despeckling method for Sentinel-1 GRD images based on the recently proposed framework "SAR2SAR": a self-supervised training strategy. Training the deep neural network on collections of Sentinel-1 GRD images leads to a despeckling algorithm that is robust to space-variant spatial correlations of speckle. Despeckled images improve the detection of structures like narrow rivers. We apply a detector based on exogenous information and a linear features detector, and show that rivers are better segmented when the processing chain is applied to images pre-processed by our despeckling neural network.
labels: cs.CV
__index_level_0__: 217,883
1302.1558
Computational Advantages of Relevance Reasoning in Bayesian Belief Networks
This paper introduces a computational framework for reasoning in Bayesian belief networks that derives significant advantages from focused inference and relevance reasoning. This framework is based on d -separation and other simple and computationally efficient techniques for pruning irrelevant parts of a network. Our main contribution is a technique that we call relevance-based decomposition. Relevance-based decomposition approaches belief updating in large networks by focusing on their parts and decomposing them into partially overlapping subnetworks. This makes reasoning in some intractable networks possible and, in addition, often results in significant speedup, as the total time taken to update all subnetworks is in practice often considerably less than the time taken to update the network as a whole. We report results of empirical tests that demonstrate practical significance of our approach.
labels: cs.AI
__index_level_0__: 21,859
2307.07518
CephGPT-4: An Interactive Multimodal Cephalometric Measurement and Diagnostic System with Visual Large Language Model
Large-scale multimodal language models (LMMs) have achieved remarkable success in general domains. However, the exploration of diagnostic language models based on multimodal cephalometric medical data remains limited. In this paper, we propose a novel multimodal cephalometric analysis and diagnostic dialogue model. Firstly, a multimodal orthodontic medical dataset is constructed, comprising cephalometric images and doctor-patient dialogue data, with automatic analysis of cephalometric landmarks using U-net and generation of diagnostic reports. Then, the cephalometric dataset and generated diagnostic reports are separately fine-tuned on Minigpt-4 and VisualGLM. Results demonstrate that the CephGPT-4 model exhibits excellent performance and has the potential to revolutionize orthodontic measurement and diagnostic applications. These innovations hold revolutionary application potential in the field of orthodontics.
labels: cs.AI, cs.CL, cs.CV
__index_level_0__: 379,441
2203.03443
Generalization Through The Lens Of Leave-One-Out Error
Despite the tremendous empirical success of deep learning models to solve various learning tasks, our theoretical understanding of their generalization ability is very limited. Classical generalization bounds based on tools such as the VC dimension or Rademacher complexity, are so far unsuitable for deep models and it is doubtful that these techniques can yield tight bounds even in the most idealistic settings (Nagarajan & Kolter, 2019). In this work, we instead revisit the concept of leave-one-out (LOO) error to measure the generalization ability of deep models in the so-called kernel regime. While popular in statistics, the LOO error has been largely overlooked in the context of deep learning. By building upon the recently established connection between neural networks and kernel learning, we leverage the closed-form expression for the leave-one-out error, giving us access to an efficient proxy for the test error. We show both theoretically and empirically that the leave-one-out error is capable of capturing various phenomena in generalization theory, such as double descent, random labels or transfer learning. Our work therefore demonstrates that the leave-one-out error provides a tractable way to estimate the generalization ability of deep neural networks in the kernel regime, opening the door to potential, new research directions in the field of generalization.
labels: cs.LG
__index_level_0__: 284,091
1811.05438
Very Hard Electoral Control Problems
It is important to understand how the outcome of an election can be modified by an agent with control over the structure of the election. Electoral control has been studied for many election systems, but for all studied systems the winner problem is in P, and so control is in NP. There are election systems, such as Kemeny, that have many desirable properties, but whose winner problems are not in NP. Thus for such systems control is not in NP, and in fact we show that it is typically complete for $\Sigma_2^p$ (i.e., ${\rm NP}^{\rm NP}$, the second level of the polynomial hierarchy). This is a very high level of complexity. Approaches that perform quite well for solving NP problems do not necessarily work for $\Sigma_2^p$-complete problems. However, answer set programming is suited to express problems in $\Sigma_2^p$, and we present an encoding for Kemeny control.
labels: cs.MA, Other
__index_level_0__: 113,319
1610.05948
A Bayesian Approach to Estimation of Speaker Normalization Parameters
In this work, a Bayesian approach to speaker normalization is proposed to compensate for the degradation in performance of a speaker-independent speech recognition system. The speaker normalization method proposed herein uses the technique of vocal tract length normalization (VTLN). The VTLN parameters are estimated using a novel Bayesian approach which utilizes the Gibbs sampler, a special type of Markov Chain Monte Carlo method. Additionally, the hyperparameters are estimated using a maximum likelihood approach. The model assumes that the human vocal tract can be modeled as a tube of uniform cross section. It captures the variation in length of the vocal tract of different speakers more effectively than the linear model used in the literature. The work also investigates different methods, such as minimization of the Mean Square Error (MSE) and the Mean Absolute Error (MAE), for the estimation of VTLN parameters. Both single-pass and two-pass approaches are then used to build a VTLN-based speech recognizer. Experimental results on recognition of vowels and Hindi phrases from a medium vocabulary indicate that the Bayesian method improves the performance by a considerable margin.
labels: cs.SD, cs.CL
__index_level_0__: 62,585
1810.03867
Functionally Modular and Interpretable Temporal Filtering for Robust Segmentation
The performance of autonomous systems heavily relies on their ability to generate a robust representation of the environment. Deep neural networks have greatly improved vision-based perception systems but still fail in challenging situations, e.g. sensor outages or heavy weather. These failures are often introduced by data-inherent perturbations, which significantly reduce the information provided to the perception system. We propose a functionally modularized temporal filter, which stabilizes an abstract feature representation of a single-frame segmentation model using information of previous time steps. Our filter module splits the filter task into multiple less complex and more interpretable subtasks. The basic structure of the filter is inspired by a Bayes estimator consisting of a prediction and an update step. To make the prediction more transparent, we implement it using a geometric projection and estimate its parameters. This additionally enables the decomposition of the filter task into static representation filtering and low-dimensional motion filtering. Our model can cope with missing frames and is trainable in an end-to-end fashion. Using photorealistic, synthetic video data, we show the ability of the proposed architecture to overcome data-inherent perturbations. The experiments especially highlight advantages introduced by an interpretable and explicit filter module.
labels: cs.LG, cs.CV
__index_level_0__: 109,900
2204.07746
Unsupervised Attention-based Sentence-Level Meta-Embeddings from Contextualised Language Models
A variety of contextualised language models have been proposed in the NLP community, which are trained on diverse corpora to produce numerous Neural Language Models (NLMs). However, different NLMs have reported different levels of performances in downstream NLP applications when used as text representations. We propose a sentence-level meta-embedding learning method that takes independently trained contextualised word embedding models and learns a sentence embedding that preserves the complementary strengths of the input source NLMs. Our proposed method is unsupervised and is not tied to a particular downstream task, which makes the learnt meta-embeddings in principle applicable to different tasks that require sentence representations. Specifically, we first project the token-level embeddings obtained by the individual NLMs and learn attention weights that indicate the contributions of source embeddings towards their token-level meta-embeddings. Next, we apply mean and max pooling to produce sentence-level meta-embeddings from token-level meta-embeddings. Experimental results on semantic textual similarity benchmarks show that our proposed unsupervised sentence-level meta-embedding method outperforms previously proposed sentence-level meta-embedding methods as well as a supervised baseline.
labels: cs.CL
__index_level_0__: 291,830
2410.01729
Evaluating Robustness of Reward Models for Mathematical Reasoning
Reward models are key in reinforcement learning from human feedback (RLHF) systems, aligning the model behavior with human preferences. Particularly in the math domain, there have been plenty of studies using reward models to align policies for improving reasoning capabilities. Recently, as the importance of reward models has been emphasized, RewardBench was proposed to understand their behavior. However, we find that the math subset of RewardBench has different representations between chosen and rejected completions, and relies on a single comparison, which may lead to unreliable results as it only sees an isolated case. Therefore, it fails to accurately present the robustness of reward models, leading to a misunderstanding of their performance and potentially resulting in reward hacking. In this work, we introduce a new design for reliable evaluation of reward models, and to validate this, we construct RewardMATH, a benchmark that effectively represents the robustness of reward models in mathematical reasoning tasks. We demonstrate that scores on RewardMATH strongly correlate with the results of the optimized policy and effectively estimate reward overoptimization, whereas the existing benchmark shows almost no correlation. The results underscore the potential of our design to enhance the reliability of evaluation and to represent the robustness of reward models. We make our code and data publicly available.
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 493,914
cs/0512062
Evolino for recurrent support vector machines
Traditional Support Vector Machines (SVMs) need pre-wired finite time windows to predict and classify time series. They do not have an internal state necessary to deal with sequences involving arbitrary long-term dependencies. Here we introduce a new class of recurrent, truly sequential SVM-like devices with internal adaptive states, trained by a novel method called EVOlution of systems with KErnel-based outputs (Evoke), an instance of the recent Evolino class of methods. Evoke evolves recurrent neural networks to detect and represent temporal dependencies while using quadratic programming/support vector regression to produce precise outputs. Evoke is the first SVM-based mechanism learning to classify a context-sensitive language. It also outperforms recent state-of-the-art gradient-based recurrent neural networks (RNNs) on various time series prediction tasks.
labels: cs.NE
__index_level_0__: 539,147
2011.07933
Score Combination for Improved Parallel Corpus Filtering for Low Resource Conditions
This paper describes our submission to the WMT20 sentence filtering task. We combine scores from (1) a custom LASER built for each source language, (2) a classifier built to distinguish positive and negative pairs by semantic alignment, and (3) the original scores included in the task devkit. For the mBART finetuning setup, provided by the organizers, our method shows 7% and 5% relative improvement over baseline, in sacreBLEU score on the test set for Pashto and Khmer respectively.
labels: cs.CL
__index_level_0__: 206,709
2410.10486
Consensus in Multiagent Systems with lack of connection
We consider multi-agent systems with cooperative interactions and study the convergence to consensus in the case of time-dependent lack of interaction. We prove a new condition ensuring consensus: we define a graph in which directed arrows correspond to connection functions that converge (in the weak sense) to some function with a positive integral on all intervals of the form $[t,+\infty)$. If the graph has a vertex reachable from all other indices, then the system converges to consensus. We show that this requirement generalizes some known sufficient conditions for convergence, such as the Persistent Excitation one. We also give a second new condition, transversal to the known ones: total connectedness of the undirected graph formed by the non-vanishing of limiting functions.
labels: cs.SY
__index_level_0__: 498,100
2406.15619
Physics Informed Machine Learning (PIML) methods for estimating the remaining useful lifetime (RUL) of aircraft engines
This paper aims to use the newly developing field of physics informed machine learning (PIML) to develop models for predicting the remaining useful lifetime (RUL) of aircraft engines. We consider the well-known benchmark NASA Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data as the main data for this paper, which consists of sensor outputs in a variety of different operating modes. C-MAPSS is a well-studied dataset with much existing work in the literature that addresses RUL prediction with classical and deep learning methods. In the absence of published empirical physical laws governing the C-MAPSS data, our approach first uses stochastic methods to estimate the governing physics models from the noisy time series data. In our approach, we model the various sensor readings as being governed by stochastic differential equations, and we estimate the corresponding transition density mean and variance functions of the underlying processes. We then augment LSTM (long short-term memory) models with the learned mean and variance functions during training and inference. Our PIML based approach is different from previous methods, and we use the data to first learn the physics. Our results indicate that PIML discovery and solution methods are well suited for this problem and outperform previous data-only deep learning methods for this dataset and task. Moreover, the framework developed herein is flexible and can be adapted to other situations (other sensor modalities or combined multi-physics environments), including cases where the underlying physics is only partially observed or known.
labels: cs.AI, cs.LG, Other
__index_level_0__: 466,794
1506.05903
Detecting Real-World Influence Through Twitter
In this paper, we investigate the issue of detecting the real-life influence of people based on their Twitter account. We propose an overview of common Twitter features used to characterize such accounts and their activity, and show that these are inefficient in this context. In particular, retweets and followers numbers, and Klout score are not relevant to our analysis. We thus propose several Machine Learning approaches based on Natural Language Processing and Social Network Analysis to label Twitter users as Influencers or not. We also rank them according to a predicted influence level. Our proposals are evaluated over the CLEF RepLab 2014 dataset, and outmatch state-of-the-art ranking methods.
labels: cs.SI, cs.AI
__index_level_0__: 44,351
1601.03785
A Method for Image Reduction Based on a Generalization of Ordered Weighted Averaging Functions
In this paper we propose a special type of aggregation function which generalizes the notion of the Ordered Weighted Averaging function (OWA). The resulting functions are called Dynamic Ordered Weighted Averaging functions (DYOWAs). This generalization is developed in such a way that the weight vectors are variables depending on the input vector. In particular, these operators generalize the aggregation functions Minimum, Maximum, Arithmetic Mean and Median, which are extensively used in image processing. In this field of research two problems are considered: the determination of methods to reduce images and the construction of techniques which provide noise reduction. The operators described here can be used in both cases. In terms of image reduction we apply the methodology provided by Patermain et al. We use the noise reduction operators obtained here to treat the images obtained in the first part of the paper, thus obtaining images with better quality.
labels: cs.AI
__index_level_0__: 50,945
2412.19425
A Self-Efficacy Theory-based Study on the Teachers Readiness to Teach Artificial Intelligence in Public Schools in Sri Lanka
This study investigates Sri Lankan ICT teachers' readiness to teach AI in schools, focusing on self-efficacy. A survey of over 1,300 teachers assessed their self-efficacy using a scale developed based on Bandura's theory. PLS-SEM analysis revealed that teachers' self-efficacy was low, primarily influenced by emotional and physiological states and imaginary experiences related to AI instruction. Mastery experiences had a lesser impact, and vicarious experiences and verbal persuasion showed no significant effect. The study highlights the need for a systemic approach to teacher professional development, considering the limitations in teachers' AI expertise and social capital. Further research is recommended to explore a socio-technical systems perspective for effective AI teacher training.
labels: cs.AI
__index_level_0__: 520,837
2207.03915
Using Quantile Forecasts for Dynamic Equivalents of Active Distribution Grids under Uncertainty
While distribution networks (DNs) turn from consumers to active and responsive intelligent DNs, the question of how to represent them in large-scale transmission network (TN) studies is still under investigation. The standard approach that uses aggregated models for the inverter-interfaced generation and conventional load models introduces significant errors to the dynamic modeling that can lead to instabilities. This paper presents a new approach based on quantile forecasting to represent the uncertainty originating in DNs at the TN level. First, we acquire the required rich dataset employing Monte Carlo simulations of a DN. Then, we use machine learning (ML) algorithms to predict not only the most probable response but also intervals of potential responses with predefined confidence. These quantile methods represent the variance in DN responses at the TN level. The results indicate excellent performance for most ML techniques. The tuned quantile equivalents predict accurate bands for the current at the TN/DN interface, and tests with unseen TN conditions indicate robustness. A final assessment that compares the Monte Carlo trajectories against the predicted intervals highlights the potential of the proposed method.
labels: cs.SY
__index_level_0__: 307,016
2306.08162
INT2.1: Towards Fine-Tunable Quantized Large Language Models with Error Correction through Low-Rank Adaptation
We introduce a method that dramatically reduces fine-tuning VRAM requirements and rectifies quantization errors in quantized Large Language Models. First, we develop an extremely memory-efficient fine-tuning (EMEF) method for quantized models using Low-Rank Adaptation (LoRA), and drawing upon it, we construct an error-correcting algorithm designed to minimize errors induced by the quantization process. Our method reduces the memory requirements by up to 5.6 times, which enables fine-tuning a 7 billion parameter Large Language Model (LLM) on consumer laptops. At the same time, we propose a Low-Rank Error Correction (LREC) method that exploits the added LoRA layers to ameliorate the gap between the quantized model and its floating-point counterpart. Our error correction framework leads to a fully functional INT2 quantized LLM with the capacity to generate coherent English text. To the best of our knowledge, this is the first INT2 Large Language Model that has been able to reach such performance. The overhead of our method is merely a 1.05 times increase in model size, which translates to an effective precision of INT2.1. Also, our method readily generalizes to other quantization standards, such as INT3, INT4, and INT8, restoring their lost performance, which marks a significant milestone in the field of model quantization. The strategies delineated in this paper hold promising implications for the future development and optimization of quantized models, marking a pivotal shift in the landscape of low-resource machine learning computations.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
373,301
1211.4133
A Logic and Adaptive Approach for Efficient Diagnosis Systems using CBR
Case Based Reasoning (CBR) is an intelligent way of thinking based on experience and capitalization of already solved cases (source cases) to find a solution to a new problem (target case). The retrieval phase consists of identifying source cases that are similar to the target case. This phase may lead to erroneous results if the existing knowledge imperfections are not taken into account. This work presents a novel solution based on Fuzzy logic techniques and adaptation measures which aggregate weighted similarities to improve the retrieval results. To confirm the efficiency of our solution, we have applied it to the industrial diagnosis domain. The obtained results are more efficient than those obtained by applying typical measures.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
19,780
1911.04972
Multi-Step Chord Sequence Prediction Based on Aggregated Multi-Scale Encoder-Decoder Network
This paper studies the prediction of chord progressions for jazz music by relying on machine learning models. The motivation of our study comes from the recent success of neural networks for performing automatic music composition. Although high accuracies are obtained in single-step prediction scenarios, most models fail to generate accurate multi-step chord predictions. In this paper, we postulate that this comes from the multi-scale structure of musical information and propose new architectures based on an iterative temporal aggregation of input labels. Specifically, the input and ground truth labels are merged into increasingly large temporal bags, on which we train a family of encoder-decoder networks for each temporal scale. In a second step, we use these pre-trained encoder bottleneck features at each scale in order to train a final encoder-decoder network. Furthermore, we rely on different reductions of the initial chord alphabet into three adapted chord alphabets. We perform evaluations against several state-of-the-art models and show that our multi-scale architecture outperforms existing methods in terms of accuracy and perplexity, while requiring relatively few parameters. We analyze musical properties of the results, showing the influence of downbeat position within the analysis window on accuracy, and evaluate errors using a musically-informed distance metric.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
153,135
1709.03297
Cellular Automaton Based Simulation of Large Pedestrian Facilities - A Case Study on the Staten Island Ferry Terminals
Current metropolises largely depend on a functioning transport infrastructure and the increasing demand can only be satisfied by a well-organized mass transit. One example for a crucial mass transit system is New York City's Staten Island Ferry, connecting the two boroughs of Staten Island and Manhattan with a regular passenger service. Today's demand already exceeds 2500 passengers for a single cycle during peak hours, and future projections suggest that it will further increase. One way to appraise how the system will cope with future demand is by simulation. This contribution proposes an integrated simulation approach to evaluate the system performance with respect to future demand. The simulation relies on a multiscale modeling approach where the terminal buildings are simulated by a microscopic and quantitatively valid cellular automata (CA) and the journeys of the ferries themselves are modeled by a mesoscopic queue simulation approach. Based on the simulation results, recommendations with respect to the future demand are given.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
80,440
2305.10929
Architecture-agnostic Iterative Black-box Certified Defense against Adversarial Patches
The adversarial patch attack aims to fool image classifiers within a bounded, contiguous region of arbitrary changes, posing a real threat to computer vision systems (e.g., autonomous driving, content moderation, biometric authentication, medical imaging) in the physical world. To address this problem in a trustworthy way, proposals have been made for certified patch defenses that ensure the robustness of classification models and prevent future patch attacks from breaching the defense. State-of-the-art certified defenses can be compatible with any model architecture, as well as achieve high clean and certified accuracy. Although the methods are adaptive to arbitrary patch positions, they inevitably need to access the size of the adversarial patch, which is unreasonable and impractical in real-world attack scenarios. To improve the feasibility of the architecture-agnostic certified defense in a black-box setting (i.e., position and size of the patch are both unknown), we propose a novel two-stage Iterative Black-box Certified Defense method, termed IBCD. In the first stage, it estimates the patch size in a search-based manner by evaluating the size relationship between the patch and mask with pixel masking. In the second stage, the accuracy results are calculated by the existing white-box certified defense methods with the estimated patch size. The experiments conducted on two popular model architectures and two datasets verify the effectiveness and efficiency of IBCD.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
365,297
2202.11485
Learning Temporal Point Processes for Efficient Retrieval of Continuous Time Event Sequences
Recent developments in predictive modeling using marked temporal point processes (MTPP) have enabled an accurate characterization of several real-world applications involving continuous-time event sequences (CTESs). However, the retrieval problem of such sequences remains largely unaddressed in literature. To tackle this, we propose NEUROSEQRET which learns to retrieve and rank a relevant set of continuous-time event sequences for a given query sequence, from a large corpus of sequences. More specifically, NEUROSEQRET first applies a trainable unwarping function on the query sequence, which makes it comparable with corpus sequences, especially when a relevant query-corpus pair has individually different attributes. Next, it feeds the unwarped query sequence and the corpus sequence into MTPP guided neural relevance models. We develop two variants of the relevance model which offer a tradeoff between accuracy and efficiency. We also propose an optimization framework to learn binary sequence embeddings from the relevance scores, suitable for the locality-sensitive hashing leading to a significant speedup in returning top-K results for a given query sequence. Our experiments with several datasets show the significant accuracy boost of NEUROSEQRET beyond several baselines, as well as the efficacy of our hashing mechanism.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
281,906
2211.10586
Scaling Up Dataset Distillation to ImageNet-1K with Constant Memory
Dataset Distillation is a newly emerging area that aims to distill large datasets into much smaller and highly informative synthetic ones to accelerate training and reduce storage. Among various dataset distillation methods, trajectory-matching-based methods (MTT) have achieved SOTA performance in many tasks, e.g., on CIFAR-10/100. However, due to exorbitant memory consumption when unrolling optimization through SGD steps, MTT fails to scale to large-scale datasets such as ImageNet-1K. Can we scale this SOTA method to ImageNet-1K and does its effectiveness on CIFAR transfer to ImageNet-1K? To answer these questions, we first propose a procedure to exactly compute the unrolled gradient with constant memory complexity, which allows us to scale MTT to ImageNet-1K seamlessly with ~6x reduction in memory footprint. We further discover that it is challenging for MTT to handle datasets with a large number of classes, and propose a novel soft label assignment that drastically improves its convergence. The resulting algorithm sets new SOTA on ImageNet-1K: we can scale up to 50 IPCs (Image Per Class) on ImageNet-1K on a single GPU (all previous methods can only scale to 2 IPCs on ImageNet-1K), leading to the best accuracy (only 5.9% accuracy drop against full dataset training) while utilizing only 4.2% of the number of data points - an 18.2% absolute gain over prior SOTA. Our code is available at https://github.com/justincui03/tesla
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
331,357
2003.10422
A Unified Theory of Decentralized SGD with Changing Topology and Local Updates
Decentralized stochastic optimization methods have gained a lot of attention recently, mainly because of their cheap per iteration cost, data locality, and their communication-efficiency. In this paper we introduce a unified convergence analysis that covers a large variety of decentralized SGD methods which so far have required different intuitions, have different applications, and which have been developed separately in various communities. Our algorithmic framework covers local SGD updates and synchronous and pairwise gossip updates on adaptive network topology. We derive universal convergence rates for smooth (convex and non-convex) problems and the rates interpolate between the heterogeneous (non-identically distributed data) and iid-data settings, recovering linear convergence rates in many special cases, for instance for over-parametrized models. Our proofs rely on weak assumptions (typically improving over prior work in several aspects) and recover (and improve) the best known complexity results for a host of important scenarios, such as for instance cooperative SGD and federated averaging (local SGD).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
169,333
2309.08880
Data-Driven H-infinity Control with a Real-Time and Efficient Reinforcement Learning Algorithm: An Application to Autonomous Mobility-on-Demand Systems
Reinforcement learning (RL) is a class of artificial intelligence algorithms being used to design adaptive optimal controllers through online learning. This paper presents a model-free, real-time, data-efficient Q-learning-based algorithm to solve the H$_{\infty}$ control of linear discrete-time systems. The computational complexity is shown to reduce from $\mathcal{O}(\underline{q}^3)$ in the literature to $\mathcal{O}(\underline{q}^2)$ in the proposed algorithm, where $\underline{q}$ is quadratic in the sum of the size of state variables, control inputs, and disturbance. An adaptive optimal controller is designed and the parameters of the action and critic networks are learned online without the knowledge of the system dynamics, making the proposed algorithm completely model-free. Also, a sufficient probing noise is only needed in the first iteration and does not affect the proposed algorithm. With no need for an initial stabilizing policy, the algorithm converges to the closed-form solution obtained by solving the Riccati equation. A simulation study is performed by applying the proposed algorithm to real-time control of an autonomous mobility-on-demand (AMoD) system for a real-world case study to evaluate the effectiveness of the proposed algorithm.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
392,362
2211.01642
Fine-Tuning Pre-Trained Language Models Effectively by Optimizing Subnetworks Adaptively
Large-scale pre-trained language models have achieved impressive results on a wide range of downstream tasks recently. However, fine-tuning an extremely large-scale pre-trained language model on limited target datasets is often plagued by overfitting and representation degradation. In this paper, we propose a Dynamic Parameter Selection (DPS) algorithm for the large-scale pre-trained models during fine-tuning, which adaptively selects a more promising subnetwork to perform staging updates based on gradients of back-propagation. Experiments on the GLUE benchmark show that DPS outperforms previous fine-tuning methods in terms of overall performance and stability, and consistently achieves better results with variable pre-trained language models. In addition, DPS brings a large magnitude of improvement in out-of-domain transferring experiments and low-resource scenarios, which shows that it can maintain stable general contextual features and reduce the representation collapse. We release our code at https://github.com/ZhangHaojie077/DPS
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
328,305
1904.12394
Stability conditions of an ODE arising in human motion and its numerical simulation
This paper discusses the stability of an equilibrium point of an ordinary differential equation (ODE) arising from a feed-forward position control for a musculoskeletal system. The studied system has a link, a joint and two muscles with routing points. The motion convergence of the system strongly depends on the muscular arrangement of the musculoskeletal system. In this paper, a sufficient condition for asymptotic stability is obtained. Furthermore, numerical simulations of the penalized ODE and experimental results are described.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
129,104
1312.2060
Blind Identification via Lifting
Blind system identification is known to be an ill-posed problem and without further assumptions, no unique solution is at hand. In this contribution, we are concerned with the task of identifying an ARX model from only output measurements. We phrase this as a constrained rank minimization problem and present a relaxed convex formulation to approximate its solution. To make the problem well posed we assume that the sought input lies in some known linear subspace.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
28,915
2212.05861
Joint Counting, Detection and Re-Identification for Multi-Object Tracking
The recent trend in 2D multiple object tracking (MOT) is jointly solving detection and tracking, where object detection and appearance feature (or motion) are learned simultaneously. Despite competitive performance, in crowded scenes, joint detection and tracking usually fail to find accurate object associations due to missed or false detections. In this paper, we jointly model counting, detection and re-identification in an end-to-end framework, named CountingMOT, tailored for crowded scenes. By imposing mutual object-count constraints between detection and counting, the CountingMOT tries to find a balance between object detection and crowd density map estimation, which can help it to recover missed detections or reject false detections. Our approach is an attempt to bridge the gap of object detection, counting, and re-identification. This is in contrast to prior MOT methods that either ignore the crowd density and thus are prone to failure in crowded scenes, or depend on local correlations to build a graphical relationship for matching targets. The proposed MOT tracker can perform online and real-time tracking, and achieves the state-of-the-art results on public benchmarks MOT16 (MOTA of 79.7%), MOT17 (MOTA of 81.3%) and MOT20 (MOTA of 78.9%).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
335,921
1805.03716
Long Short-Term Memory as a Dynamically Computed Element-wise Weighted Sum
LSTMs were introduced to combat vanishing gradients in simple RNNs by augmenting them with gated additive recurrent connections. We present an alternative view to explain the success of LSTMs: the gates themselves are versatile recurrent models that provide more representational power than previously appreciated. We do this by decoupling the LSTM's gates from the embedded simple RNN, producing a new class of RNNs where the recurrence computes an element-wise weighted sum of context-independent functions of the input. Ablations on a range of problems demonstrate that the gating mechanism alone performs as well as an LSTM in most settings, strongly suggesting that the gates are doing much more in practice than just alleviating vanishing gradients.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
97,102
1712.04138
A vision based system for underwater docking
Autonomous underwater vehicles (AUVs) have been deployed for underwater exploration. However, their potential is confined by limited on-board battery energy and data storage capacity. This problem has been addressed with docking systems that enable underwater recharging and data transfer for AUVs. In this work, we propose a vision-based framework for underwater docking following these systems. The proposed framework comprises two modules; (i) a detection module which provides location information on underwater docking stations in 2D images captured by an on-board camera, and (ii) a pose estimation module which recovers the relative 3D position and orientation between docking stations and AUVs from the 2D images. For robust and credible detection of docking stations, we propose a convolutional neural network called Docking Neural Network (DoNN). For accurate pose estimation, a perspective-n-point algorithm is integrated into our framework. In order to examine our framework in underwater docking tasks, we collected a dataset of 2D images, named Underwater Docking Images Dataset (UDID), in an experimental water pool. To the best of our knowledge, UDID is the first publicly available underwater docking dataset. In the experiments, we first evaluate performance of the proposed detection module on UDID and its deformed variations. Next, we assess the accuracy of the pose estimation module by ground experiments, since it is not feasible to obtain true relative position and orientation between docking stations and AUVs under water. Then, we examine the pose estimation module by underwater experiments in our experimental water pool. Experimental results show that the proposed framework can be used to detect docking stations and estimate their relative pose efficiently and successfully, compared to the state-of-the-art baseline systems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
86,558
2303.02534
Semi-parametric inference based on adaptively collected data
Many standard estimators, when applied to adaptively collected data, fail to be asymptotically normal, thereby complicating the construction of confidence intervals. We address this challenge in a semi-parametric context: estimating the parameter vector of a generalized linear regression model contaminated by a non-parametric nuisance component. We construct suitably weighted estimating equations that account for adaptivity in data collection, and provide conditions under which the associated estimates are asymptotically normal. Our results characterize the degree of "explorability" required for asymptotic normality to hold. For the simpler problem of estimating a linear functional, we provide similar guarantees under much weaker assumptions. We illustrate our general theory with concrete consequences for various problems, including standard linear bandits and sparse generalized bandits, and compare with other methods via simulation studies.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
349,394
2405.04837
Enhancing Data Integrity and Traceability in Industry Cyber Physical Systems (ICPS) through Blockchain Technology: A Comprehensive Approach
Blockchain technology, heralded as a transformative innovation, has far-reaching implications beyond its initial application in cryptocurrencies. This study explores the potential of blockchain in enhancing data integrity and traceability within Industry Cyber-Physical Systems (ICPS), a crucial aspect in the era of Industry 4.0. ICPS, integrating computational and physical components, is pivotal in managing critical infrastructure like manufacturing, power grids, and transportation networks. However, they face challenges in security, privacy, and reliability. With its inherent immutability, transparency, and distributed consensus, blockchain presents a groundbreaking approach to address these challenges. It ensures robust data reliability and traceability across ICPS, enhancing transaction transparency and facilitating secure data sharing. This research unearths various blockchain applications in ICPS, including supply chain management, quality control, contract management, and data sharing. Each application demonstrates blockchain's capacity to streamline processes, reduce fraud, and enhance system efficiency. In supply chain management, blockchain provides real-time auditing and compliance. For quality control, it establishes tamper-proof records, boosting consumer confidence. In contract management, smart contracts automate execution, enhancing efficiency. Blockchain also fosters secure collaboration in ICPS, which is crucial for system stability and safety. This study emphasizes the need for further research on blockchain's practical implementation in ICPS, focusing on challenges like scalability, system integration, and security vulnerabilities. It also suggests examining blockchain's economic and organizational impacts in ICPS to understand its feasibility and long-term advantages.
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
452,700
1804.08213
Some New Constructions of Quantum MDS Codes
It is an important task to construct quantum maximum-distance-separable (MDS) codes with good parameters. In the present paper, we provide six new classes of q-ary quantum MDS codes by using generalized Reed-Solomon (GRS) codes and Hermitian construction. The minimum distances of our quantum MDS codes can be larger than q/2+1. Three of these six classes of quantum MDS codes have longer lengths than the ones constructed in [1] and [2], hence some of their results can be easily derived from ours via the propagation rule. Moreover, some known quantum MDS codes of specific lengths can be seen as special cases of ours and the minimum distances of some known quantum MDS codes are also improved as well.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
95,715
1808.03887
Semi-supervised Skin Lesion Segmentation via Transformation Consistent Self-ensembling Model
Automatic skin lesion segmentation on dermoscopic images is an essential component in computer-aided diagnosis of melanoma. Recently, many fully supervised deep learning based methods have been proposed for automatic skin lesion segmentation. However, these approaches require massive pixel-wise annotation from experienced dermatologists, which is very costly and time-consuming. In this paper, we present a novel semi-supervised method for skin lesion segmentation that leverages both labeled and unlabeled data. The network is optimized by the weighted combination of a common supervised loss for labeled inputs only and a regularization loss for both labeled and unlabeled data. To utilize the unlabeled data, our method encourages consistent predictions of the network-in-training for the same input under different regularizations. Aiming at the semi-supervised segmentation problem, we enhance the effect of regularization for pixel-level predictions by introducing a transformation-consistent scheme, including rotation and flipping, in our self-ensembling model. With only 300 labeled training samples, our method sets a new record on the benchmark of the International Skin Imaging Collaboration (ISIC) 2017 skin lesion segmentation challenge. Such a result clearly surpasses fully-supervised state-of-the-art methods that are trained with 2000 labeled data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
105,025
2302.00515
Decentralized Search and Track with Multiple Autonomous Agents
In this paper we study the problem of cooperative searching and tracking (SAT) of multiple moving targets with a group of autonomous mobile agents that exhibit limited sensing capabilities. We assume that the actual number of targets is not known a priori and that target births/deaths can occur anywhere inside the surveillance region. For this reason efficient search strategies are required to detect and track as many targets as possible. To address the aforementioned challenges we augment the classical Probability Hypothesis Density (PHD) filter with the ability to propagate in time the search density in addition to the target density. Based on this, we develop decentralized cooperative look-ahead strategies for efficient searching and tracking of an unknown number of targets inside a bounded surveillance area. The performance of the proposed approach is demonstrated through simulation experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
343,241
2010.12408
On the Equivalence of Decoupled Graph Convolution Network and Label Propagation
The original design of Graph Convolution Network (GCN) couples feature transformation and neighborhood aggregation for node representation learning. Recently, some work shows that coupling is inferior to decoupling, which supports deep graph propagation better and has become the latest paradigm of GCN (e.g., APPNP and SGCN). Despite effectiveness, the working mechanisms of the decoupled GCN are not well understood. In this paper, we explore the decoupled GCN for semi-supervised node classification from a novel and fundamental perspective -- label propagation. We conduct thorough theoretical analyses, proving that the decoupled GCN is essentially the same as the two-step label propagation: first, propagating the known labels along the graph to generate pseudo-labels for the unlabeled nodes, and second, training normal neural network classifiers on the augmented pseudo-labeled data. More interestingly, we reveal the effectiveness of decoupled GCN: going beyond the conventional label propagation, it could automatically assign structure- and model- aware weights to the pseudo-label data. This explains why the decoupled GCN is relatively robust to the structure noise and over-smoothing, but sensitive to the label noise and model initialization. Based on this insight, we propose a new label propagation method named Propagation then Training Adaptively (PTA), which overcomes the flaws of the decoupled GCN with a dynamic and adaptive weighting strategy. Our PTA is simple yet more effective and robust than decoupled GCN. We empirically validate our findings on four benchmark datasets, demonstrating the advantages of our method. The code is available at https://github.com/DongHande/PT_propagation_then_training.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
202,683
2107.02457
Comparing PCG metrics with Human Evaluation in Minecraft Settlement Generation
There are a range of metrics that can be applied to the artifacts produced by procedural content generation, and several of them come with qualitative claims. In this paper, we adapt a range of existing PCG metrics to generated Minecraft settlements, develop a few new metrics inspired by PCG literature, and compare the resulting measurements to existing human evaluations. The aim is to analyze how those metrics capture human evaluation scores in different categories, how the metrics generalize to another game domain, and how metrics deal with more complex artifacts. We provide an exploratory look at a variety of metrics and provide an information gain and several correlation analyses. We found some relationships between human scores and metrics counting specific elements, measuring the diversity of blocks and measuring the presence of crafting materials for the present complex blocks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
244,831
2302.11730
Detachedly Learn a Classifier for Class-Incremental Learning
In continual learning, a model needs to continually learn a feature extractor and classifier on a sequence of tasks. This paper focuses on how to learn a classifier based on a pretrained feature extractor under the continual learning setting. We present a probabilistic analysis showing that the failure of vanilla experience replay (ER) comes from unnecessary re-learning of previous tasks and an inability to distinguish the current task from the previous ones, which is the cause of knowledge degradation and prediction bias. To overcome these weaknesses, we propose a novel replay strategy, task-aware experience replay. It rebalances the replay loss and detaches the classifier weights for the old tasks from the update process, by which the previous knowledge is kept intact and the overfitting on episodic memory is alleviated. Experimental results show our method outperforms current state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
347,296
2312.03628
Boosting Segment Anything Model Towards Open-Vocabulary Learning
The recent Segment Anything Model (SAM) has emerged as a new paradigmatic vision foundation model, showcasing potent zero-shot generalization and flexible prompting. Despite SAM finding applications and adaptations in various domains, its primary limitation lies in the inability to grasp object semantics. In this paper, we present Sambor to seamlessly integrate SAM with the open-vocabulary object detector in an end-to-end framework. While retaining all the remarkable capabilities inherent to SAM, we boost it to detect arbitrary objects from human inputs like category names or reference expressions. Building upon the SAM image encoder, we introduce a novel SideFormer module designed to acquire SAM features adept at perceiving objects and inject comprehensive semantic information for recognition. In addition, we devise an Open-set RPN that leverages SAM proposals to assist in finding potential objects. Consequently, Sambor enables the open-vocabulary detector to equally focus on generalizing both localization and classification sub-tasks. Our approach demonstrates superior zero-shot performance across benchmarks, including COCO and LVIS, proving highly competitive against previous state-of-the-art methods. We aspire for this work to serve as a meaningful endeavor in endowing SAM to recognize diverse object categories and advancing open-vocabulary learning with the support of vision foundation models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
413,330
1811.02666
Topological Semantics for Lumped Parameter Systems Modeling
Behaviors of many engineering systems are described by lumped parameter models that encapsulate the spatially distributed nature of the system into networks of lumped elements; the dynamics of such a network is governed by a system of ordinary differential and algebraic equations. Languages and simulation tools for modeling such systems differ in syntax, informal semantics, and in the methods by which such systems of equations are generated and simulated, leading to numerous interoperability challenges. Logical extensions of SysML aim specially at unifying a subset of the underlying concepts in such languages. We propose to unify semantics of all such systems using standard notions from algebraic topology. In particular, Tonti diagrams classify all physical theories in terms of physical laws (topological and constitutive) defined over a pair of dual cochain complexes and may be used to describe different types of lumped parameter systems. We show that all possible methods for generating the corresponding state equations within each physical domain correspond to paths over Tonti diagrams. We further propose a generalization of Tonti diagram that captures the behavior and supports canonical generation of state equations for multi-domain lumped parameter systems. The unified semantics provides a basis for greater interoperability in systems modeling, supporting automated translation, integration, reuse, and numerical simulation of models created in different authoring systems and applications. Notably, the proposed algebraic topological semantics is also compatible with spatially and temporally distributed models that are at the core of modern CAD and CAE systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
112,654
2206.04846
Masked Autoencoders are Robust Data Augmentors
Deep neural networks are capable of learning powerful representations to tackle complex vision tasks but expose undesirable properties like the over-fitting issue. To this end, regularization techniques like image augmentation are necessary for deep neural networks to generalize well. Nevertheless, most prevalent image augmentation recipes confine themselves to off-the-shelf linear transformations like scale, flip, and colorjitter. Due to their hand-crafted property, these augmentations are insufficient to generate truly hard augmented examples. In this paper, we propose a novel perspective of augmentation to regularize the training process. Inspired by the recent success of applying masked image modeling to self-supervised learning, we adopt the self-supervised masked autoencoder to generate the distorted view of the input images. We show that utilizing such model-based nonlinear transformation as data augmentation can improve high-level recognition tasks. We term the proposed method as \textbf{M}ask-\textbf{R}econstruct \textbf{A}ugmentation (MRA). The extensive experiments on various image classification benchmarks verify the effectiveness of the proposed augmentation. Specifically, MRA consistently enhances the performance on supervised, semi-supervised as well as few-shot classification. The code will be available at \url{https://github.com/haohang96/MRA}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
301,794
2405.04515
A Transformer with Stack Attention
Natural languages are believed to be (mildly) context-sensitive. Despite underpinning remarkably capable large language models, transformers are unable to model many context-free language tasks. In an attempt to address this limitation in the modeling power of transformer-based language models, we propose augmenting them with a differentiable, stack-based attention mechanism. Our stack-based attention mechanism can be incorporated into any transformer-based language model and adds a level of interpretability to the model. We show that the addition of our stack-based attention mechanism enables the transformer to model some, but not all, deterministic context-free languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
452,590
1802.06831
Comments on: "Lyapunov matrices for a class of time delay systems" by V. L. Kharitonov
We prove that an auxiliary two-point boundary value problem presented in V. L. Kharitonov, Lyapunov matrices for a class of time delay systems, Systems & Control Letters 55 (2006) 610-617 has linearly dependent boundary conditions, and consequently a unique solution does not exist. Therefore, the two-point boundary value problem presented therein fails to be a basis for constructing Lyapunov matrices for the class of time delay systems investigated.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
90,754
2406.16137
MLPHand: Real Time Multi-View 3D Hand Mesh Reconstruction via MLP Modeling
Multi-view hand mesh reconstruction is a critical task for applications in virtual reality and human-computer interaction, but it remains a formidable challenge. Although existing multi-view hand reconstruction methods achieve remarkable accuracy, they typically come with an intensive computational burden that hinders real-time inference. To this end, we propose MLPHand, a novel method designed for real-time multi-view single hand reconstruction. MLP Hand consists of two primary modules: (1) a lightweight MLP-based Skeleton2Mesh model that efficiently recovers hand meshes from hand skeletons, and (2) a multi-view geometry feature fusion prediction module that enhances the Skeleton2Mesh model with detailed geometric information from multiple views. Experiments on three widely used datasets demonstrate that MLPHand can reduce computational complexity by 90% while achieving comparable reconstruction accuracy to existing state-of-the-art baselines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
467,011
2106.16166
A Critical Analysis of Recursive Model Indexes
The recursive model index (RMI) has recently been introduced as a machine-learned replacement for traditional indexes over sorted data, achieving remarkably fast lookups. Follow-up work focused on explaining RMI's performance and automatically configuring RMIs through enumeration. Unfortunately, configuring RMIs involves setting several hyperparameters, the enumeration of which is often too time-consuming in practice. Therefore, in this work, we conduct the first inventor-independent broad analysis of RMIs with the goal of understanding the impact of each hyperparameter on performance. In particular, we show that in addition to model types and layer size, error bounds and search algorithms must be considered to achieve the best possible performance. Based on our findings, we develop a simple-to-follow guideline for configuring RMIs. We evaluate our guideline by comparing the resulting RMIs with a number of state-of-the-art indexes, both learned and traditional. We show that our simple guideline is sufficient to achieve competitive performance with other learned indexes and RMIs whose configuration was determined using an expensive enumeration procedure. In addition, while carefully reimplementing RMIs, we are able to improve the build time by 2.5x to 6.3x.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
243,991
1806.00499
Backpropagation for Implicit Spectral Densities
Most successful machine intelligence systems rely on gradient-based learning, which is made possible by backpropagation. Some systems are designed to aid us in interpreting data when explicit goals cannot be provided. These unsupervised systems are commonly trained by backpropagating through a likelihood function. We introduce a tool that allows us to do this even when the likelihood is not explicitly set, by instead using the implicit likelihood of the model. Explicitly defining the likelihood often entails making heavy-handed assumptions that impede our ability to solve challenging tasks. On the other hand, the implicit likelihood of the model is accessible without the need for such assumptions. Our tool, which we call spectral backpropagation, allows us to optimize it in much greater generality than what has been attempted before. GANs can also be viewed as a technique for optimizing implicit likelihoods. We study them using spectral backpropagation in order to demonstrate robustness for high-dimensional problems, and identify two novel properties of the generator G: (1) there exist aberrant, nonsensical outputs to which G assigns very high likelihood, and (2) the eigenvectors of the metric induced by G over latent space correspond to quasi-disentangled explanatory factors.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
99,317
2301.12257
Few-shot Face Image Translation via GAN Prior Distillation
Face image translation has made notable progress in recent years. However, when training on limited data, the performance of existing approaches significantly declines. Although some studies have attempted to tackle this problem, they either failed to achieve the few-shot setting (less than 10) or can only get suboptimal results. In this paper, we propose GAN Prior Distillation (GPD) to enable effective few-shot face image translation. GPD contains two models: a teacher network with GAN Prior and a student network that fulfills end-to-end translation. Specifically, we adapt the teacher network trained on large-scale data in the source domain to the target domain with only a few samples, where it can learn the target domain's knowledge. Then, we can achieve few-shot augmentation by generating source domain and target domain images simultaneously with the same latent codes. We propose an anchor-based knowledge distillation module that can fully use the difference between the training and the augmented data to distill the knowledge of the teacher network into the student network. The trained student network achieves excellent generalization performance with the absorption of additional knowledge. Qualitative and quantitative experiments demonstrate that our method achieves superior results than state-of-the-art approaches in a few-shot setting.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
342,463
1907.07427
Energy-Efficient Power Control of Train-ground mmWave Communication for High Speed Trains
The high speed train system has proven to be a very flexible and attractive system that can be developed under various circumstances and in different contexts and cultures. As a result, high speed trains are widely deployed around the world. Providing more reliable and higher data rate communication services for high speed trains has become one of the most urgent challenges. With vast amounts of spectrum available, the millimeter wave (mmWave) system is able to provide transmission rates of several gigabits per second for high speed trains. At the same time, mmWave communication also suffers from high attenuation, so higher energy efficiency should be considered. This paper proposes an energy efficient power control scheme of train-ground mmWave communication for high speed trains. Considering a beam switching method for efficient beam alignment, we first establish a position prediction model, a realistic directional antenna model, and a receive power model. We then allocate the transmission power rationally through the power minimization algorithm while ensuring the total amount of transmission data. Based on this, this paper also develops a hybrid optimization scheme and finds the limit of total energy consumption when the number of segments goes to infinity. Through simulation with various system parameters and taking velocity estimation error into account, we demonstrate the superior performance of our schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
138,875
2010.08655
Adaptive Dense-to-Sparse Paradigm for Pruning Online Recommendation System with Non-Stationary Data
Large scale deep learning provides a tremendous opportunity to improve the quality of content recommendation systems by employing both wider and deeper models, but this comes at great infrastructural cost and carbon footprint in modern data centers. Pruning is an effective technique that reduces both memory and compute demand for model inference. However, pruning for online recommendation systems is challenging due to the continuous data distribution shift (a.k.a non-stationary data). Although incremental training on the full model is able to adapt to the non-stationary data, directly applying it on the pruned model leads to accuracy loss. This is because the sparsity pattern after pruning requires adjustment to learn new patterns. To the best of our knowledge, this is the first work to provide in-depth analysis and discussion of applying pruning to online recommendation systems with non-stationary data distribution. Overall, this work makes the following contributions: 1) We present an adaptive dense to sparse paradigm equipped with a novel pruning algorithm for pruning a large scale recommendation system with non-stationary data distribution; 2) We design the pruning algorithm to automatically learn the sparsity across layers to avoid repeating hand-tuning, which is critical for pruning the heterogeneous architectures of recommendation systems trained with non-stationary data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
201,247
2405.15172
Learning the Distribution Map in Reverse Causal Performative Prediction
In numerous predictive scenarios, the predictive model affects the sampling distribution; for example, job applicants often meticulously craft their resumes to navigate through a screening systems. Such shifts in distribution are particularly prevalent in the realm of social computing, yet, the strategies to learn these shifts from data remain remarkably limited. Inspired by a microeconomic model that adeptly characterizes agents' behavior within labor markets, we introduce a novel approach to learn the distribution shift. Our method is predicated on a reverse causal model, wherein the predictive model instigates a distribution shift exclusively through a finite set of agents' actions. Within this framework, we employ a microfoundation model for the agents' actions and develop a statistically justified methodology to learn the distribution shift map, which we demonstrate to be effective in minimizing the performative prediction risk.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
456,785
1910.02602
Human Action Sequence Classification
This paper classifies human action sequences from videos using a machine translation model. In contrast to classical human action classification, which outputs a set of actions, our method outputs a sequence of actions in the chronological order of the actions performed by the human. Therefore our method is evaluated using sequential performance measures such as Bilingual Evaluation Understudy (BLEU) scores. Action sequence classification has many applications such as learning from demonstration, action segmentation, detection, localization and video captioning. Furthermore, we use our model that is trained to output action sequences to solve downstream tasks, such as video captioning and action localization. We obtain state of the art results for video captioning on the challenging Charades dataset, obtaining a BLEU-4 score of 34.8 and a METEOR score of 33.6, outperforming the previous state-of-the-art of 18.8 and 19.5 respectively. Similarly, on ActivityNet captioning, we obtain excellent results in terms of ROUGE (20.24) and CIDER (37.58) scores. For action localization, without using any explicit start/end action annotations, our method obtains a localization performance of 22.2 mAP, outperforming prior fully supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
148,294
1904.05530
Recurrent Event Network: Autoregressive Structure Inference over Temporal Knowledge Graphs
Knowledge graph reasoning is a critical task in natural language processing. The task becomes more challenging on temporal knowledge graphs, where each fact is associated with a timestamp. Most existing methods focus on reasoning at past timestamps and they are not able to predict facts happening in the future. This paper proposes Recurrent Event Network (RE-NET), a novel autoregressive architecture for predicting future interactions. The occurrence of a fact (event) is modeled as a probability distribution conditioned on temporal sequences of past knowledge graphs. Specifically, our RE-NET employs a recurrent event encoder to encode past facts and uses a neighborhood aggregator to model the connection of facts at the same timestamp. Future facts can then be inferred in a sequential manner based on the two modules. We evaluate our proposed method via link prediction at future times on five public datasets. Through extensive experiments, we demonstrate the strength of RE-NET, especially on multi-step inference over future timestamps, and achieve state-of-the-art performance on all five datasets. Code and data can be found at https://github.com/INK-USC/RE-Net.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
127,343
2202.01102
System Identification with Variance Minimization via Input Design
The subspace method is one of the mainstream system identification methods for linear systems, and its basic idea is to estimate the system parameter matrices by projecting them into a subspace related to the input and output. However, most of the existing subspace methods cannot guarantee statistical performance, since the estimation lacks a closed-form expression. Meanwhile, traditional subspace methods cannot deal with the uncertainty of the noise, and thus stable identification results cannot be obtained. In this paper, we propose a novel improved subspace method from the perspective of input design, which guarantees consistent and stable identification results with the minimum variance. Specifically, we first obtain a closed-form estimation of the system matrix, then analyze the statistical performance by deriving the maximum identification deviation. This identification deviation maximization problem is non-convex, and is solved by splitting it into two sub-problems with the optimal solution guaranteed. Next, an input design method is proposed to deal with the uncertainty and obtain stable identification results by minimizing the variance. This problem is formulated as a constrained min-max optimization problem. The optimal solution is obtained by transforming the cost function into a convex function while ensuring the safety constraints through the method of predictive control. We prove the consistency and the convergence of the proposed method. Simulation demonstrates the effectiveness of our method.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
278,367
2305.16444
Don't Retrain, Just Rewrite: Countering Adversarial Perturbations by Rewriting Text
Can language models transform inputs to protect text classifiers against adversarial attacks? In this work, we present ATINTER, a model that intercepts and learns to rewrite adversarial inputs to make them non-adversarial for a downstream text classifier. Our experiments on four datasets and five attack mechanisms reveal that ATINTER is effective at providing better adversarial robustness than existing defense approaches, without compromising task accuracy. For example, on sentiment classification using the SST-2 dataset, our method improves the adversarial accuracy over the best existing defense approach by more than 4% with a smaller decrease in task accuracy (0.5% vs 2.5%). Moreover, we show that ATINTER generalizes across multiple downstream tasks and classifiers without having to explicitly retrain it for those settings. Specifically, we find that when ATINTER is trained to remove adversarial perturbations for the sentiment classification task on the SST-2 dataset, it even transfers to a semantically different task of news classification (on AGNews) and improves the adversarial robustness by more than 10%.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
368,091
2006.05712
Listen to What You Want: Neural Network-based Universal Sound Selector
Being able to control the acoustic events (AEs) to which we want to listen would allow the development of more controllable hearable devices. This paper addresses the AE sound selection (or removal) problems, which we define as the extraction (or suppression) of all the sounds that belong to one or multiple desired AE classes. Although this problem could be addressed with a combination of source separation followed by AE classification, this is a sub-optimal way of solving the problem. Moreover, source separation usually requires knowing the maximum number of sources, which may not be practical when dealing with AEs. In this paper, we propose instead a universal sound selection neural network that enables direct selection of AE sounds from a mixture given user-specified target AE classes. The proposed framework can be explicitly optimized to simultaneously select sounds from multiple desired AE classes, independently of the number of sources in the mixture. We experimentally show that the proposed method achieves promising AE sound selection performance and can be generalized to mixtures with a number of sources that is unseen during training.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
181,177
1710.00625
Equalization Methods for NLIN Mitigation
We investigate the potential of adaptive equalization techniques to mitigate inter-channel nonlinear interference noise (NLIN). We derive a lower bound on the mutual information of a system using adaptive equalization, showing that the channel estimation error determines the equalizer's performance. We develop an adaptive equalization scheme which uses the statistics of the NLIN to obtain optimal detection, based on Kalman filtering and maximum likelihood sequence estimation (MLSE). This scheme outperforms commonly used equalizers and significantly increases performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
81,896
2005.12503
BEEP! Korean Corpus of Online News Comments for Toxic Speech Detection
Toxic comments in online platforms are an unavoidable social issue under the cloak of anonymity. Hate speech detection has been actively done for languages such as English, German, or Italian, where manually labeled corpus has been released. In this work, we first present 9.4K manually labeled entertainment news comments for identifying Korean toxic speech, collected from a widely used online news platform in Korea. The comments are annotated regarding social bias and hate speech since both aspects are correlated. The inter-annotator agreement Krippendorff's alpha score is 0.492 and 0.496, respectively. We provide benchmarks using CharCNN, BiLSTM, and BERT, where BERT achieves the highest score on all tasks. The models generally display better performance on bias identification, since the hate speech detection is a more subjective issue. Additionally, when BERT is trained with bias label for hate speech detection, the prediction score increases, implying that bias and hate are intertwined. We make our dataset publicly available and open competitions with the corpus and benchmarks.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
178,748
2307.06948
Self-regulating Prompts: Foundational Model Adaptation without Forgetting
Prompt learning has emerged as an efficient alternative for fine-tuning foundational models, such as CLIP, for various downstream tasks. Conventionally trained using the task-specific objective, i.e., cross-entropy loss, prompts tend to overfit downstream data distributions and find it challenging to capture task-agnostic general features from the frozen CLIP. This leads to the loss of the model's original generalization capability. To address this issue, our work introduces a self-regularization framework for prompting called PromptSRC (Prompting with Self-regulating Constraints). PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations using a three-pronged approach by: (a) regulating prompted representations via mutual agreement maximization with the frozen model, (b) regulating with self-ensemble of prompts over the training trajectory to encode their complementary strengths, and (c) regulating with textual diversity to mitigate sample diversity imbalance with the visual branch. To the best of our knowledge, this is the first regularization framework for prompt learning that avoids overfitting by jointly attending to pre-trained model features, the training trajectory during prompting, and the textual diversity. PromptSRC explicitly steers the prompts to learn a representation space that maximizes performance on downstream tasks without compromising CLIP generalization. We perform extensive experiments on 4 benchmarks where PromptSRC overall performs favorably well compared to the existing methods. Our code and pre-trained models are publicly available at: https://github.com/muzairkhattak/PromptSRC.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
379,228
2411.19547
Training Agents with Weakly Supervised Feedback from Large Language Models
Large Language Models (LLMs) offer a promising basis for creating agents that can tackle complex tasks through iterative environmental interaction. Existing methods either require these agents to mimic expert-provided trajectories or rely on definitive environmental feedback for reinforcement learning, which limits their application to specific scenarios like gaming or code generation. This paper introduces a novel training method for LLM-based agents using weakly supervised signals from a critic LLM, bypassing the need for expert trajectories or definitive feedback. Our agents are trained in an iterative manner, where they initially generate trajectories through environmental interaction. Subsequently, a critic LLM selects a subset of good trajectories, which are then used to update the agents, enabling them to generate improved trajectories in the next iteration. Extensive tests on the API-bank dataset show consistent improvement in our agents' capabilities and comparable performance to GPT-4, despite using open-source models with much fewer parameters.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
512,294
1206.6484
Apprenticeship Learning for Model Parameters of Partially Observable Environments
We consider apprenticeship learning, i.e., having an agent learn a task by observing an expert demonstrating the task in a partially observable environment when the model of the environment is uncertain. This setting is useful in applications where the explicit modeling of the environment is difficult, such as a dialogue system. We show that we can extract information about the environment model by inferring the action selection process behind the demonstration, under the assumption that the expert is choosing optimal actions based on knowledge of the true model of the target environment. The proposed algorithms can achieve more accurate estimates of POMDP parameters and better policies from a short demonstration, compared to methods that learn only from the reactions of the environment.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
17,019
2404.06591
Milgram's experiment in the knowledge space: Individual navigation strategies
The data deluge characteristic of our times has led to information overload, posing a significant challenge to effectively finding our way through the digital landscape. Addressing this issue requires an in-depth understanding of how we navigate through the abundance of information. Previous research has discovered multiple patterns in how individuals navigate in the geographic, social, and information spaces, yet individual differences in strategies for navigation in the knowledge space have remained largely unexplored. To bridge the gap, we conducted an online experiment where participants played a navigation game on Wikipedia and completed questionnaires about their personal information. Utilizing a graph embedding trained on the English Wikipedia, our study identified distinctive strategies that participants adopt: when the target is a famous person, participants typically use the geographical and occupational information of the target to navigate, reminiscent of hub-driven and proximity-driven approaches, respectively. We discovered that many participants playing the same game exhibit a "wisdom of the crowd" effect: the set of strategies provides a good estimate of the information landscape around the target, indicating that the individual differences complement each other.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
445,518
2103.12551
Deep Learning for Exotic Option Valuation
A common approach to valuing exotic options involves choosing a model and then determining its parameters to fit the volatility surface as closely as possible. We refer to this as the model calibration approach (MCA). A disadvantage of MCA is that some information in the volatility surface is lost during the calibration process and the prices of exotic options will not in general be consistent with those of plain vanilla options. We consider an alternative approach where the structure of the user's preferred model is preserved but points on the volatility are features input to a neural network. We refer to this as the volatility feature approach (VFA) model. We conduct experiments showing that VFA can be expected to outperform MCA for the volatility surfaces encountered in practice. Once the upfront computational time has been invested in developing the neural network, the valuation of exotic options using VFA is very fast.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
226,217
1809.08700
Envy-Free Classification
In classic fair division problems such as cake cutting and rent division, envy-freeness requires that each individual (weakly) prefer his allocation to anyone else's. On a conceptual level, we argue that envy-freeness also provides a compelling notion of fairness for classification tasks. Our technical focus is the generalizability of envy-free classification, i.e., understanding whether a classifier that is envy free on a sample would be almost envy free with respect to the underlying distribution with high probability. Our main result establishes that a small sample is sufficient to achieve such guarantees, when the classifier in question is a mixture of deterministic classifiers that belong to a family of low Natarajan dimension.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
108,561
2101.06849
CFC-Net: A Critical Feature Capturing Network for Arbitrary-Oriented Object Detection in Remote Sensing Images
Object detection in optical remote sensing images is an important and challenging task. In recent years, the methods based on convolutional neural networks have made good progress. However, due to the large variation in object scale, aspect ratio, and arbitrary orientation, the detection performance is difficult to be further improved. In this paper, we discuss the role of discriminative features in object detection, and then propose a Critical Feature Capturing Network (CFC-Net) to improve detection accuracy from three aspects: building powerful feature representation, refining preset anchors, and optimizing label assignment. Specifically, we first decouple the classification and regression features, and then construct robust critical features adapted to the respective tasks through the Polarization Attention Module (PAM). With the extracted discriminative regression features, the Rotation Anchor Refinement Module (R-ARM) performs localization refinement on preset horizontal anchors to obtain superior rotation anchors. Next, the Dynamic Anchor Learning (DAL) strategy is given to adaptively select high-quality anchors based on their ability to capture critical features. The proposed framework creates more powerful semantic representations for objects in remote sensing images and achieves high-performance real-time object detection. Experimental results on three remote sensing datasets including HRSC2016, DOTA, and UCAS-AOD show that our method achieves superior detection performance compared with many state-of-the-art approaches. Code and models are available at https://github.com/ming71/CFC-Net.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
215,857
1512.02573
Hunting for Spammers: Detecting Evolved Spammers on Twitter
Once an email problem, spam has nowadays branched into new territories with disruptive effects. In particular, spam has established itself over the recent years as a ubiquitous, annoying, and sometimes threatening aspect of online social networks. Due to its prevalent existence, many works have tackled spam on Twitter from different angles. Spam is, however, a moving target. The new generation of spammers on Twitter has evolved into online creatures that are not easily recognizable by old detection systems. With the strong tangled spamming community, automatic tweeting scripts, and the ability to massively create Twitter accounts with a negligible cost, spam on Twitter is becoming smarter, fuzzier and harder to detect. Our own analysis of spam content on Arabic trending hashtags in Saudi Arabia results in an estimate of about three quarters of the total generated content. This alarming rate makes the development of adaptive spam detection techniques a very real and pressing need. In this paper, we analyze the spam content of trending hashtags on Saudi Twitter, and assess the performance of previous spam detection systems on our recently gathered dataset. Due to the escalating manipulation that characterizes newer spamming accounts, simple manual labeling currently leads to inaccurate results. In order to get reliable ground-truth data, we propose an updated manual classification algorithm that avoids the deficiencies of older manual approaches. We also adapt the previously proposed features to respond to spammers evading techniques, and use these features to build a new data-driven detection system.
false
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
49,952
2311.07798
Probabilistic Physics-integrated Neural Differentiable Modeling for Isothermal Chemical Vapor Infiltration Process
Chemical vapor infiltration (CVI) is a widely adopted manufacturing technique used in producing carbon-carbon and carbon-silicon carbide composites. These materials are especially valued in the aerospace and automotive industries for their robust strength and lightweight characteristics. The densification process during CVI critically influences the final performance, quality, and consistency of these composite materials. Experimentally optimizing the CVI processes is challenging due to long experimental time and large optimization space. To address these challenges, this work takes a modeling-centric approach. Due to the complexities and limited experimental data of the isothermal CVI densification process, we have developed a data-driven predictive model using the physics-integrated neural differentiable (PiNDiff) modeling framework. An uncertainty quantification feature has been embedded within the PiNDiff method, bolstering the model's reliability and robustness. Through comprehensive numerical experiments involving both synthetic and real-world manufacturing data, the proposed method showcases its capability in modeling densification during the CVI process. This research highlights the potential of the PiNDiff framework as an instrumental tool for advancing our understanding, simulation, and optimization of the CVI manufacturing process, particularly when faced with sparse data and an incomplete description of the underlying physics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
407,474
2406.18941
CLIP3D-AD: Extending CLIP for 3D Few-Shot Anomaly Detection with Multi-View Images Generation
Few-shot anomaly detection methods can effectively address data collecting difficulty in industrial scenarios. Compared to 2D few-shot anomaly detection (2D-FSAD), 3D few-shot anomaly detection (3D-FSAD) is still an unexplored but essential task. In this paper, we propose CLIP3D-AD, an efficient 3D-FSAD method extended on CLIP. We successfully transfer strong generalization ability of CLIP into 3D-FSAD. Specifically, we synthesize anomalous images on given normal images as sample pairs to adapt CLIP for 3D anomaly classification and segmentation. For classification, we introduce an image adapter and a text adapter to fine-tune global visual features and text features. Meanwhile, we propose a coarse-to-fine decoder to fuse and facilitate intermediate multi-layer visual representations of CLIP. To benefit from geometry information of point cloud and eliminate modality and data discrepancy when processed by CLIP, we project and render point cloud to multi-view normal and anomalous images. Then we design multi-view fusion module to fuse features of multi-view images extracted by CLIP which are used to facilitate visual representations for further enhancing vision-language correlation. Extensive experiments demonstrate that our method has a competitive performance of 3D few-shot anomaly classification and segmentation on MVTec-3D AD dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
468,246
1910.02546
A theorem of Kalman and minimal state-space realization of Vector Autoregressive Models
We introduce a concept of $autoregressive$ (AR) state-space realization that could be applied to all transfer functions $\boldsymbol{T}(L)$ with $\boldsymbol{T}(0)$ invertible. We show that a theorem of Kalman implies each Vector Autoregressive model (with exogenous variables) has a minimal $AR$-state-space realization of form $\boldsymbol{y}_t = \sum_{i=1}^p\boldsymbol{H}\boldsymbol{F}^{i-1}\boldsymbol{G}\boldsymbol{x}_{t-i}+\boldsymbol{\epsilon}_t$ where $\boldsymbol{F}$ is a nilpotent Jordan matrix and $\boldsymbol{H}, \boldsymbol{G}$ satisfy certain rank conditions. The case $VARX(1)$ corresponds to reduced-rank regression. Similar to that case, for a fixed Jordan form $\boldsymbol{F}$, $\boldsymbol{H}$ could be estimated by least square as a function of $\boldsymbol{G}$. The likelihood function is a determinant ratio generalizing the Rayleigh quotient. It is unchanged if $\boldsymbol{G}$ is replaced by $\boldsymbol{S}\boldsymbol{G}$ for an invertible matrix $\boldsymbol{S}$ commuting with $\boldsymbol{F}$. Using this invariant property, the search space for maximum likelihood estimate could be constrained to equivalent classes of matrices satisfying a number of orthogonal relations, extending the results in reduced-rank analysis. Our results could be considered a multi-lag canonical-correlation-analysis. The method considered here provides a solution in the general case to the polynomial product regression model of Velu et al. We provide estimation examples. We also explore how the estimates vary with different Jordan matrix configurations and discuss methods to select a configuration. Our approach could provide an important dimensional reduction technique with potential applications in time series analysis and linear system identification. In the appendix, we link the reduced configuration space of $\boldsymbol{G}$ with a geometric object called a vector bundle.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
148,274
2211.01292
Learning an Artificial Language for Knowledge-Sharing in Multilingual Translation
The cornerstone of multilingual neural translation is shared representations across languages. Given the theoretically infinite representation power of neural networks, semantically identical sentences are likely represented differently. While representing sentences in the continuous latent space ensures expressiveness, it introduces the risk of capturing of irrelevant features which hinders the learning of a common representation. In this work, we discretize the encoder output latent space of multilingual models by assigning encoder states to entries in a codebook, which in effect represents source sentences in a new artificial language. This discretization process not only offers a new way to interpret the otherwise black-box model representations, but, more importantly, gives potential for increasing robustness in unseen testing conditions. We validate our approach on large-scale experiments with realistic data volumes and domains. When tested in zero-shot conditions, our approach is competitive with two strong alternatives from the literature. We also use the learned artificial language to analyze model behavior, and discover that using a similar bridge language increases knowledge-sharing among the remaining languages.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
328,171
2411.14100
BEST-STD: Bidirectional Mamba-Enhanced Speech Tokenization for Spoken Term Detection
Spoken term detection (STD) is often hindered by reliance on frame-level features and the computationally intensive DTW-based template matching, limiting its practicality. To address these challenges, we propose a novel approach that encodes speech into discrete, speaker-agnostic semantic tokens. This facilitates fast retrieval using text-based search algorithms and effectively handles out-of-vocabulary terms. Our approach focuses on generating consistent token sequences across varying utterances of the same term. We also propose a bidirectional state space modeling within the Mamba encoder, trained in a self-supervised learning framework, to learn contextual frame-level features that are further encoded into discrete tokens. Our analysis shows that our speech tokens exhibit greater speaker invariance than those from existing tokenizers, making them more suitable for STD tasks. Empirical evaluation on LibriSpeech and TIMIT databases indicates that our method outperforms existing STD baselines while being more efficient.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
510,041
2201.01942
Efficiently Disentangle Causal Representations
This paper proposes an efficient approach to learning disentangled representations with causal mechanisms based on the difference of conditional probabilities in original and new distributions. We approximate the difference with models' generalization abilities so that it fits in the standard machine learning framework and can be efficiently computed. In contrast to the state-of-the-art approach, which relies on the learner's adaptation speed to new distribution, the proposed approach only requires evaluating the model's generalization ability. We provide a theoretical explanation for the advantage of the proposed method, and our experiments show that the proposed technique is 1.9--11.0$\times$ more sample efficient and 9.4--32.4 times quicker than the previous method on various tasks. The source code is available at \url{https://github.com/yuanpeng16/EDCR}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
274,401
2306.13800
A First Order Meta Stackelberg Method for Robust Federated Learning
Previous research has shown that federated learning (FL) systems are exposed to an array of security risks. Despite the proposal of several defensive strategies, they tend to be non-adaptive and specific to certain types of attacks, rendering them ineffective against unpredictable or adaptive threats. This work models adversarial federated learning as a Bayesian Stackelberg Markov game (BSMG) to capture the defender's incomplete information of various attack types. We propose meta-Stackelberg learning (meta-SL), a provably efficient meta-learning algorithm, to solve the equilibrium strategy in BSMG, leading to an adaptable FL defense. We demonstrate that meta-SL converges to the first-order $\varepsilon$-equilibrium point in $O(\varepsilon^{-2})$ gradient iterations, with $O(\varepsilon^{-4})$ samples needed per iteration, matching the state of the art. Empirical evidence indicates that our meta-Stackelberg framework performs exceptionally well against potent model poisoning and backdoor attacks of an uncertain nature.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
375,404
1702.05985
Fano's inequality for random variables
We extend Fano's inequality, which controls the average probability of events in terms of the average of some $f$--divergences, to work with arbitrary events (not necessarily forming a partition) and even with arbitrary $[0,1]$--valued random variables, possibly in continuously infinite number. We provide two applications of these extensions, in which the consideration of random variables is particularly handy: we offer new and elegant proofs for existing lower bounds, on Bayesian posterior concentration (minimax or distribution-dependent) rates and on the regret in non-stochastic sequential learning.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,511
2408.04181
EdgeShield: A Universal and Efficient Edge Computing Framework for Robust AI
The increasing prevalence of adversarial attacks on Artificial Intelligence (AI) systems has created a need for innovative security measures. However, the current methods of defending against these attacks often come with a high computing cost and require back-end processing, making real-time defense challenging. Fortunately, there have been remarkable advancements in edge-computing, which make it easier to deploy neural networks on edge devices. Building upon these advancements, we propose an edge framework design to enable universal and efficient detection of adversarial attacks. This framework incorporates an attention-based adversarial detection methodology and a lightweight detection network formation, making it suitable for a wide range of neural networks and can be deployed on edge devices. To assess the effectiveness of our proposed framework, we conducted evaluations on five neural networks. The results indicate an impressive 97.43% F-score can be achieved, demonstrating the framework's proficiency in detecting adversarial attacks. Moreover, our proposed framework also exhibits significantly reduced computing complexity and cost in comparison to previous detection methods. This aspect is particularly beneficial as it ensures that the defense mechanism can be efficiently implemented in real-time on-edge devices.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
479,277
2309.15483
Energy-Efficient Precoding Designs for Multi-User Visible Light Communication Systems with Confidential Messages
This paper studies energy-efficient precoding designs for multi-user visible light communication (VLC) systems from the perspective of physical layer security where users' messages must be kept mutually confidential. For such systems, we first derive a lower bound on the achievable secrecy rate of each user. Next, the total power consumption for illumination and data transmission is thoroughly analyzed. We then tackle the problem of maximizing energy efficiency, given that each user's secrecy rate satisfies a certain threshold. The design problem is shown to be non-convex fractional programming, which renders finding the optimal solution computationally prohibitive. Our aim in this paper is, therefore, to find sub-optimal yet low complexity solutions. For this purpose, the traditional Dinkelbach algorithm is first employed to reformulate the original problem to a non-fractional parameterized one. Two different approaches based on the convex-concave procedure (CCCP) and Semidefinite Relaxation (SDR) are utilized to solve the non-convex parameterized problem. In addition, to further reduce the complexity, we investigate a design using the zero-forcing (ZF) technique. Numerical results are conducted to show the feasibility, convergence, and performance of the proposed algorithms depending on different parameters of the system.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
394,981
2502.09850
Elastic Representation: Mitigating Spurious Correlations for Group Robustness
Deep learning models can suffer from severe performance degradation when relying on spurious correlations between input features and labels, making the models perform well on training data but have poor prediction accuracy for minority groups. This problem arises especially when training data are limited or imbalanced. While most prior work focuses on learning invariant features (with consistent correlations to y), it overlooks the potential harm of spurious correlations between features. We hereby propose Elastic Representation (ElRep) to learn features by imposing Nuclear- and Frobenius-norm penalties on the representation from the last layer of a neural network. Similar to the elastic net, ElRep enjoys the benefits of learning important features without losing feature diversity. The proposed method is simple yet effective. It can be integrated into many deep learning approaches to mitigate spurious correlations and improve group robustness. Moreover, we theoretically show that ElRep has minimum negative impacts on in-distribution predictions. This is a remarkable advantage over approaches that prioritize minority groups at the cost of overall performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
533,629
2209.02000
Visual Odometry with Neuromorphic Resonator Networks
Visual Odometry (VO) is a method to estimate self-motion of a mobile robot using visual sensors. Unlike odometry based on integrating differential measurements that can accumulate errors, such as inertial sensors or wheel encoders, visual odometry is not compromised by drift. However, image-based VO is computationally demanding, limiting its application in use cases with low-latency, -memory, and -energy requirements. Neuromorphic hardware offers low-power solutions to many vision and AI problems, but designing such solutions is complicated and often has to be assembled from scratch. Here we propose to use Vector Symbolic Architecture (VSA) as an abstraction layer to design algorithms compatible with neuromorphic hardware. Building from a VSA model for scene analysis, described in our companion paper, we present a modular neuromorphic algorithm that achieves state-of-the-art performance on two-dimensional VO tasks. Specifically, the proposed algorithm stores and updates a working memory of the presented visual environment. Based on this working memory, a resonator network estimates the changing location and orientation of the camera. We experimentally validate the neuromorphic VSA-based approach to VO with two benchmarks: one based on an event camera dataset and the other in a dynamic scene with a robotic task.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
true
false
false
316,071
1108.0355
Using Java for distributed computing in the Gaia satellite data processing
In recent years Java has matured to a stable easy-to-use language with the flexibility of an interpreter (for reflection etc.) but the performance and type checking of a compiled language. When we started using Java for astronomical applications around 1999 they were the first of their kind in astronomy. Now a great deal of astronomy software is written in Java as are many business applications. We discuss the current environment and trends concerning the language and present an actual example of scientific use of Java for high-performance distributed computing: ESA's mission Gaia. The Gaia scanning satellite will perform a galactic census of about 1000 million objects in our galaxy. The Gaia community has chosen to write its processing software in Java. We explore the manifold reasons for choosing Java for this large science collaboration. Gaia processing is numerically complex but highly distributable, some parts being embarrassingly parallel. We describe the Gaia processing architecture and its realisation in Java. We delve into the astrometric solution which is the most advanced and most complex part of the processing. The Gaia simulator is also written in Java and is the most mature code in the system. This has been successfully running since about 2005 on the supercomputer "Marenostrum" in Barcelona. We relate experiences of using Java on a large shared machine. Finally we discuss Java, including some of its problems, for scientific computing.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
11,535
1908.09573
Embarrassingly Simple Binary Representation Learning
Recent binary representation learning models usually require sophisticated binary optimization, similarity measure or even generative models as auxiliaries. However, one may wonder whether these non-trivial components are needed to formulate practical and effective hashing models. In this paper, we answer the above question by proposing an embarrassingly simple approach to binary representation learning. With a simple classification objective, our model only incorporates two additional fully-connected layers onto the top of an arbitrary backbone network, whilst complying with the binary constraints during training. The proposed model lower-bounds the Information Bottleneck (IB) between data samples and their semantics, and can be related to many recent `learning to hash' paradigms. We show that, when properly designed, even such a simple network can generate effective binary codes, by fully exploring data semantics without any held-out alternating updating steps or auxiliary models. Experiments are conducted on conventional large-scale benchmarks, i.e., CIFAR-10, NUS-WIDE, and ImageNet, where the proposed simple model outperforms the state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
142,886
2308.00947
Decomposing and Coupling Saliency Map for Lesion Segmentation in Ultrasound Images
The complex scenario of ultrasound images, in which adjacent tissues (i.e., background) share similar intensity with and even contain richer texture patterns than the lesion region (i.e., foreground), poses a unique challenge for accurate lesion segmentation. This work presents a decomposition-coupling network, called DC-Net, to deal with this challenge in a (foreground-background) saliency map disentanglement-fusion manner. The DC-Net consists of decomposition and coupling subnets, and the former preliminarily disentangles the original image into foreground and background saliency maps, followed by the latter for accurate segmentation under the assistance of saliency prior fusion. The coupling subnet involves three aspects of fusion strategies, including: 1) regional feature aggregation (via differentiable context pooling operator in the encoder) to adaptively preserve local contextual details with the larger receptive field during dimension reduction; 2) relation-aware representation fusion (via cross-correlation fusion module in the decoder) to efficiently fuse low-level visual characteristics and high-level semantic features during resolution restoration; 3) dependency-aware prior incorporation (via coupler) to reinforce foreground-salient representation with the complementary information derived from background representation. Furthermore, a harmonic loss function is introduced to encourage the network to focus more attention on low-confidence and hard samples. The proposed method is evaluated on two ultrasound lesion segmentation tasks, which demonstrates the remarkable performance improvement over existing state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
383,089
1802.03656
TextZoo, a New Benchmark for Reconsidering Text Classification
Text representation is a fundamental concern in Natural Language Processing, especially in text classification. Recently, many neural network approaches with delicate representation models (e.g. FASTTEXT, CNN, RNN and many hybrid models with attention mechanisms) claimed that they achieved state-of-the-art results on specific text classification datasets. However, the field lacks a unified benchmark to compare these models and reveal the advantage of each sub-component for various settings. We re-implement more than 20 popular text representation models for classification on more than 10 datasets. In this paper, we reconsider the text classification task from the perspective of neural networks and present several findings with analysis of the above results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
90,033
2311.11099
Introducing NCL-SM: A Fully Annotated Dataset of Images from Human Skeletal Muscle Biopsies
Single cell analysis of skeletal muscle (SM) tissue is a fundamental tool for understanding many neuromuscular disorders. For this analysis to be reliable and reproducible, identification of individual fibres within microscopy images (segmentation) of SM tissue should be precise. There is currently no tool or pipeline that makes automatic and precise segmentation and curation of images of SM tissue cross-sections possible. Biomedical scientists in this field rely on custom tools and general machine learning (ML) models, both followed by labour intensive and subjective manual interventions to get the segmentation right. We believe that automated, precise, reproducible segmentation is possible by training ML models. However, there are currently no good quality, publicly available annotated imaging datasets available for ML model training. In this paper we release NCL-SM: a high quality bioimaging dataset of 46 human tissue sections from healthy control subjects and from patients with genetically diagnosed muscle pathology. These images include $>$ 50k manually segmented muscle fibres (myofibres). In addition we also curated high quality myofibres and annotated reasons for rejecting low quality myofibres and regions in SM tissue images, making this data completely ready for downstream analysis. This, we believe, will pave the way for development of a fully automatic pipeline that identifies individual myofibres within images of tissue sections and, in particular, also classifies individual myofibres that are fit for further analysis.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
408,795
1812.03892
An Open-Source System for Vision-Based Micro-Aerial Vehicle Mapping, Planning, and Flight in Cluttered Environments
We present an open-source system for Micro-Aerial Vehicle autonomous navigation from vision-based sensing. Our system focuses on dense mapping, safe local planning, and global trajectory generation, especially when using narrow field of view sensors in very cluttered environments. In addition, details about other necessary parts of the system and special considerations for applications in real-world scenarios are presented. We focus our experiments on evaluating global planning, path smoothing, and local planning methods on real maps made on MAVs in realistic search and rescue and industrial inspection scenarios. We also perform thousands of simulations in cluttered synthetic environments, and finally validate the complete system in real-world experiments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
116,101
2006.03715
Using Stable Matching to Optimize the Balance between Accuracy and Diversity in Recommendation
Increasing aggregate diversity (or catalog coverage) is an important system-level objective in many recommendation domains where it may be desirable to mitigate the popularity bias and to improve the coverage of long-tail items in recommendations given to users. This is especially important in multistakeholder recommendation scenarios where it may be important to optimize utilities not just for the end user, but also for other stakeholders such as item sellers or producers who desire a fair representation of their items across recommendation lists produced by the system. Unfortunately, attempts to increase aggregate diversity often result in lower recommendation accuracy for end users. Thus, addressing this problem requires an approach that can effectively manage the trade-offs between accuracy and aggregate diversity. In this work, we propose a two-sided post-processing approach in which both user and item utilities are considered. Our goal is to maximize aggregate diversity while minimizing loss in recommendation accuracy. Our solution is a generalization of the Deferred Acceptance algorithm which was proposed as an efficient algorithm to solve the well-known stable matching problem. We prove that our algorithm results in a unique user-optimal stable match between items and users. Using three recommendation datasets, we empirically demonstrate the effectiveness of our approach in comparison to several baselines. In particular, our results show that the proposed solution is quite effective in increasing aggregate diversity and item-side utility while optimizing recommendation accuracy for end users.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
180,410
1511.04970
Learning about Spanish dialects through Twitter
This paper maps the large-scale variation of the Spanish language by employing a corpus based on geographically tagged Twitter messages. Lexical dialects are extracted from an analysis of variants of tens of concepts. The resulting maps show linguistic variation on an unprecedented scale across the globe. We discuss the properties of the main dialects within a machine learning approach and find that varieties spoken in urban areas have an international character in contrast to country areas where dialects show a more regional uniformity.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
48,975
2003.04289
Knowledge distillation via adaptive instance normalization
This paper addresses the problem of model compression via knowledge distillation. To this end, we propose a new knowledge distillation method based on transferring feature statistics, specifically the channel-wise mean and variance, from the teacher to the student. Our method goes beyond the standard way of enforcing the mean and variance of the student to be similar to those of the teacher through an $L_2$ loss, which we found it to be of limited effectiveness. Specifically, we propose a new loss based on adaptive instance normalization to effectively transfer the feature statistics. The main idea is to transfer the learned statistics back to the teacher via adaptive instance normalization (conditioned on the student) and let the teacher network "evaluate" via a loss whether the statistics learned by the student are reliably transferred. We show that our distillation method outperforms other state-of-the-art distillation methods over a large set of experimental settings including different (a) network architectures, (b) teacher-student capacities, (c) datasets, and (d) domains.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
167,513
2106.09835
Dual-Teacher Class-Incremental Learning With Data-Free Generative Replay
This paper proposes two novel knowledge transfer techniques for class-incremental learning (CIL). First, we propose data-free generative replay (DF-GR) to mitigate catastrophic forgetting in CIL by using synthetic samples from a generative model. In the conventional generative replay, the generative model is pre-trained for old data and shared in extra memory for later incremental learning. In our proposed DF-GR, we train a generative model from scratch without using any training data, based on the pre-trained classification model from the past, so we curtail the cost of sharing pre-trained generative models. Second, we introduce dual-teacher information distillation (DT-ID) for knowledge distillation from two teachers to one student. In CIL, we use DT-ID to learn new classes incrementally based on the pre-trained model for old classes and another model (pre-)trained on the new data for new classes. We implemented the proposed schemes on top of one of the state-of-the-art CIL methods and showed the performance improvement on CIFAR-100 and ImageNet datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
true
false
false
241,805
2109.02865
Journalistic Guidelines Aware News Image Captioning
The task of news article image captioning aims to generate descriptive and informative captions for news article images. Unlike conventional image captions that simply describe the content of the image in general terms, news image captions follow journalistic guidelines and rely heavily on named entities to describe the image content, often drawing context from the whole article they are associated with. In this work, we propose a new approach to this task, motivated by caption guidelines that journalists follow. Our approach, Journalistic Guidelines Aware News Image Captioning (JoGANIC), leverages the structure of captions to improve the generation quality and guide our representation design. Experimental results, including detailed ablation studies, on two large-scale publicly available datasets show that JoGANIC substantially outperforms state-of-the-art methods both on caption generation and named entity related metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,879
2411.08599
A Preview of XiYan-SQL: A Multi-Generator Ensemble Framework for Text-to-SQL
To tackle the challenges of large language model performance in natural language to SQL tasks, we introduce XiYan-SQL, an innovative framework that employs a multi-generator ensemble strategy to improve candidate generation. We introduce M-Schema, a semi-structured schema representation method designed to enhance the understanding of database structures. To enhance the quality and diversity of generated candidate SQL queries, XiYan-SQL integrates the significant potential of in-context learning (ICL) with the precise control of supervised fine-tuning. On the one hand, we propose a series of training strategies to fine-tune models to generate high-quality candidates with diverse preferences. On the other hand, we implement the ICL approach with an example selection method based on named entity recognition to prevent overemphasis on entities. The refiner optimizes each candidate by correcting logical or syntactical errors. To address the challenge of identifying the best candidate, we fine-tune a selection model to distinguish nuances of candidate SQL queries. The experimental results on multiple dialect datasets demonstrate the robustness of XiYan-SQL in addressing challenges across different scenarios. Overall, our proposed XiYan-SQL achieves state-of-the-art execution accuracy of 75.63% on the Bird benchmark, 89.65% on the Spider test set, 69.86% on SQL-Eval, and 41.20% on NL2GQL. The proposed framework not only enhances the quality and diversity of SQL queries but also outperforms previous methods.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
true
false
507,947
2303.06906
A collection of memos dedicated to exact base-21 (EBTO) and quasi base-21 (QBTO) codes
This collection bundles the following memos dedicated to the so-called exact base-21 (EBTO) and quasi base-21 (QBTO) serial transport codes: [1] "Base-21 Scrambling" (discusses EBTO codes; present at pp. 1-4 in the bundle); [2] "Base-21 Word Alignment and Boundary Detection" (EBTO, pp. 5-8); [3] "Quasi Base-21 Words" (QBTO, pp. 9-12); [4] "Quasi Base-21 Words Generated Compactly" (QBTO, pp. 13-16); and [5] "Quasi Base-21 Words Balanced on the Framework" (QBTO, pp. 17-20).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
351,047
2003.09439
ROAM: Random Layer Mixup for Semi-Supervised Learning in Medical Imaging
Medical image segmentation is one of the major challenges addressed by machine learning methods. Yet, deep learning methods depend heavily on a large amount of annotated data, which is time-consuming and costly to obtain. Semi-supervised learning methods approach this problem by leveraging an abundant amount of unlabeled data along with a small amount of labeled data in the training process. Recently, the MixUp regularizer has been successfully introduced to semi-supervised learning methods, showing superior performance. MixUp augments the model with new data points through linear interpolation of the data at the input space. We argue that this option is limited. Instead, we propose ROAM, a RandOm lAyer Mixup, which encourages the network to be less confident for interpolated data points at a randomly selected space. ROAM generates more data points that have never been seen before, and hence it avoids over-fitting and enhances the generalization ability. We conduct extensive experiments to validate our method on three publicly available datasets on whole-brain image segmentation. ROAM achieves state-of-the-art (SOTA) results in fully supervised (89.5%) and semi-supervised (87.0%) settings, with a relative improvement of up to 2.40% and 16.50%, respectively, for whole-brain segmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
169,047
1810.11823
Multi-Spectral Imaging via Computed Tomography (MUSIC) - Comparing Unsupervised Spectral Segmentations for Material Differentiation
Multi-spectral computed tomography is an emerging technology for the non-destructive identification of object materials and the study of their physical properties. Applications of this technology can be found in various scientific and industrial contexts, such as luggage scanning at airports. Material distinction and its identification is challenging, even with spectral x-ray information, due to acquisition noise, tomographic reconstruction artefacts and scanning setup application constraints. We present MUSIC - an open-access multi-spectral CT dataset in 2D and 3D - to promote further research in the area of material identification. We demonstrate the value of this dataset on the image analysis challenge of object segmentation purely based on the spectral response of its composing materials. In this context, we compare the segmentation accuracy of fast adaptive mean shift (FAMS) and unconstrained graph cuts on both datasets. We further discuss the impact of reconstruction artefacts and segmentation controls on the achievable results. The dataset, related software packages, and further documentation are made available to the imaging community in an open-access manner to promote further data-driven research on the subject.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
111,603