Dataset schema (one record per arXiv paper):
- id: string (9 to 16 characters; the arXiv identifier)
- title: string (4 to 278 characters)
- abstract: string (3 to 4.08k characters)
- 18 boolean label columns (2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64 (0 to 541k)

Each record below lists the id, title, and abstract, followed by the label columns whose flag is true and the record's __index_level_0__ value.
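A minimal sketch of consuming rows with this schema using pandas; the file name below is a placeholder, since this preview does not state where the data is hosted.

```python
# Minimal sketch, assuming the records are available as a local Parquet file;
# "arxiv_multilabel.parquet" is a placeholder name, not the dataset's real location.
import pandas as pd

LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

df = pd.read_parquet("arxiv_multilabel.parquet")  # placeholder path

def active_labels(row: pd.Series) -> list[str]:
    # Collect the category columns whose boolean flag is set for this record.
    return [col for col in LABEL_COLUMNS if row[col]]

first = df.iloc[0]
print(first["id"], first["title"])
print("labels:", active_labels(first))
```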
1905.08284
Enriching Pre-trained Language Model with Entity Information for Relation Classification
Relation classification is an important NLP task to extract relations between entities. The state-of-the-art methods for relation classification are primarily based on Convolutional or Recurrent Neural Networks. Recently, the pre-trained BERT model achieves very successful results in many NLP classification / sequence labeling tasks. Relation classification differs from those tasks in that it relies on information of both the sentence and the two target entities. In this paper, we propose a model that both leverages the pre-trained BERT language model and incorporates information from the target entities to tackle the relation classification task. We locate the target entities and transfer the information through the pre-trained architecture and incorporate the corresponding encoding of the two entities. We achieve significant improvement over the state-of-the-art method on the SemEval-2010 task 8 relational dataset.
labels: cs.CL | __index_level_0__: 131,436

2306.05781
Adaptivity Complexity for Causal Graph Discovery
Causal discovery from interventional data is an important problem, where the task is to design an interventional strategy that learns the hidden ground truth causal graph $G(V,E)$ on $|V| = n$ nodes while minimizing the number of performed interventions. Most prior interventional strategies broadly fall into two categories: non-adaptive and adaptive. Non-adaptive strategies decide on a single fixed set of interventions to be performed while adaptive strategies can decide on which nodes to intervene on sequentially based on past interventions. While adaptive algorithms may use exponentially fewer interventions than their non-adaptive counterparts, there are practical concerns that constrain the amount of adaptivity allowed. Motivated by this trade-off, we study the problem of $r$-adaptivity, where the algorithm designer recovers the causal graph under a total of $r$ sequential rounds whilst trying to minimize the total number of interventions. For this problem, we provide an $r$-adaptive algorithm that achieves an $O(\min\{r,\log n\} \cdot n^{1/\min\{r,\log n\}})$ approximation with respect to the verification number, a well-known lower bound for adaptive algorithms. Furthermore, for every $r$, we show that our approximation is tight. Our definition of $r$-adaptivity interpolates nicely between the non-adaptive ($r=1$) and fully adaptive ($r=n$) settings, where our approximation simplifies to $O(n)$ and $O(\log n)$ respectively, matching the best-known approximation guarantees for both extremes. Our results also extend naturally to the bounded-size intervention setting.
labels: cs.AI, cs.LG, Other | __index_level_0__: 372,333

1304.6078
Automating the Dispute Resolution in Task Dependency Network
When perturbations or unexpected events occur, agents need protocols for repairing or reforming the supply chain. An unfortunate contingency can raise the cost of performance so much that breaching the current contract may be more efficient. In our framework, the principles of contract law are applied to set penalties: expectation damages, opportunity cost, reliance damages, and party-designed remedies are introduced into the task dependency model.
labels: cs.AI | __index_level_0__: 24,142

1905.09788
Multi-Sample Dropout for Accelerated Training and Better Generalization
Dropout is a simple but efficient regularization technique for achieving better generalization of deep neural networks (DNNs); hence it is widely used in tasks based on DNNs. During training, dropout randomly discards a portion of the neurons to avoid overfitting. This paper presents an enhanced dropout technique, which we call multi-sample dropout, for both accelerating training and improving generalization over the original dropout. The original dropout creates a randomly selected subset (called a dropout sample) from the input in each training iteration while the multi-sample dropout creates multiple dropout samples. The loss is calculated for each sample, and then the sample losses are averaged to obtain the final loss. This technique can be easily implemented by duplicating a part of the network after the dropout layer while sharing the weights among the duplicated fully connected layers. Experimental results using image classification tasks including ImageNet, CIFAR-10, and CIFAR-100 showed that multi-sample dropout accelerates training. Moreover, the networks trained using multi-sample dropout achieved lower error rates compared to networks trained with the original dropout. The additional computation cost due to the duplicated operations is not significant for deep convolutional networks because most of the computation time is consumed in the convolution layers before the dropout layer, which are not duplicated.
labels: cs.LG, cs.CV, cs.NE | __index_level_0__: 131,832

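The mechanism this abstract describes (several dropout masks through one shared head, per-sample losses averaged) is compact enough to sketch; the layer sizes and sample count below are our illustrative choices, not the paper's settings.

```python
# Illustrative PyTorch sketch of multi-sample dropout as described in the
# abstract: one shared fully connected head applied to several dropout
# samples of the same features, with the per-sample losses averaged.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiSampleDropoutHead(nn.Module):
    def __init__(self, in_features=512, num_classes=10, p=0.5, num_samples=8):
        super().__init__()
        self.dropout = nn.Dropout(p)
        self.fc = nn.Linear(in_features, num_classes)  # weights shared across samples
        self.num_samples = num_samples

    def loss(self, features, targets):
        # Average the loss over independently drawn dropout samples.
        losses = [
            F.cross_entropy(self.fc(self.dropout(features)), targets)
            for _ in range(self.num_samples)
        ]
        return torch.stack(losses).mean()
```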
2101.07676
COTORRA: COntext-aware Testbed fOR Robotic Applications
Edge & Fog computing have received considerable attention as promising candidates for the evolution of robotic systems. In this letter, we propose COTORRA, an Edge & Fog driven robotic testbed that combines context information with robot sensor data to validate innovative concepts for robotic systems prior to being applied in a production environment. In a lab/university setting, we established COTORRA as an easily applicable and modular testbed on top of heterogeneous network infrastructure. COTORRA is open for pluggable robotic applications. To verify its feasibility and assess its performance, we ran a set of experiments showing that autonomous navigation applications can achieve target latencies below 15 ms or perform an inter-domain (DLT) federation within 19 seconds.
labels: cs.RO, Other | __index_level_0__: 216,110

2403.11219
Causality from Bottom to Top: A Survey
Causality has become a fundamental approach for explaining the relationships between events, phenomena, and outcomes in various fields of study. It has invaded various fields and applications, such as medicine, healthcare, economics, finance, fraud detection, cybersecurity, education, public policy, recommender systems, anomaly detection, robotics, control, sociology, marketing, and advertising. In this paper, we survey its development over the past five decades, shedding light on the differences between causality and other approaches, as well as the preconditions for using it. Furthermore, the paper illustrates how causality interacts with new approaches such as Artificial Intelligence (AI), Generative AI (GAI), Machine and Deep Learning, Reinforcement Learning (RL), and Fuzzy Logic. We study the impact of causality on various fields, its contribution, and its interaction with state-of-the-art approaches. Additionally, the paper exemplifies the trustworthiness and explainability of causality models. We offer several ways to evaluate causality models and discuss future directions.
labels: cs.AI | __index_level_0__: 438,598

2308.05411
Explainable AI applications in the Medical Domain: a systematic review
Artificial Intelligence in Medicine has made significant progress with emerging applications in medical imaging, patient care, and other areas. While these applications have proven successful in retrospective studies, very few of them were applied in practice. The field of Medical AI faces various challenges in terms of building user trust, complying with regulations, and using data ethically. Explainable AI (XAI) aims to enable humans to understand AI and trust its results. This paper presents a literature review on the recent developments of XAI solutions for medical decision support, based on a representative sample of 198 articles published in recent years. The systematic synthesis of the relevant articles resulted in several findings: (1) model-agnostic XAI techniques were mostly employed in these solutions; (2) deep learning models are utilized more than other types of machine learning models; (3) explainability was applied to promote trust, but very few works reported the physicians' participation in the loop; (4) visual and interactive user interfaces are more useful in understanding the explanation and the recommendation of the system. More research is needed on collaboration between medical and AI experts, which could guide the development of suitable frameworks for the design, implementation, and evaluation of XAI solutions in medicine.
labels: cs.AI, cs.LG | __index_level_0__: 384,784

2403.16666
Revisiting the Sleeping Beauty problem
The Sleeping Beauty problem is a probability riddle that has had no definite solution for more than two decades, and its solution is of great interest in many fields of knowledge. There are two main competing solutions to the problem: the halfer approach and the thirder approach. The main reason for disagreement in the literature is connected to the use of different probability spaces to represent the same probabilistic riddle. In this work, we analyse the problem from a mathematical perspective, identifying probability distributions induced directly from the thought experiment's rules. The precise choices of probability spaces provide both halfer and thirder solutions to the problem. To try to decide which approach to follow, a criterion involving the information available to Sleeping Beauty is proposed.
labels: cs.AI | __index_level_0__: 441,144

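The halfer/thirder split this abstract describes comes down to what one counts; a small simulation (our own illustration, not from the paper) makes the two answers concrete.

```python
# Monte Carlo illustration (ours, not the paper's): Beauty is woken once if
# the coin lands heads and twice if tails. Conditioning per awakening gives
# about 1/3 for heads; conditioning per experiment gives 1/2.
import random

trials = 100_000
heads_awakenings = tails_awakenings = 0
for _ in range(trials):
    if random.random() < 0.5:      # heads: one awakening
        heads_awakenings += 1
    else:                          # tails: two awakenings
        tails_awakenings += 2

per_awakening = heads_awakenings / (heads_awakenings + tails_awakenings)
print(f"P(heads | an awakening) ~ {per_awakening:.3f}  (thirder: 1/3)")
print("P(heads | an experiment) = 0.5               (halfer: 1/2)")
```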
2209.00475
REMOT: A Region-to-Whole Framework for Realistic Human Motion Transfer
Human Video Motion Transfer (HVMT) aims, given an image of a source person, to generate a video of that person imitating the motion of a driving person. Existing methods for HVMT mainly exploit Generative Adversarial Networks (GANs) to perform the warping operation based on the flow estimated from the source person image and each driving video frame. However, these methods always generate obvious artifacts due to the dramatic differences in poses, scales, and shifts between the source person and the driving person. To overcome these challenges, this paper presents a novel REgion-to-whole human MOtion Transfer (REMOT) framework based on GANs. To generate realistic motions, REMOT adopts a progressive generation paradigm: it first generates each body part in the driving pose without flow-based warping, then composites all parts into a complete person performing the driving motion. Moreover, to preserve the natural global appearance, we design a Global Alignment Module to align the scale and position of the source person with those of the driving person based on their layouts. Furthermore, we propose a Texture Alignment Module to keep each part of the person aligned according to the similarity of the texture. Finally, through extensive quantitative and qualitative experiments, our REMOT achieves state-of-the-art results on two public benchmarks.
labels: cs.CV | __index_level_0__: 315,594

2304.07139
Neuromorphic Optical Flow and Real-time Implementation with Event Cameras
Optical flow provides information on relative motion, an important component in many computer vision pipelines. Neural networks provide highly accurate optical flow, yet their complexity is often prohibitive for application at the edge or in robots, where efficiency and latency play a crucial role. To address this challenge, we build on the latest developments in event-based vision and spiking neural networks. We propose a new network architecture, inspired by Timelens, that improves the state-of-the-art self-supervised optical flow accuracy when operated both in spiking and non-spiking mode. To implement a real-time pipeline with a physical event camera, we propose a methodology for principled model simplification based on activity and latency analysis. We demonstrate high-speed optical flow prediction with almost two orders of magnitude reduced complexity while maintaining accuracy, opening the path for real-time deployments.
labels: cs.CV, cs.NE | __index_level_0__: 358,243

1505.02891
Ontology Based Document Clustering Using MapReduce
Nowadays, document clustering is considered a data-intensive task due to the dramatic, fast increase in the number of available documents. Moreover, the feature sets that represent those documents are very large. The most common method for representing documents is the vector space model, which represents document features as a bag of words and does not capture semantic relations between words. In this paper we introduce a distributed implementation of bisecting k-means using the MapReduce programming model. The aim behind our proposed implementation is to solve the problem of clustering data-intensive document collections. In addition, we propose integrating the WordNet ontology with bisecting k-means in order to utilize the semantic relations between words to enhance document clustering results. Our experimental results show that using lexical categories for nouns only enhances internal evaluation measures of document clustering and reduces the document features from thousands to tens of features. Our experiments were conducted using Amazon Elastic MapReduce to deploy the bisecting k-means algorithm.
labels: cs.IR, cs.DB | __index_level_0__: 43,013

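Bisecting k-means itself (without the MapReduce distribution or the WordNet integration the abstract adds) is easy to sketch; this single-machine version, under our own simplifying assumptions, is for intuition only.

```python
# Single-machine sketch of bisecting k-means (the paper's distributed
# MapReduce and WordNet parts are omitted): repeatedly split the largest
# cluster in two with standard 2-means until k clusters exist.
import numpy as np
from sklearn.cluster import KMeans

def bisecting_kmeans(X, k, seed=0):
    clusters = [np.arange(len(X))]          # start with one cluster of all points
    while len(clusters) < k:
        largest = max(range(len(clusters)), key=lambda i: len(clusters[i]))
        idx = clusters.pop(largest)
        labels = KMeans(n_clusters=2, random_state=seed, n_init=10).fit_predict(X[idx])
        clusters.append(idx[labels == 0])
        clusters.append(idx[labels == 1])
    return clusters

X = np.random.rand(200, 50)                 # stand-in for TF-IDF document vectors
print([len(c) for c in bisecting_kmeans(X, 5)])
```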
2302.03497
MMRec: Simplifying Multimodal Recommendation
This paper presents MMRec, an open-source toolbox for multimodal recommendation. MMRec simplifies and canonicalizes the process of implementing and comparing multimodal recommendation models. The objective of MMRec is to provide a unified and configurable arena that minimizes the effort of implementing and testing multimodal recommendation models. It supports multimodal models ranging from traditional matrix factorization to modern graph-based algorithms, all capable of fusing information from multiple modalities simultaneously. Our documentation, examples, and source code are available at \url{https://github.com/enoche/MMRec}.
labels: cs.IR, Other | __index_level_0__: 344,359

2412.07349
Disturbance Observer-Parameterized Control Barrier Function with Adaptive Safety Bounds
This letter presents a nonlinear disturbance observer-parameterized control barrier function (DOp-CBF) designed for robust safety control under external disturbances. The framework emphasizes that the safety bounds depend on the disturbances, acknowledging the critical impact of disturbances on system safety. This work incorporates a disturbance observer (DO) as an adaptive mechanism in the design of the safety bounds. Instead of considering the worst-case scenario, the safety bounds are dynamically adjusted using the DO. The forward invariance of the proposed method regardless of the observer error is ensured, and the corresponding optimal control formulation is presented. The performance of the proposed method is demonstrated through simulations of a cruise control problem under varying road grades. The influence of road grade on the safe distance between vehicles is analyzed and managed using the DO. The results demonstrate the advantages of this approach in maintaining safety and improving system performance under disturbances.
labels: cs.SY | __index_level_0__: 515,631

2309.02784
Norm Tweaking: High-performance Low-bit Quantization of Large Language Models
As the size of large language models (LLMs) continues to grow, model compression without sacrificing accuracy has become a crucial challenge for deployment. While some quantization methods, such as GPTQ, have made progress in achieving acceptable 4-bit weight-only quantization, attempts at lower-bit quantization often result in severe performance degradation. In this paper, we introduce a technique called norm tweaking, which can be used as a plugin in current PTQ methods to achieve high precision while being cost-efficient. Our approach is inspired by the observation that rectifying the quantized activation distribution to match its float counterpart can readily restore accuracy for LLMs. To achieve this, we carefully design a tweaking strategy that includes calibration data generation and channel-wise distance constraint to update the weights of normalization layers for better generalization. We conduct extensive experiments on various datasets using several open-sourced LLMs. Our method demonstrates significant improvements in both weight-only quantization and joint quantization of weights and activations, surpassing existing PTQ methods. On GLM-130B and OPT-66B, our method even achieves the same level of accuracy at 2-bit quantization as their float ones. Our simple and effective approach makes it more practical for real-world applications.
labels: cs.AI, cs.LG, cs.CL | __index_level_0__: 390,166

2409.12162
Precise Forecasting of Sky Images Using Spatial Warping
The intermittency of solar power, due to occlusion from cloud cover, is one of the key factors inhibiting its widespread use in both commercial and residential settings. Hence, real-time forecasting of solar irradiance for grid-connected photovoltaic systems is necessary to schedule and allocate resources across the grid. Ground-based imagers that capture wide field-of-view images of the sky are commonly used to monitor cloud movement around a particular site in an effort to forecast solar irradiance. However, these wide-FOV imagers capture a distorted image of the sky, in which regions near the horizon are heavily compressed. This hinders the ability to precisely predict cloud motion near the horizon, which especially affects prediction over longer time horizons. In this work, we combat the aforementioned constraint by introducing a deep learning method to predict a future sky image frame with higher resolution than previous methods. Our main contribution is to derive an optimal warping method to counter the adverse effects of clouds at the horizon, and to learn a framework for future sky image prediction that better determines cloud evolution over longer time horizons.
labels: cs.CV | __index_level_0__: 489,464

1707.05662
Learning Powers of Poisson Binomial Distributions
We introduce the problem of simultaneously learning all powers of a Poisson Binomial Distribution (PBD). A PBD of order $n$ is the distribution of a sum of $n$ mutually independent Bernoulli random variables $X_i$, where $\mathbb{E}[X_i] = p_i$. The $k$'th power of this distribution, for $k$ in a range $[m]$, is the distribution of $P_k = \sum_{i=1}^n X_i^{(k)}$, where each Bernoulli random variable $X_i^{(k)}$ has $\mathbb{E}[X_i^{(k)}] = (p_i)^k$. The learning algorithm can query any power $P_k$ several times and succeeds in learning all powers in the range if, with probability at least $1- \delta$, given any $k \in [m]$, it returns a probability distribution $Q_k$ with total variation distance from $P_k$ at most $\epsilon$. We provide almost matching lower and upper bounds on the query complexity of this problem. We first show a lower bound on the query complexity for PBD power instances with many distinct parameters $p_i$ which are separated, and we almost match this lower bound by examining the query complexity of simultaneously learning all the powers of a special class of PBDs resembling the PBDs of our lower bound. We study the fundamental setting of a Binomial distribution, and provide an optimal algorithm which uses $O(1/\epsilon^2)$ samples. Diakonikolas, Kane and Stewart [COLT'16] showed a lower bound of $\Omega(2^{1/\epsilon})$ samples to learn the $p_i$'s within error $\epsilon$. The question of whether sampling from powers of PBDs can reduce this sampling complexity has a negative answer: we show that the exponential number of samples is inevitable. Having sampling access to the powers of a PBD, we then give a nearly optimal algorithm that learns its $p_i$'s. To prove our last two lower bounds we extend the classical minimax risk definition from statistics to estimating functions of sequences of distributions.
labels: cs.LG, Other | __index_level_0__: 77,271

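The abstract's definition of the $k$'th power is direct to operationalize; here is a small sampler (our illustration, matching only that definition, not the paper's learning algorithms).

```python
# Sampler (ours) matching the abstract's definition: the k-th power of a PBD
# with parameters p_i is the sum of independent Bernoullis with means p_i ** k.
import numpy as np

def sample_pbd_power(p, k, size, rng=None):
    rng = rng or np.random.default_rng()
    probs = np.asarray(p) ** k
    # Each draw: sum of n independent Bernoulli(p_i^k) variables.
    return rng.binomial(1, probs, size=(size, len(probs))).sum(axis=1)

p = [0.9, 0.8, 0.5, 0.3]
print(sample_pbd_power(p, k=2, size=5))
```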
2406.07595
VulDetectBench: Evaluating the Deep Capability of Vulnerability Detection with Large Language Models
Large Language Models (LLMs) have training corpora containing large amounts of program code, greatly improving the model's code comprehension and generation capabilities. However, sound comprehensive research on detecting program vulnerabilities, a more specific task related to code, and evaluating the performance of LLMs in this more specialized scenario is still lacking. To address common challenges in vulnerability analysis, our study introduces a new benchmark, VulDetectBench, specifically designed to assess the vulnerability detection capabilities of LLMs. The benchmark comprehensively evaluates LLM's ability to identify, classify, and locate vulnerabilities through five tasks of increasing difficulty. We evaluate the performance of 17 models (both open- and closed-source) and find that while existing models can achieve over 80% accuracy on tasks related to vulnerability identification and classification, they still fall short on specific, more detailed vulnerability analysis tasks, with less than 30% accuracy, making it difficult to provide valuable auxiliary information for professional vulnerability mining. Our benchmark effectively evaluates the capabilities of various LLMs at different levels in the specific task of vulnerability detection, providing a foundation for future research and improvements in this critical area of code security. VulDetectBench is publicly available at https://github.com/Sweetaroo/VulDetectBench.
labels: cs.AI, cs.CR, Other | __index_level_0__: 463,134

2107.12825
Individual Survival Curves with Conditional Normalizing Flows
Survival analysis, or time-to-event modelling, is a classical statistical problem that has garnered a lot of interest for its practical use in epidemiology, demographics or actuarial sciences. Recent advances on the subject from the point of view of machine learning have been concerned with precise per-individual predictions instead of population studies, driven by the rise of individualized medicine. We introduce here a conditional normalizing flow based estimate of the time-to-event density as a way to model highly flexible and individualized conditional survival distributions. We use a novel hierarchical formulation of normalizing flows to enable efficient fitting of flexible conditional distributions without overfitting and show how the normalizing flow formulation can be efficiently adapted to the censored setting. We experimentally validate the proposed approach on a synthetic dataset as well as four open medical datasets and an example of a common financial problem.
labels: cs.LG | __index_level_0__: 248,018

2302.05138
Plan-then-Seam: Towards Efficient Table-to-Text Generation
Table-to-text generation aims at automatically generating text to help people conveniently obtain salient information from tables. Recent works explicitly decompose the generation process into content planning and surface generation stages, employing two autoregressive networks for them respectively. However, they are computationally expensive due to the non-parallelizable nature of autoregressive decoding and the redundant parameters of two networks. In this paper, we propose the first totally non-autoregressive table-to-text model (Plan-then-Seam, PTS) that produces its outputs in parallel with one single network. PTS first writes and calibrates one plan of the content to be generated with a novel rethinking pointer predictor, and then takes the plan as the context for seaming to decode the description. These two steps share parameters and perform iteratively to capture token inter-dependency while keeping parallel decoding. Experiments on two public benchmarks show that PTS achieves a 3.0~5.6 times speedup in inference time and a 50% reduction in parameters, while maintaining at least comparable performance against strong two-stage table-to-text competitors.
labels: cs.CL | __index_level_0__: 344,950

2008.00824
State-of-the-art Techniques in Deep Edge Intelligence
The potential held by the gargantuan volumes of data being generated across networks worldwide has been truly unlocked by machine learning techniques and more recently Deep Learning. The advantages offered by the latter have seen it rapidly becoming a framework of choice for various applications. However, the centralization of computational resources and the need for data aggregation have long been limiting factors in the democratization of Deep Learning applications. Edge Computing is an emerging paradigm that aims to utilize the hitherto untapped processing resources available at the network periphery. Edge Intelligence (EI) has quickly emerged as a powerful alternative to enable learning using the concepts of Edge Computing. Deep Learning-based Edge Intelligence or Deep Edge Intelligence (DEI) lies in this rapidly evolving domain. In this article, we provide an overview of the major constraints in operationalizing DEI. The major research avenues in DEI have been consolidated under Federated Learning, Distributed Computation, Compression Schemes and Conditional Computation. We also present some of the prevalent challenges and highlight prospective research avenues.
labels: cs.AI, cs.LG, Other | __index_level_0__: 190,128

2110.03189
Pointwise Bounds for Distribution Estimation under Communication Constraints
We consider the problem of estimating a $d$-dimensional discrete distribution from its samples observed under a $b$-bit communication constraint. In contrast to most previous results that largely focus on the global minimax error, we study the local behavior of the estimation error and provide \emph{pointwise} bounds that depend on the target distribution $p$. In particular, we show that the $\ell_2$ error decays with $O\left(\frac{\lVert p\rVert_{1/2}}{n2^b}\vee \frac{1}{n}\right)$ (in this paper, we use $a\vee b$ and $a \wedge b$ to denote $\max(a, b)$ and $\min(a,b)$ respectively) when $n$ is sufficiently large; hence it is governed by the \emph{half-norm} of $p$ instead of the ambient dimension $d$. For the achievability result, we propose a two-round sequentially interactive estimation scheme that achieves this error rate uniformly over all $p$. Our scheme is based on a novel local refinement idea, where we first use a standard global minimax scheme to localize $p$ and then use the remaining samples to locally refine our estimate. We also develop a new local minimax lower bound with (almost) matching $\ell_2$ error, showing that any interactive scheme must admit an $\Omega\left( \frac{\lVert p \rVert_{{(1+\delta)}/{2}}}{n2^b}\right)$ $\ell_2$ error for any $\delta > 0$. The lower bound is derived by first finding the best parametric sub-model containing $p$, and then upper bounding the quantized Fisher information under this model. Our upper and lower bounds together indicate that $\mathcal{H}_{1/2}(p) = \log(\lVert p \rVert_{{1}/{2}})$ bits of communication are both sufficient and necessary to achieve the optimal (centralized) performance, where $\mathcal{H}_{{1}/{2}}(p)$ is the R\'enyi entropy of order $1/2$. Therefore, under the $\ell_2$ loss, the correct measure of the local communication complexity at $p$ is its R\'enyi entropy.
labels: cs.IT | __index_level_0__: 259,405

2307.16419
Subspace Distillation for Continual Learning
An ultimate objective in continual learning is to preserve knowledge learned in preceding tasks while learning new tasks. To mitigate forgetting prior knowledge, we propose a novel knowledge distillation technique that takes into account the manifold structure of the latent/output space of a neural network when learning novel tasks. To achieve this, we propose to approximate the data manifold up to first order, thereby benefiting from linear subspaces to model the structure and maintain the knowledge of a neural network while learning novel concepts. We demonstrate that modeling with subspaces provides several intriguing properties, including robustness to noise, and is therefore effective for mitigating catastrophic forgetting in continual learning. We also discuss and show how our proposed method can be adopted to address both classification and segmentation problems. Empirically, we observe that our proposed method outperforms various continual learning methods on several challenging datasets, including Pascal VOC and Tiny-Imagenet. Furthermore, we show how the proposed method can be seamlessly combined with existing learning approaches to improve their performance. The code for this article will be available at https://github.com/csiro-robotics/SDCL.
labels: cs.AI, cs.LG, cs.CV | __index_level_0__: 382,601

1709.05405
Commutativity and Commutative Pairs of Some Differential Equations
In this study, explicit differential equations representing commutative pairs of some well-known second-order linear time-varying systems are derived. The commutativity of these systems is investigated by considering 30 second-order linear differential equations with variable coefficients. For each of these equations, it is shown whether the modeled system has a commutative pair, with or without additional conditions. There are special cases in which both, only one, or neither of the original system and its commutative pair has an explicit analytic solution. Some benefits of commutativity have already been mentioned in the literature, but this paper illustrates a new application in cryptology for obscuring transmitted signals in telecommunication.
labels: cs.SY | __index_level_0__: 80,857

2412.19211
Large Language Models Meet Graph Neural Networks: A Perspective of Graph Mining
Graph mining is an important area in data mining and machine learning that involves extracting valuable information from graph-structured data. In recent years, significant progress has been made in this field through the development of graph neural networks (GNNs). However, GNNs are still deficient in generalizing to diverse graph data. To address this issue, Large Language Models (LLMs) could provide new solutions for graph mining tasks with their superior semantic understanding. In this review, we systematically review the combination and application techniques of LLMs and GNNs and present a novel taxonomy for research in this interdisciplinary field, which involves three main categories: GNN-driving-LLM, LLM-driving-GNN, and GNN-LLM-co-driving. Within this framework, we reveal the capabilities of LLMs in enhancing graph feature extraction as well as improving the effectiveness of downstream tasks such as node classification, link prediction, and community detection. Although LLMs have demonstrated great potential in handling graph-structured data, their high computational requirements and complexity remain challenges. Future research needs to continue to explore how to efficiently fuse LLMs and GNNs to achieve more powerful graph learning and reasoning capabilities and provide new impetus for the development of graph mining techniques.
labels: cs.LG | __index_level_0__: 520,759

2005.07376
Improving Neuroevolution Using Island Extinction and Repopulation
Neuroevolution commonly uses speciation strategies to better explore the search space of neural network architectures. One such speciation strategy is the use of islands, which are also popular for improving the performance and convergence of distributed evolutionary algorithms. However, in this approach some islands can become stagnant and fail to find new best solutions. In this paper, we propose utilizing extinction events and island repopulation to avoid premature convergence. We explore this with the Evolutionary eXploration of Augmenting Memory Models (EXAMM) neuroevolution algorithm. In this strategy, all members of the worst performing island are killed off periodically and repopulated with mutated versions of the global best genome. This island-based strategy is additionally compared to NEAT's (NeuroEvolution of Augmenting Topologies) speciation strategy. Experiments were performed using two different real-world time series datasets (coal-fired power plant and aviation flight data). The results show, with statistical significance, that this island extinction and repopulation strategy evolves better global best genomes than both EXAMM's original island-based strategy and NEAT's speciation strategy.
labels: cs.AI, cs.NE | __index_level_0__: 177,265

2206.12559
Self-supervised Context-aware Style Representation for Expressive Speech Synthesis
Expressive speech synthesis, such as audiobook synthesis, remains challenging for style representation learning and prediction. Deriving style from reference audio or predicting style tags from text requires a huge amount of labeled data, which is costly to acquire and difficult to define and annotate accurately. In this paper, we propose a novel framework for learning style representation from abundant plain text in a self-supervised manner. It leverages an emotion lexicon and uses contrastive learning and deep clustering. We further integrate the style representation as a conditioned embedding in a multi-style Transformer TTS. Compared with multi-style TTS that predicts style tags, trained on the same dataset but with human annotations, our method achieves improved results according to subjective evaluations on both in-domain and out-of-domain test sets in audiobook speech. Moreover, with the implicit context-aware style representation, the emotion transition of synthesized audio in a long paragraph appears more natural. The audio samples are available on the demo page.
labels: cs.SD, cs.AI, cs.CL | __index_level_0__: 304,647

2007.10231
Integrating Network Embedding and Community Outlier Detection via Multiclass Graph Description
Network (or graph) embedding is the task of mapping the nodes of a graph to a lower dimensional vector space such that it preserves the graph properties and facilitates downstream network mining tasks. Real world networks often come with (community) outlier nodes, which behave differently from the regular nodes of the community. These outlier nodes can affect the embedding of the regular nodes if not handled carefully. In this paper, we propose a novel unsupervised graph embedding approach (called DMGD) which integrates outlier and community detection with node embedding. We extend the idea of deep support vector data description to the framework of graph embedding when there are multiple communities present in the given network, and an outlier is characterized relative to its community. We also show theoretical bounds on the number of outliers detected by DMGD. Our formulation boils down to an interesting minimax game between the outliers, community assignments and the node embedding function. We also propose an efficient algorithm to solve this optimization framework. Experimental results on both synthetic and real world networks show the merit of our approach compared to the state of the art.
labels: cs.SI, cs.LG | __index_level_0__: 188,210

2312.08735
Polyper: Boundary Sensitive Polyp Segmentation
We present a new boundary sensitive framework for polyp segmentation, called Polyper. Our method is motivated by the clinical observation that seasoned medical practitioners often leverage the inherent features of interior polyp regions to tackle blurred boundaries. Inspired by this, we propose explicitly leveraging polyp regions to bolster the model's boundary discrimination capability while minimizing computation. Our approach first extracts boundary and polyp regions from the initial segmentation map through morphological operators. Then, we design a boundary sensitive attention mechanism that concentrates on augmenting the features near the boundary regions using the interior polyp regions' characteristics to generate good segmentation results. Our proposed method can be seamlessly integrated with classical encoder networks, like ResNet-50, MiT-B1, and Swin Transformer. To evaluate the effectiveness of Polyper, we conduct experiments on five publicly available challenging datasets, and achieve state-of-the-art performance on all of them. Code is available at https://github.com/haoshao-nku/medical_seg.git.
labels: cs.CV | __index_level_0__: 415,427

1602.09118
Easy Monotonic Policy Iteration
A key problem in reinforcement learning for control with general function approximators (such as deep neural networks and other nonlinear functions) is that, for many algorithms employed in practice, updates to the policy or $Q$-function may fail to improve performance---or worse, actually cause the policy performance to degrade. Prior work has addressed this for policy iteration by deriving tight policy improvement bounds; by optimizing the lower bound on policy improvement, a better policy is guaranteed. However, existing approaches suffer from bounds that are hard to optimize in practice because they include sup norm terms which cannot be efficiently estimated or differentiated. In this work, we derive a better policy improvement bound where the sup norm of the policy divergence has been replaced with an average divergence; this leads to an algorithm, Easy Monotonic Policy Iteration, that generates sequences of policies with guaranteed non-decreasing returns and is easy to implement in a sample-based framework.
labels: cs.AI, cs.LG | __index_level_0__: 52,726

2101.02692
Where2Act: From Pixels to Actions for Articulated 3D Objects
One of the fundamental goals of visual perception is to allow agents to meaningfully interact with their environment. In this paper, we take a step towards that long-term goal -- we extract highly localized actionable information related to elementary actions such as pushing or pulling for articulated objects with movable parts. For example, given a drawer, our network predicts that applying a pulling force on the handle opens the drawer. We propose, discuss, and evaluate novel network architectures that given image and depth data, predict the set of actions possible at each pixel, and the regions over articulated parts that are likely to move under the force. We propose a learning-from-interaction framework with an online data sampling strategy that allows us to train the network in simulation (SAPIEN) and generalizes across categories. Check the website for code and data release: https://cs.stanford.edu/~kaichun/where2act/
labels: cs.RO, cs.CV | __index_level_0__: 214,709

2402.18128
Downstream Task Guided Masking Learning in Masked Autoencoders Using Multi-Level Optimization
Masked Autoencoder (MAE) is a notable method for self-supervised pretraining in visual representation learning. It operates by randomly masking image patches and reconstructing these masked patches using the unmasked ones. A key limitation of MAE lies in its disregard for the varying informativeness of different patches, as it uniformly selects patches to mask. To overcome this, some approaches propose masking based on patch informativeness. However, these methods often do not consider the specific requirements of downstream tasks, potentially leading to suboptimal representations for these tasks. In response, we introduce the Multi-level Optimized Mask Autoencoder (MLO-MAE), a novel framework that leverages end-to-end feedback from downstream tasks to learn an optimal masking strategy during pretraining. Our experimental findings highlight MLO-MAE's significant advancements in visual representation learning. Compared to existing methods, it demonstrates remarkable improvements across diverse datasets and tasks, showcasing its adaptability and efficiency. Our code is available at: https://github.com/Alexiland/MLOMAE
labels: cs.LG, cs.CV | __index_level_0__: 433,297

2211.07687
Uncovering the Portability Limitation of Deep Learning-Based Wireless Device Fingerprints
Recent device fingerprinting approaches rely on deep learning to extract device-specific features solely from raw RF signals to identify, classify and authenticate wireless devices. One widely known issue lies in the inability of these approaches to maintain good performance when the training data and testing data are collected under varying deployment domains. For example, when the learning model is trained on data collected from one receiver but tested on data collected from a different receiver, the performance degrades substantially compared to when both training and testing data are collected using the same receiver. The same also happens when considering other varying domains, like channel condition and protocol configuration. In this paper, we begin by explaining, through testbed experiments, the challenges these fingerprinting techniques face when it comes to domain portability. We then present some ideas on how to address these challenges so as to make deep learning-based device fingerprinting more resilient to domain variability.
labels: cs.LG | __index_level_0__: 330,328

1702.03401
A Minimax Algorithm Better Than Alpha-beta?: No and Yes
This paper makes three main contributions to our understanding of fixed-depth minimax search: (A) A new formulation for Stockman's SSS* algorithm, based on Alpha-Beta, is presented. It solves all the perceived drawbacks of SSS*, finally transforming it into a practical algorithm. In effect, we show that SSS* = Alpha-Beta + transposition tables. The crucial step is the realization that transposition tables contain so-called solution trees, structures that are used in best-first search algorithms like SSS*. Having created a practical version, we present performance measurements with tournament game-playing programs for three different minimax games, yielding results that contradict a number of publications. (B) Based on the insights gained in our attempts at understanding SSS*, we present a framework that facilitates the construction of several best-first fixed-depth game-tree search algorithms, known and new. The framework is based on depth-first null-window Alpha-Beta search, enhanced with storage to allow for the refining of previous search results. It focuses attention on the essential differences between algorithms. (C) We present a new instance of the framework, MTD(f). It is well-suited for use with iterative deepening, and performs better than algorithms that are currently used in most state-of-the-art game-playing programs. We provide experimental evidence to explain why MTD(f) performs better than the other fixed-depth minimax algorithms.
labels: cs.AI | __index_level_0__: 68,122

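The MTD(f) driver loop the abstract describes is compact enough to sketch: repeated null-window Alpha-Beta calls that converge on the minimax value. The `alpha_beta_with_memory` function below is an assumed helper standing in for the paper's memory-enhanced Alpha-Beta; it is not shown here.

```python
# Sketch of the MTD(f) driver as described in the abstract: zoom in on the
# minimax value with a sequence of null-window alpha-beta searches.
# alpha_beta_with_memory is an assumed callable (alpha-beta plus
# transposition table), injected by the caller.
INF = float("inf")

def mtdf(root, first_guess, depth, alpha_beta_with_memory):
    g = first_guess
    lower, upper = -INF, INF
    while lower < upper:
        beta = g + 1 if g == lower else g          # null window at beta
        g = alpha_beta_with_memory(root, beta - 1, beta, depth)
        if g < beta:
            upper = g                              # search failed low
        else:
            lower = g                              # search failed high
    return g
```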
2210.14985
Learning Deep Sensorimotor Policies for Vision-based Autonomous Drone Racing
Autonomous drones can operate in remote and unstructured environments, enabling various real-world applications. However, the lack of effective vision-based algorithms has been a stumbling block to achieving this goal. Existing systems often require hand-engineered components for state estimation, planning, and control. Such a sequential design involves laborious tuning, human heuristics, and compounding delays and errors. This paper tackles the vision-based autonomous-drone-racing problem by learning deep sensorimotor policies. We use contrastive learning to extract robust feature representations from the input images and leverage a two-stage learning-by-cheating framework for training a neural network policy. The resulting policy directly infers control commands with feature representations learned from raw images, forgoing the need for globally-consistent state estimation, trajectory planning, and handcrafted control design. Our experimental results indicate that our vision-based policy can achieve the same level of racing performance as the state-based policy while being robust against different visual disturbances and distractors. We believe this work serves as a stepping-stone toward developing intelligent vision-based autonomous systems that control the drone purely from image inputs, like human pilots.
labels: cs.AI, cs.RO | __index_level_0__: 326,754

2309.02054
An Adaptive Spatial-Temporal Local Feature Difference Method for Infrared Small-moving Target Detection
Detecting small moving targets accurately in infrared (IR) image sequences is a significant challenge. To address this problem, we propose a novel method called spatial-temporal local feature difference (STLFD) with adaptive background suppression (ABS). Our approach utilizes filters in the spatial and temporal domains and performs pixel-level ABS on the output to enhance the contrast between the target and the background. The proposed method comprises three steps. First, we obtain three temporal frame images based on the current frame image and extract two feature maps using the designed spatial domain and temporal domain filters. Next, we fuse the information of the spatial domain and temporal domain to produce the spatial-temporal feature maps and suppress noise using our pixel-level ABS module. Finally, we obtain the segmented binary map by applying a threshold. Our experimental results demonstrate that the proposed method outperforms existing state-of-the-art methods for infrared small-moving target detection.
labels: cs.CV | __index_level_0__: 389,913

2005.10411
Interpretable and Accurate Fine-grained Recognition via Region Grouping
We present an interpretable deep model for fine-grained visual recognition. At the core of our method lies the integration of region-based part discovery and attribution within a deep neural network. Our model is trained using image-level object labels, and provides an interpretation of its results via the segmentation of object parts and the identification of their contributions towards classification. To facilitate the learning of object parts without direct supervision, we explore a simple prior of the occurrence of object parts. We demonstrate that this prior, when combined with our region-based part discovery and attribution, leads to an interpretable model that remains highly accurate. Our model is evaluated on major fine-grained recognition datasets, including CUB-200, CelebA and iNaturalist. Our results compare favorably to state-of-the-art methods on classification tasks, and our method outperforms previous approaches on the localization of object parts.
labels: cs.AI, cs.LG, cs.CV | __index_level_0__: 178,168

1106.5264
Acquiring Correct Knowledge for Natural Language Generation
Natural language generation (NLG) systems are computer software systems that produce texts in English and other human languages, often from non-linguistic input data. NLG systems, like most AI systems, need substantial amounts of knowledge. However, our experience in two NLG projects suggests that it is difficult to acquire correct knowledge for NLG systems; indeed, every knowledge acquisition (KA) technique we tried had significant problems. In general terms, these problems were due to the complexity, novelty, and poorly understood nature of the tasks our systems attempted, and were worsened by the fact that people write so differently. This meant in particular that corpus-based KA approaches suffered because it was impossible to assemble a sizable corpus of high-quality consistent manually written texts in our domains; and structured expert-oriented KA techniques suffered because experts disagreed and because we could not get enough information about special and unusual cases to build robust systems. We believe that such problems are likely to affect many other NLG systems as well. In the long term, we hope that new KA techniques may emerge to help NLG system builders. In the shorter term, we believe that understanding how individual KA techniques can fail, and using a mixture of different KA techniques with different strengths and weaknesses, can help developers acquire NLG knowledge that is mostly correct.
labels: cs.CL | __index_level_0__: 11,009

2003.02736
Claim Check-Worthiness Detection as Positive Unlabelled Learning
As the first step of automatic fact checking, claim check-worthiness detection is a critical component of fact checking systems. There are multiple lines of research which study this problem: check-worthiness ranking from political speeches and debates, rumour detection on Twitter, and citation needed detection from Wikipedia. To date, there has been no structured comparison of these various tasks to understand their relatedness, and no investigation into whether or not a unified approach to all of them is achievable. In this work, we illuminate a central challenge in claim check-worthiness detection underlying all of these tasks: they hinge upon detecting both how factual a sentence is and how likely a sentence is to be believed without verification. As a result, annotators only mark those instances they judge to be clear-cut check-worthy. Our best performing method is a unified approach which automatically corrects for this using a variant of positive unlabelled learning that finds instances which were incorrectly labelled as not check-worthy. In applying this, we outperform the state of the art in two of the three tasks studied for claim check-worthiness detection in English.
labels: cs.LG, cs.CL | __index_level_0__: 167,022

2404.11304
Dynamic Phasor Modeling of Single-Phase Grid-Forming Converters
In modern power systems, grid-forming power converters (GFMCs) have emerged as an enabling technology. However, the modeling of single-phase GFMCs faces new challenges. In particular, the nonlinear orthogonal signal generation unit, crucial for power measurement, still lacks an accurate model. To overcome these challenges, this letter proposes a dynamic phasor model of single-phase GFMCs. Moreover, we linearize the proposed model and perform stability analysis, which confirms that the proposed model is more accurate than existing models. Experimental results validate the improved accuracy of the proposed dynamic phasor model.
labels: cs.SY | __index_level_0__: 447,454

2208.09350
PyMIC: A deep learning toolkit for annotation-efficient medical image segmentation
Background and Objective: Open-source deep learning toolkits are one of the driving forces for developing medical image segmentation models. Existing toolkits mainly focus on fully supervised segmentation and require full and accurate pixel-level annotations that are time-consuming and difficult to acquire for segmentation tasks, which makes learning from imperfect labels highly desirable for reducing the annotation cost. We aim to develop a new deep learning toolkit to support annotation-efficient learning for medical image segmentation. Methods: Our proposed toolkit, named PyMIC, is a modular deep learning library for medical image segmentation tasks. In addition to basic components that support development of high-performance models for fully supervised segmentation, it contains several advanced components tailored for learning from imperfect annotations, such as loading annotated and unannotated images, loss functions for unannotated, partially or inaccurately annotated images, and training procedures for co-learning between multiple networks. PyMIC supports development of semi-supervised, weakly supervised and noise-robust learning methods for medical image segmentation. Results: We present several illustrative medical image segmentation tasks based on PyMIC: (1) achieving competitive performance in fully supervised learning; (2) semi-supervised cardiac structure segmentation with only 10% of training images annotated; (3) weakly supervised segmentation using scribble annotations; and (4) learning from noisy labels for chest radiograph segmentation. Conclusions: The PyMIC toolkit is easy to use and facilitates efficient development of medical image segmentation models with imperfect annotations. It is modular and flexible, which enables researchers to develop high-performance models with low annotation cost. The source code is available at: https://github.com/HiLab-git/PyMIC.
labels: cs.CV | __index_level_0__: 313,673

2406.07263
Active learning for affinity prediction of antibodies
The primary objective of most lead optimization campaigns is to enhance the binding affinity of ligands. For large molecules such as antibodies, identifying mutations that enhance antibody affinity is particularly challenging due to the combinatorial explosion of potential mutations. When the structure of the antibody-antigen complex is available, relative binding free energy (RBFE) methods can offer valuable insights into how different mutations will impact the potency and selectivity of a drug candidate, thereby reducing the reliance on costly and time-consuming wet-lab experiments. However, accurately simulating the physics of large molecules is computationally intensive. We present an active learning framework that iteratively proposes promising sequences for simulators to evaluate, thereby accelerating the search for improved binders. We explore different modeling approaches to identify the most effective surrogate model for this task, and evaluate our framework both using pre-computed pools of data and in a realistic full-loop setting.
labels: cs.LG | __index_level_0__: 462,970

2105.06253
Exploring CTC Based End-to-End Techniques for Myanmar Speech Recognition
In this work, we explore a Connectionist Temporal Classification (CTC) based end-to-end Automatic Speech Recognition (ASR) model for the Myanmar language. A series of experiments is presented on the topology of the model in which the convolutional layers are added and dropped, different depths of bidirectional long short-term memory (BLSTM) layers are used and different label encoding methods are investigated. The experiments are carried out in low-resource scenarios using our recorded Myanmar speech corpus of nearly 26 hours. The best model achieves character error rate (CER) of 4.72% and syllable error rate (SER) of 12.38% on the test set.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
235,068
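As a sketch of the core training objective in such CTC-based end-to-end ASR (toy shapes, not the paper's actual network), PyTorch's built-in CTC loss can be exercised like this:

```python
import torch
import torch.nn as nn

# Toy dimensions: T frames, N utterances, C labels (blank = 0), S target length.
T, N, C, S = 50, 4, 30, 10
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(dim=2)
targets = torch.randint(1, C, (N, S))           # e.g. character or syllable ids
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()   # gradients flow back to the acoustic model outputs
```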
1103.5569
An upper bound on community size in scalable community detection
It is well-known that community detection methods based on modularity optimization often fail to discover small communities. Several objective functions used for community detection therefore involve a resolution parameter that allows the detection of communities at different scales. We provide an explicit upper bound on the size of communities resulting from the optimization of several of these functions. We also show with a simple example that the use of the resolution parameter may artificially force the complete disaggregation of large and densely connected communities.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
9,795
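For concreteness, one common resolution-parameterized objective covered by analyses of this kind is the generalized modularity below (a standard form; the paper's exact family of objective functions may differ):

```latex
% Generalized modularity with resolution parameter \gamma:
Q(\gamma) = \frac{1}{2m}\sum_{ij}\left[A_{ij} - \gamma\,\frac{k_i k_j}{2m}\right]\delta(c_i, c_j)
% Larger \gamma favors smaller communities; \gamma = 1 recovers standard modularity.
```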
2204.11640
Hybrid ISTA: Unfolding ISTA With Convergence Guarantees Using Free-Form Deep Neural Networks
It is promising to solve linear inverse problems by unfolding iterative algorithms (e.g., iterative shrinkage thresholding algorithm (ISTA)) as deep neural networks (DNNs) with learnable parameters. However, existing ISTA-based unfolded algorithms restrict the network architectures for iterative updates with the partial weight coupling structure to guarantee convergence. In this paper, we propose hybrid ISTA to unfold ISTA with both pre-computed and learned parameters by incorporating free-form DNNs (i.e., DNNs with arbitrary feasible and reasonable network architectures), while ensuring theoretical convergence. We first develop HCISTA to improve the efficiency and flexibility of classical ISTA (with pre-computed parameters) without compromising the convergence rate in theory. Furthermore, the DNN-based hybrid algorithm is generalized to popular variants of learned ISTA, dubbed HLISTA, to enable a free architecture of learned parameters with a guarantee of linear convergence. To the best of our knowledge, this paper is the first to provide a convergence-provable framework that enables free-form DNNs in ISTA-based unfolded algorithms. This framework is general to endow arbitrary DNNs for solving linear inverse problems with convergence guarantees. Extensive experiments demonstrate that hybrid ISTA can reduce the reconstruction error with an improved convergence rate in the tasks of sparse recovery and compressive sensing.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
293,219
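The classical ISTA iteration that such unfolded networks mimic can be sketched in a few lines of NumPy (a textbook version with a fixed step size, not the paper's hybrid scheme):

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5*||Ax - y||^2 + lam*||x||_1 by iterative
    shrinkage-thresholding with step size 1/L."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - y) / L      # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft-threshold
    return x
```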
2301.05833
Multi-Agent Coordination Fluid Flow Modeling and Experimental Evaluation
Reliability is a critical aspect of multi-agent system coordination as it ensures that the system functions correctly and consistently. If one agent in the system fails or behaves unexpectedly, it can negatively impact the performance and effectiveness of the entire system. Therefore, it is important to design and implement multi-agent systems with a high level of reliability to ensure that they can operate safely and move smoothly in the presence of unforeseen agent failure or lack of communication with some agent teams moving in a shared motion space. This paper presents a novel fluid flow navigation model that, in an ideal fluid flow, divides agents into cooperative (non-singular) and noncooperative (singular) agents, with cooperative agents sliding along streamlines safely enclosing noncooperative agents in a shared motion space. A series of flight experiments utilizing Crazyflie quadcopters experimentally validates the suggested model.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
340,459
2008.09038
Battery State of Charge Modeling for Solar PV Array using Polynomial Regression
In this manuscript, we have investigated the response of the State of Charge (SoC) and the open-circuit voltage across the dynamic battery model under the variable voltage and current during the charging cycle of the battery. These variable input voltage and current have been obtained using the variable irradiance and surface temperature of a Solar PV array which is connected as an input of the dynamic battery model to store the energy within it. In order to match the simulation result with reality, the variable irradiance and surface temperature of the Solar PV array with respect to time have been simulated. After forming and storing the energy within the dynamic battery model, the SoC of the battery has been estimated using the Kalman filter approach. After the successful estimation of SoC, the Open Circuit Voltage (OCV) and State of Charge (SoC) have been plotted using the polynomial regression technique. The regression plots between the OCV and SoC have been drawn for polynomial degrees of 2, 3, 4, and 5. Results reveal that R$^2$ keeps increasing as we increase the degree of the regression. Simultaneously the value of RMSE keeps decreasing as we increase the degree of the polynomial regression.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
192,593
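The regression step described above is a standard polynomial fit; a minimal sketch of fitting OCV against SoC for degrees 2 through 5 and reporting R^2 and RMSE (variable names are illustrative):

```python
import numpy as np

def fit_ocv_soc(soc, ocv, degree):
    """Polynomial fit of open-circuit voltage vs. state of charge,
    returning coefficients, R^2 and RMSE."""
    coeffs = np.polyfit(soc, ocv, degree)
    resid = ocv - np.polyval(coeffs, soc)
    rmse = np.sqrt(np.mean(resid ** 2))
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((ocv - ocv.mean()) ** 2)
    return coeffs, r2, rmse

# for d in (2, 3, 4, 5):                  # degrees compared in the paper
#     print(d, fit_ocv_soc(soc, ocv, d)[1:])
```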
1609.01915
Polyp Detection and Segmentation from Video Capsule Endoscopy: A Review
Video capsule endoscopy (VCE) is used widely nowadays for visualizing the gastrointestinal (GI) tract. Capsule endoscopy exams are prescribed usually as an additional monitoring mechanism and can help in identifying polyps, bleeding, etc. To analyze the large-scale video data produced by VCE exams, automatic image processing, computer vision, and learning algorithms are required. Recently, automatic polyp detection algorithms have been proposed with various degrees of success. Though polyp detection in images from colonoscopy and other traditional endoscopy procedures is becoming a mature field, detecting polyps automatically in VCE is a hard problem due to its unique imaging characteristics. We review different polyp detection approaches for VCE imagery and provide a systematic analysis of the challenges faced by standard image processing and computer vision methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
60,653
2411.01527
Performance Evaluation of Deep Learning Models for Water Quality Index Prediction: A Comparative Study of LSTM, TCN, ANN, and MLP
This work addresses environmental monitoring and predictive modeling of the Water Quality Index (WQI) through the assessment of water quality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
505,111
2501.07563
Training-Free Motion-Guided Video Generation with Enhanced Temporal Consistency Using Motion Consistency Loss
In this paper, we address the challenge of generating temporally consistent videos with motion guidance. While many existing methods depend on additional control modules or inference-time fine-tuning, recent studies suggest that effective motion guidance is achievable without altering the model architecture or requiring extra training. Such approaches offer promising compatibility with various video generation foundation models. However, existing training-free methods often struggle to maintain consistent temporal coherence across frames or to follow guided motion accurately. In this work, we propose a simple yet effective solution that combines an initial-noise-based approach with a novel motion consistency loss, the latter being our key innovation. Specifically, we capture the inter-frame feature correlation patterns of intermediate features from a video diffusion model to represent the motion pattern of the reference video. We then design a motion consistency loss to maintain similar feature correlation patterns in the generated video, using the gradient of this loss in the latent space to guide the generation process for precise motion control. This approach improves temporal consistency across various motion control tasks while preserving the benefits of a training-free setup. Extensive experiments show that our method sets a new standard for efficient, temporally coherent video generation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
524,436
2209.15308
Effective Early Stopping of Point Cloud Neural Networks
Early stopping techniques can be utilized to decrease the time cost of training; however, the ultimate goal of current early stopping techniques is closely tied to improving accuracy or the ability of the neural network to generalize better on unseen data without being large or complex in structure, and not directly to its efficiency. Time efficiency is a critical factor in neural networks, especially when dealing with the segmentation of 3D point cloud data, not only because a neural network itself is computationally expensive, but also because point clouds are large and noisy data, making learning processes even more costly. In this paper, we propose a new early stopping technique based on fundamental mathematics aiming to improve the trade-off between the learning efficiency and accuracy of neural networks dealing with 3D point clouds. Our results show that by employing our early stopping technique in four distinct and highly utilized neural networks for segmenting 3D point clouds, the training time efficiency of the models is greatly improved, with efficiency gains reaching up to 94%, while the models achieve in just a few epochs segmentation accuracy values approximately similar to those obtained by training the networks for 200 epochs. Also, our proposal outperforms four conventional early stopping approaches in segmentation accuracy, implying a promising and innovative early stopping technique for point cloud segmentation.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
320,562
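For contrast with the proposal above, the conventional patience-based early stopping that such papers typically benchmark against can be sketched as follows (a generic baseline, not the paper's mathematical criterion):

```python
class EarlyStopping:
    """Stop training when validation loss has not improved by at least
    `min_delta` for `patience` consecutive epochs."""
    def __init__(self, patience=10, min_delta=1e-4):
        self.patience, self.min_delta = patience, min_delta
        self.best, self.bad_epochs = float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best - self.min_delta:
            self.best, self.bad_epochs = val_loss, 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience   # True -> halt training
```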
2408.16725
Mini-Omni: Language Models Can Hear, Talk While Thinking in Streaming
Recent advances in language models have achieved significant progress. GPT-4o, as a new milestone, has enabled real-time conversations with humans, demonstrating near-human natural fluency. Such human-computer interaction necessitates models with the capability to perform reasoning directly with the audio modality and generate output in a streaming fashion. However, this remains beyond the reach of current academic models, as they typically depend on extra TTS systems for speech synthesis, resulting in undesirable latency. This paper introduces the Mini-Omni, an audio-based end-to-end conversational model, capable of real-time speech interaction. To achieve this capability, we propose a text-instructed speech generation method, along with batch-parallel strategies during inference to further boost the performance. Our method also helps to retain the original model's language capabilities with minimal degradation, enabling other works to establish real-time interaction capabilities. We call this training method "Any Model Can Talk". We also introduce the VoiceAssistant-400K dataset to fine-tune models optimized for speech output. To the best of our knowledge, Mini-Omni is the first fully end-to-end, open-source model for real-time speech interaction, offering valuable potential for future research.
true
false
true
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
484,427
2404.08955
Consistency analysis of refined instrumental variable methods for continuous-time system identification in closed-loop
Refined instrumental variable methods have been broadly used for identification of continuous-time systems in both open and closed-loop settings. However, the theoretical properties of these methods are still yet to be fully understood when operating in closed-loop. In this paper, we address the consistency of the simplified refined instrumental variable method for continuous-time systems (SRIVC) and its closed-loop variant CLSRIVC when they are applied on data that is generated from a feedback loop. In particular, we consider feedback loops consisting of continuous-time controllers, as well as the discrete-time control case. This paper proves that the SRIVC and CLSRIVC estimators are not generically consistent when there is a continuous-time controller in the loop, and that generic consistency can be achieved when the controller is implemented in discrete-time. Numerical simulations are presented to support the theoretical results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
446,476
2202.07047
Vector Coded Caching Multiplicatively Boosts the Throughput of Realistic Downlink Systems
The recent introduction of vector coded caching has revealed that multi-rank transmissions in the presence of receiver-side cache content can dramatically ameliorate the file-size bottleneck of coded caching and substantially boost performance in error-free wire-like channels. We here employ large-matrix analysis to explore the effect of vector coded caching in realistic wireless multi-antenna downlink systems. Our analysis answers a simple question: Under a fixed set of antenna and SNR resources, and a given downlink MISO system which can already enjoy an optimized exploitation of multiplexing and beamforming gains, what is the multiplicative boost in the throughput when we are now allowed to occasionally add content inside reasonably-sized receiver-side caches? The derived closed-form expressions capture various linear precoders, and a variety of practical considerations such as power dissemination across signals, realistic SNR values, as well as feedback costs. The schemes are very simple (we simply collapse precoding vectors into a single vector), and the recorded gains are notable. For example, for 32 transmit antennas, a received SNR of 20 dB, a coherence bandwidth of 300 kHz, a coherence period of 40 ms, and under realistic file-size and cache-size constraints, vector coded caching is here shown to offer a multiplicative throughput boost of about 310% with ZF/RZF precoding and a 430% boost in the performance of already optimized MF-based systems. Interestingly, vector coded caching also accelerates channel hardening to the benefit of feedback acquisition, often surpassing 540% gains over traditional hardening-constrained downlink systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
280,407
2012.09719
Image-Based Jet Analysis
Image-based jet analysis is built upon the jet image representation of jets that enables a direct connection between high energy physics and the fields of computer vision and deep learning. Through this connection, a wide array of new jet analysis techniques have emerged. In this text, we survey jet image based classification models, built primarily on the use of convolutional neural networks, examine methods to understand what these models have learned and their sensitivity to uncertainties, and review the recent successes in moving these models from phenomenological studies to real world application on experiments at the LHC. Beyond jet classification, several other applications of jet image based techniques, including energy estimation, pileup noise reduction, data generation, and anomaly detection, are discussed.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
212,153
1003.0628
Linguistic Geometries for Unsupervised Dimensionality Reduction
Text documents are complex high dimensional objects. To effectively visualize such data it is important to reduce its dimensionality and visualize the low dimensional embedding as a 2-D or 3-D scatter plot. In this paper we explore dimensionality reduction methods that draw upon domain knowledge in order to achieve a better low dimensional embedding and visualization of documents. We consider the use of geometries specified manually by an expert, geometries derived automatically from corpus statistics, and geometries computed from linguistic resources.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
5,829
1906.05685
A Focus on Neural Machine Translation for African Languages
African languages are numerous, complex and low-resourced. The datasets required for machine translation are difficult to discover, and existing research is hard to reproduce. Minimal attention has been given to machine translation for African languages so there is scant research regarding the problems that arise when using machine translation techniques. To begin addressing these problems, we trained models to translate English to five of the official South African languages (Afrikaans, isiZulu, Northern Sotho, Setswana, Xitsonga), making use of modern neural machine translation techniques. The results obtained show the promise of using neural machine translation techniques for African languages. By providing reproducible publicly-available data, code and results, this research aims to provide a starting point for other researchers in African machine translation to compare to and build upon.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
135,090
2310.20656
Non-Compositionality in Sentiment: New Data and Analyses
When natural language phrases are combined, their meaning is often more than the sum of their parts. In the context of NLP tasks such as sentiment analysis, where the meaning of a phrase is its sentiment, that still applies. Many NLP studies on sentiment analysis, however, focus on the fact that sentiment computations are largely compositional. We, instead, set out to obtain non-compositionality ratings for phrases with respect to their sentiment. Our contributions are as follows: a) a methodology for obtaining those non-compositionality ratings, b) a resource of ratings for 259 phrases -- NonCompSST -- along with an analysis of that resource, and c) an evaluation of computational models for sentiment analysis using this new resource.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
404,470
2105.06463
Contrastive Learning of Image Representations with Cross-Video Cycle-Consistency
Recent works have advanced the performance of self-supervised representation learning by a large margin. The core among these methods is intra-image invariance learning. Two different transformations of one image instance are considered as a positive sample pair, where various tasks are designed to learn invariant representations by comparing the pair. Analogously, for video data, representations of frames from the same video are trained to be closer than frames from other videos, i.e. intra-video invariance. However, the cross-video relation has barely been explored for visual representation learning. Unlike intra-video invariance, ground-truth labels of the cross-video relation are usually unavailable without human labor. In this paper, we propose a novel contrastive learning method which explores the cross-video relation by using cycle-consistency for general image representation learning. This allows us to collect positive sample pairs across different video instances, which we hypothesize will lead to higher-level semantics. We validate our method by transferring our image representation to multiple downstream tasks including visual object tracking, image classification, and action recognition. We show significant improvement over state-of-the-art contrastive learning methods. Project page is available at https://happywu.github.io/cycle_contrast_video.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
235,132
2205.04042
Incremental-DETR: Incremental Few-Shot Object Detection via Self-Supervised Learning
Incremental few-shot object detection aims at detecting novel classes without forgetting knowledge of the base classes with only a few labeled training data from the novel classes. Most related prior works are on incremental object detection and rely on the availability of abundant training samples per novel class, which substantially limits scalability to real-world settings where novel data can be scarce. In this paper, we propose the Incremental-DETR that does incremental few-shot object detection via fine-tuning and self-supervised learning on the DETR object detector. To alleviate severe over-fitting with few novel class data, we first fine-tune the class-specific components of DETR with self-supervision from additional object proposals generated using Selective Search as pseudo labels. We further introduce an incremental few-shot fine-tuning strategy with knowledge distillation on the class-specific components of DETR to encourage the network to detect novel classes without forgetting the base classes. Extensive experiments conducted on standard incremental object detection and incremental few-shot object detection settings show that our approach significantly outperforms state-of-the-art methods by a large margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
295,515
2003.09870
NSM Converges to a k-NN Regressor Under Loose Lipschitz Estimates
Although it is known that having accurate Lipschitz estimates is essential for certain models to deliver good predictive performance, refining this constant in practice can be a difficult task especially when the input dimension is high. In this work, we shed light on the consequences of employing loose Lipschitz bounds in the Nonlinear Set Membership (NSM) framework, showing that the model converges to a nearest neighbor regressor (k-NN with k=1). This convergence process is moreover not uniform, and is monotonic in the univariate case. An intuitive geometrical interpretation of the result is then given and its practical implications are discussed.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
169,169
2008.00325
Bringing UMAP Closer to the Speed of Light with GPU Acceleration
The Uniform Manifold Approximation and Projection (UMAP) algorithm has become widely popular for its ease of use, quality of results, and support for exploratory, unsupervised, supervised, and semi-supervised learning. While many algorithms can be ported to a GPU in a simple and direct fashion, such efforts have resulted in inefficient and inaccurate versions of UMAP. We show a number of techniques that can be used to make a faster and more faithful GPU version of UMAP, and obtain speedups of up to 100x in practice. Many of these design choices/lessons are general purpose and may inform the conversion of other graph and manifold learning algorithms to use GPUs. Our implementation has been made publicly available as part of the open source RAPIDS cuML library (https://github.com/rapidsai/cuml).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
189,977
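Usage of the GPU implementation mentioned above follows the familiar fit/transform pattern; a minimal sketch, assuming a CUDA machine with RAPIDS cuML installed (exact keyword support may vary across versions):

```python
import numpy as np
from cuml.manifold import UMAP   # GPU-accelerated UMAP from RAPIDS cuML

X = np.random.rand(10_000, 64).astype("float32")   # placeholder data
reducer = UMAP(n_neighbors=15, n_components=2, min_dist=0.1)
embedding = reducer.fit_transform(X)               # (10_000, 2) embedding
```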
1908.07247
An efficient bounded-variable nonlinear least-squares algorithm for embedded MPC
This paper presents a new approach to solve linear and nonlinear model predictive control (MPC) problems that requires a small memory and computational footprint and is particularly suitable when the model and/or controller parameters change at runtime. Typically, MPC requires two phases: 1) construct an optimization problem based on the given MPC parameters (prediction model, tuning weights, prediction horizon, and constraints), which results in a quadratic or nonlinear programming problem, and then 2) call an optimization algorithm to solve the resulting problem. In the proposed approach the problem construction step is systematically eliminated, as the problem matrices in the optimization algorithm are expressed in terms of abstract functions of the MPC parameters. We present a unifying algorithmic framework based on active-set methods with bounded variables that can cope with linear, nonlinear, and adaptive MPC variants based on a broad class of prediction models and a sum-of-squares cost function. The theoretical and numerical results demonstrate the potential, applicability, and efficiency of the proposed framework for practical real-time embedded MPC.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
142,245
1110.5183
Diffusion of Information in Robot Swarms
This work is devoted to communication approaches that spread information in robot swarms. These mechanisms are useful for large-scale systems and also for cases in which limited communication equipment does not allow routing of information packets. We focus on two approaches, virtual fields and epidemic algorithms, discuss several aspects of hardware implementation, and demonstrate experiments performed with the "Jasmine" microrobots.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
12,755
2412.14492
FaultExplainer: Leveraging Large Language Models for Interpretable Fault Detection and Diagnosis
Machine learning algorithms are increasingly being applied to fault detection and diagnosis (FDD) in chemical processes. However, existing data-driven FDD platforms often lack interpretability for process operators and struggle to identify root causes of previously unseen faults. This paper presents FaultExplainer, an interactive tool designed to improve fault detection, diagnosis, and explanation in the Tennessee Eastman Process (TEP). FaultExplainer integrates real-time sensor data visualization, Principal Component Analysis (PCA)-based fault detection, and identification of top contributing variables within an interactive user interface powered by large language models (LLMs). We evaluate the LLMs' reasoning capabilities in two scenarios: one where historical root causes are provided, and one where they are not, to mimic the challenge of previously unseen faults. Experimental results using GPT-4o and o1-preview models demonstrate the system's strengths in generating plausible and actionable explanations, while also highlighting its limitations, including reliance on PCA-selected features and occasional hallucinations.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
518,728
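The PCA-based detection component described above is typically a Hotelling T^2 monitor; a self-contained NumPy sketch of that standard scheme follows (the component count and threshold quantile here are assumptions, not FaultExplainer's exact settings):

```python
import numpy as np

def pca_t2_alarm(train, test, n_comp=5, q=0.99):
    """Fit PCA on fault-free data and flag test samples whose Hotelling
    T^2 score exceeds the empirical training quantile."""
    mu = train.mean(axis=0)
    _, S, Vt = np.linalg.svd(train - mu, full_matrices=False)
    P = Vt[:n_comp].T                              # principal loadings
    var = S[:n_comp] ** 2 / (len(train) - 1)       # component variances
    t2 = lambda X: (((X - mu) @ P) ** 2 / var).sum(axis=1)
    return t2(test) > np.quantile(t2(train), q)    # True -> raise an alarm
```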
2310.04516
Vulnerability Analysis of Nonlinear Control Systems to Stealthy False Data Injection Attacks
In this work, we focus on analyzing vulnerability of nonlinear dynamical control systems to stealthy false data injection attacks on sensors. We start by defining the stealthiness notion in the most general form where an attack is considered stealthy if it would be undetected by any intrusion detector, i.e., any intrusion detector could not do better than a random guess. Depending on the level of attacker's knowledge about the plant model, controller, and the system states, two different attack models are considered. For each attack model, we derive the conditions for which the system will be vulnerable to stealthy impactful attacks, in addition to finding a methodology for designing such sequence of false data injection attacks. When the attacker has complete knowledge about the system, we show that if the closed loop system is incrementally exponentially stable while the open loop plant is incrementally unstable, then the system is vulnerable to stealthy yet impactful attacks on sensors. However, in the second attack model, with less knowledge about the system, additional conditions need to be satisfied and the level of stealthiness depends on the accuracy of attacker's knowledge about the system. We also consider the impact of stealthy attacks on state estimation, and show that if the closed loop control system including the estimator is incrementally stable, then the state estimation in the presence of attack converges to the attack free estimates. Finally, we illustrate our results on numerical case studies.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
397,693
1606.05694
DeepStance at SemEval-2016 Task 6: Detecting Stance in Tweets Using Character and Word-Level CNNs
This paper describes our approach for the Detecting Stance in Tweets task (SemEval-2016 Task 6). We utilized recent advances in short text categorization using deep learning to create word-level and character-level models. The choice between word-level and character-level models in each particular case was informed through validation performance. Our final system is a combination of classifiers using word-level or character-level models. We also employed novel data augmentation techniques to expand and diversify our training dataset, thus making our system more robust. Our system achieved a macro-average precision, recall and F1-scores of 0.67, 0.61 and 0.635 respectively.
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
57,450
1309.5993
Combining smart card data and household travel survey to analyze jobs-housing relationships in Beijing
Location Based Services (LBS) provide a new perspective for spatiotemporally analyzing dynamic urban systems. Research has investigated urban dynamics using GSM (Global System for Mobile Communications), GPS (Global Positioning System), SNS (Social Networking Services) and Wi-Fi techniques. However, less attention has been paid to the analysis of urban structure (especially commuting pattern) using smart card data (SCD), which are widely available in most cities. Additionally, ubiquitous LBS data, although providing rich spatial and temporal information, lacks rich information on the social dimension, which limits its in-depth application. To bridge this gap, this paper combines bus SCD for a one-week period with a one-day household travel survey, as well as a parcel-level land use map to identify job-housing locations and commuting trip routes in Beijing. Two data forms (TRIP and PTD) are proposed, with PTD used for jobs-housing identification and TRIP used for commuting trip route identification. The results of the identification are aggregated in the bus stop and traffic analysis zone (TAZ) scales, respectively. Particularly, commuting trips from three typical residential communities to six main business zones are mapped and compared to analyze commuting patterns in Beijing. The identified commuting trips are validated on three levels by comparison with those from the survey in terms of commuting time and distance, and the positive validation results prove the applicability of our approach. Our experiment, as a first step toward enriching LBS data using conventional survey and urban GIS data, can obtain solid identification results based on rules extracted from existing surveys or censuses.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
27,213
2309.07390
Unleashing the Power of Depth and Pose Estimation Neural Networks by Designing Compatible Endoscopic Images
Depth and pose estimation frameworks trained on unannotated datasets have proven to be an effective pathway to success in endoscopic navigation. Most current techniques are dedicated to developing more advanced neural networks to improve accuracy. However, existing methods ignore the special properties of endoscopic images, resulting in an inability to fully unleash the power of neural networks. In this study, we conduct a detailed analysis of the properties of endoscopic images and improve the compatibility of images and neural networks, to unleash the power of current neural networks. First, we introduce the Mask Image Modelling (MIM) module, which inputs partial image information instead of complete image information, allowing the network to recover global information from partial pixel information. This enhances the network's ability to perceive global information and alleviates the phenomenon of local overfitting in convolutional neural networks due to local artifacts. Second, we propose a lightweight neural network to enhance the endoscopic images, to explicitly improve the compatibility between images and neural networks. Extensive experiments are conducted on three public datasets and one in-house dataset, and the proposed modules improve baselines by a large margin. Furthermore, the enhanced images we propose, which have higher network compatibility, can serve as an effective data augmentation method: they enable the extraction of more stable feature points in traditional feature point matching tasks and achieve outstanding performance.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
391,758
2305.09678
Anomaly Detection Dataset for Industrial Control Systems
Over the past few decades, Industrial Control Systems (ICSs) have been targeted by cyberattacks and are becoming increasingly vulnerable as more ICSs are connected to the internet. Using Machine Learning (ML) for Intrusion Detection Systems (IDS) is a promising approach for ICS cyber protection, but the lack of suitable datasets for evaluating ML algorithms is a challenge. Although there are a few commonly used datasets, they may not reflect realistic ICS network data, lack necessary features for effective anomaly detection, or be outdated. This paper presents the 'ICS-Flow' dataset, which offers network data and process state variable logs for supervised and unsupervised ML-based IDS assessment. The network data includes normal and anomalous network packets and flows captured from simulated ICS components and emulated networks. The anomalies were injected into the system through various attack techniques commonly used by hackers to modify network traffic and compromise ICSs. We also provide an open-source tool, `ICSFlowGenerator', for generating network flow parameters from raw network packets. The final dataset comprises over 25,000,000 raw network packets, network flow records, and process variable logs. The paper describes the methodology used to collect and label the dataset and provides a detailed data analysis. Finally, we implement several ML models, including decision tree, random forest, and artificial neural network models, to detect anomalies and attacks, demonstrating that our dataset can be used effectively for training intrusion detection ML models.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
364,735
2304.13409
Efficient Explainable Face Verification based on Similarity Score Argument Backpropagation
Explainable Face Recognition is gaining growing attention as the use of the technology is gaining ground in security-critical applications. Understanding why two face images are matched or not matched by a given face recognition system is important for operators, users, and developers to increase trust and accountability, develop better systems, and highlight unfair behavior. In this work, we propose xSSAB, an approach to back-propagate similarity score-based arguments that support or oppose the face matching decision, to visualize spatial maps that indicate similar and dissimilar areas as interpreted by the underlying FR model. Furthermore, we present Patch-LFW, a new explainable face verification benchmark that enables, along with a novel evaluation protocol, the first quantitative evaluation of the validity of similarity and dissimilarity maps in explainable face recognition approaches. We compare our efficient approach to state-of-the-art approaches, demonstrating a superior trade-off between efficiency and performance. The code as well as the proposed Patch-LFW is publicly available at: https://github.com/marcohuber/xSSAB.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
360,565
1506.02923
Compact Shape Trees: A Contribution to the Forest of Shape Correspondences and Matching Methods
We propose a novel technique, termed compact shape trees, for computing correspondences of single-boundary 2-D shapes in O(n^2) time. Together with zero or more features defined at each of n sample points on the shape's boundary, the compact shape tree of a shape comprises the O(n) collection of vectors emanating from any of the sample points on the shape's boundary to the rest of the sample points on the boundary. As it turns out, compact shape trees have a number of elegant properties both in the spatial and frequency domains. In particular, via a simple vector-algebraic argument, we show that the O(n) collection of vectors in a compact shape tree possesses at least the same discriminatory power as the O(n^2) collection of lines emanating from each sample point to every other sample point on a shape's boundary. In addition, we describe neat approaches for achieving scale and rotation invariance with compact shape trees in the spatial domain; by viewing compact shape trees as aperiodic discrete signals, we also prove scale and rotation invariance properties for them in the Fourier domain. Towards these, along the way, using concepts from differential geometry and calculus, we propose a novel theory for sampling 2-D shape boundaries in a scale and rotation invariant manner. Finally, we propose a number of shape recognition experiments to test the efficacy of our concept.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
43,988
0909.3027
Language Models for Handwritten Short Message Services
Handwriting is an alternative method for entering the texts that compose Short Message Service (SMS) messages. However, the texts produced feature a whole new language, including for instance abbreviations and other consonantal writing which sprang up for time saving and fashion. We have collected and processed a significant number of such handwritten SMS messages, and used various strategies to tackle this challenging area of handwriting recognition. We propose to study more specifically three different phenomena: consonant skeleton, rebus, and phonetic writing. For each of them, we compare the rough results produced by a standard recognition system with those obtained when using a specific language model.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
4,507
2408.06776
Robust Deep Reinforcement Learning for Inverter-based Volt-Var Control in Partially Observable Distribution Networks
Inverter-based volt-var control is studied in this paper. One key issue in deep reinforcement learning (DRL)-based approaches is the limited measurement deployment in active distribution networks, which leads to problems of a partially observable state and unknown reward. To address those problems, this paper proposes a robust DRL approach with a conservative critic and a surrogate reward. The conservative critic utilizes quantile regression to estimate a conservative state-action value function based on the partially observable state, which helps to train a robust policy; the surrogate rewards for power loss and voltage violation are designed so that they can be calculated from the limited measurements. The proposed approach optimizes the power loss of the whole network and the voltage profile of buses with measurable voltages while indirectly improving the voltage profile of other buses. Extensive simulations verify the effectiveness of the robust DRL approach in different limited measurement conditions, even when only the active power injection of the root bus and less than 10% of bus voltages are measurable.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
480,338
2007.06902
Enabling Adaptive and Enhanced Acoustic Sensing Using Nonlinear Dynamics
Transmission of real-time data is strongly increasing due to remote processing of sensor data, among other things. A route to meet this demand is adaptive sensing, in which sensors acquire only relevant information using pre-processing at sensor level. We present here adaptive acoustic sensors based on mechanical oscillators with integrated sensing and actuation. Their dynamics are shifted into a nonlinear regime using feedback or coupling. This enhances dynamic range, frequency resolution and signal-to-noise ratio. Combining tunable sensing properties with sound analysis could enable acquiring of only relevant information rather than extracting this from irrelevant data by post-processing.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
187,167
2005.05079
A Survey on Sampling and Profiling over Big Data (Technical Report)
Due to the development of internet technology and computer science, data is exploding at an exponential rate. Big data brings us new opportunities and challenges. On the one hand, we can analyze and mine big data to discover hidden information and get more potential value. On the other hand, the 5V characteristics of big data, especially Volume, which means a large amount of data, bring challenges to storage and processing. For some traditional data mining algorithms, machine learning algorithms and data profiling tasks, it is very difficult to handle such a large amount of data. Processing it places high demands on hardware resources and is time-consuming. Sampling methods can effectively reduce the amount of data and help speed up data processing. Hence, sampling technology has been widely studied and used in the big data context, e.g., methods for determining sample size, and combining sampling with big data processing frameworks. Data profiling is the activity that finds metadata of a data set and has many use cases, e.g., performing data profiling tasks on relational data, graph data, and time series data for anomaly detection and data repair. However, data profiling is computationally expensive, especially for large data sets. Therefore, this paper focuses on researching sampling and profiling in the big data context and investigates the application of sampling in different categories of data profiling tasks. The experimental results of these studies show that the results obtained from sampled data are close to, or even exceed, those obtained from the full amount of data. Therefore, sampling technology plays an important role in the era of big data, and we also have reason to believe that sampling technology will become an indispensable step in big data processing in the future.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
176,637
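Among the sampling techniques such a survey covers, reservoir sampling is the canonical single-pass method for streams of unknown length; a compact reference implementation:

```python
import random

def reservoir_sample(stream, k):
    """Return a uniform random sample of k items from an iterable whose
    length is unknown in advance (Algorithm R)."""
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)          # fill the reservoir first
        else:
            j = random.randint(0, i)     # replace with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample
```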
2202.03674
Trained Model in Supervised Deep Learning is a Conditional Risk Minimizer
We proved that a trained model in supervised deep learning minimizes the conditional risk for each input (Theorem 2.1). This property provided insights into the behavior of trained models and established a connection between supervised and unsupervised learning in some cases. In addition, when the labels are intractable but can be written as a conditional risk minimizer, we proved an equivalent form of the original supervised learning problem with accessible labels (Theorem 2.2). We demonstrated that many existing works, such as Noise2Score, Noise2Noise and score function estimation, can be explained by our theorem. Moreover, we derived a property of the classification problem with noisy labels using Theorem 2.1 and validated it using the MNIST dataset. Furthermore, we proposed a method to estimate uncertainty in image super-resolution based on Theorem 2.2 and validated it using the ImageNet dataset. Our code is available on GitHub.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
279,301
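The central property claimed in the abstract above can be written compactly (our paraphrase of Theorem 2.1 in symbols, not the paper's exact statement):

```latex
% For loss \ell, the trained model f minimizes the conditional risk pointwise:
f(x) \in \arg\min_{a}\; \mathbb{E}\left[\,\ell(a, Y) \mid X = x\,\right]
% e.g. for squared loss \ell(a,y)=(a-y)^2 this yields f(x) = \mathbb{E}[Y \mid X = x].
```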
2306.15089
Energy Modelling and Forecasting for an Underground Agricultural Farm using a Higher Order Dynamic Mode Decomposition Approach
This paper presents an approach based on higher order dynamic mode decomposition (HODMD) to model, analyse, and forecast energy behaviour in an urban agriculture farm situated in a retrofitted London underground tunnel, where observed measurements are influenced by noisy and occasionally transient conditions. HODMD is a data-driven reduced order modelling method typically used to analyse and predict highly noisy and complex flows in fluid dynamics or any type of complex data from dynamical systems. HODMD is a recent extension of the classical dynamic mode decomposition method (DMD), customised to handle scenarios where the spectral complexity underlying the measurement data is higher than its spatial complexity, as is the case for the environmental behaviour of the farm. HODMD decomposes temporal data as a linear expansion of physically-meaningful DMD-modes in a semi-automatic manner, using a time-delay embedding approach. We apply HODMD to three seasonal scenarios using real data measured by sensors located at the cross-sectional centre of the underground farm. Through the study we revealed three physically-interpretable mode pairs that govern the environmental behaviour at the centre of the farm, consistently across environmental scenarios. Subsequently, we demonstrate how we can reconstruct the fundamental structure of the observed time-series using only these modes, and forecast for three days ahead, as one compact and interpretable reduced-order model. We find HODMD to serve as a robust, semi-automatic modelling alternative for predictive modelling in Digital Twins.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
375,903
cs/0603097
On Pinsker's Type Inequalities and Csiszar's f-divergences. Part I: Second and Fourth-Order Inequalities
We study conditions on $f$ under which an $f$-divergence $D_f$ will satisfy $D_f \geq c_f V^2$ or $D_f \geq c_{2,f} V^2 + c_{4,f} V^4$, where $V$ denotes variational distance and the coefficients $c_f$, $c_{2,f}$ and $c_{4,f}$ are {\em best possible}. As a consequence, we obtain lower bounds in terms of $V$ for many well known distance and divergence measures. For instance, let $D_{(\alpha)} (P,Q) = [\alpha (\alpha-1)]^{-1} [\int q^{\alpha} p^{1-\alpha} d \mu -1]$ and ${\cal I}_\alpha (P,Q) = (\alpha -1)^{-1} \log [\int p^\alpha q^{1-\alpha} d \mu]$ be respectively the {\em relative information of type} ($1-\alpha$) and {\em R\'{e}nyi's information gain of order} $\alpha$. We show that $D_{(\alpha)} \geq {1/2} V^2 + {1/72} (\alpha+1)(2-\alpha) V^4$ whenever $-1 \leq \alpha \leq 2$, $\alpha \not= 0,1$ and that ${\cal I}_{\alpha} = \frac{\alpha}{2} V^2 + {1/36} \alpha (1 + 5 \alpha - 5 \alpha^2) V^4$ for $0 < \alpha < 1$. Pinsker's inequality $D \geq {1/2} V^2$ and its extension $D \geq {1/2} V^2 + {1/36} V^4$ are special cases of each one of these.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
539,349
1703.01664
Diversified Texture Synthesis with Feed-forward Networks
Recent progress on deep discriminative and generative modeling has shown promising results on texture synthesis. However, existing feed-forward based methods trade off generality for efficiency and suffer from many issues, such as shortage of generality (i.e., building one network per texture), lack of diversity (i.e., always producing visually identical output) and suboptimality (i.e., generating less satisfying visual effects). In this work, we focus on solving these issues for improved texture synthesis. We propose a deep generative feed-forward network which enables efficient synthesis of multiple textures within one single network and meaningful interpolation between them. Meanwhile, a suite of important techniques is introduced to achieve better convergence and diversity. With extensive experiments, we demonstrate the effectiveness of the proposed model and techniques for synthesizing a large number of textures and show its applications to stylization.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,409
2311.03421
Hopfield-Enhanced Deep Neural Networks for Artifact-Resilient Brain State Decoding
The study of brain states, ranging from highly synchronous to asynchronous neuronal patterns like the sleep-wake cycle, is fundamental for assessing the brain's spatiotemporal dynamics and their close connection to behavior. However, the development of new techniques to accurately identify them still remains a challenge, as these are often compromised by the presence of noise, artifacts, and suboptimal recording quality. In this study, we propose a two-stage computational framework combining Hopfield Networks for artifact data preprocessing with Convolutional Neural Networks (CNNs) for classification of brain states in rat neural recordings under different levels of anesthesia. To evaluate the robustness of our framework, we deliberately introduced noise artifacts into the neural recordings. We evaluated our hybrid Hopfield-CNN pipeline by benchmarking it against two comparative models: a standalone CNN handling the same noisy inputs, and another CNN trained and tested on artifact-free data. Performance across various levels of data compression and noise intensities showed that our framework can effectively mitigate artifacts, allowing the model to reach parity with the clean-data CNN at lower noise levels. Although this study mainly benefits small-scale experiments, the findings highlight the necessity for advanced deep learning and Hopfield Network models to improve scalability and robustness in diverse real-world settings.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
405,854
1611.00889
Designing Sparse Reliable Pose-Graph SLAM: A Graph-Theoretic Approach
In this paper, we aim to design sparse D-optimal (determinant-optimal) pose-graph SLAM problems through the synthesis of sparse graphs with the maximum weighted number of spanning trees. Characterizing graphs with the maximum number of spanning trees is an open problem in general. To tackle this problem, several new theoretical results are established in this paper, including the monotone log-submodularity of the weighted number of spanning trees. By exploiting these structures, we design a complementary pair of near-optimal efficient approximation algorithms with provable guarantees. Our theoretical results are validated using random graphs and a publicly available pose-graph SLAM dataset.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
63,295
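The weighted number of spanning trees that the entry above maximizes can be computed via the weighted Matrix-Tree theorem; a small dense NumPy sketch for illustration only, not the paper's synthesis algorithm:

```python
import numpy as np

def weighted_tree_count(W):
    """Weighted Matrix-Tree theorem: the weighted number of spanning trees
    equals any cofactor of the weighted Laplacian L = D - W.
    W: symmetric nonnegative edge-weight matrix."""
    L = np.diag(W.sum(axis=1)) - W
    return np.linalg.det(L[1:, 1:])   # delete one row and column, take det

# D-optimal synthesis then maximizes log det of this reduced Laplacian
# over candidate edge sets.
```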
1305.2741
Can Human-Like Bots Control Collective Mood: Agent-Based Simulations of Online Chats
Using an agent-based modeling approach, in this paper, we study self-organized dynamics of interacting agents in the presence of chat Bots. Different Bots with tunable ``human-like'' attributes, which exchange emotional messages with agents, are considered, and collective emotional behavior of agents is quantitatively analysed. In particular, using detrended fractal analysis we determine persistent fluctuations and temporal correlations in time series of agent's activity and statistics of avalanches carrying emotional messages of agents when Bots favoring positive/negative affects are active. We determine the impact of Bots and identify parameters that can modulate it. Our analysis suggests that, by these measures, the emotional Bots induce collective emotion among interacting agents by suitably altering the fractal characteristics of the underlying stochastic process. Positive-emotion Bots are slightly more effective than the negative ones. Moreover, the Bots which are periodically alternating between positive and negative emotion, can enhance fluctuations in the system leading to the avalanches of agent's messages that are reminiscent of self-organized critical states.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
24,544
2303.01173
Resource-Constrained Station-Keeping for Helium Balloons using Reinforcement Learning
High altitude balloons have proved useful for ecological aerial surveys, atmospheric monitoring, and communication relays. However, due to weight and power constraints, there is a need to investigate alternate modes of propulsion to navigate in the stratosphere. Very recently, reinforcement learning has been proposed as a control scheme to maintain the balloon in the region of a fixed location, facilitated through diverse opposing wind-fields at different altitudes. Although air-pump based station keeping has been explored, there is no research on the control problem for venting and ballasting actuated balloons, which is commonly used as a low-cost alternative. We show how reinforcement learning can be used for this type of balloon. Specifically, we use the soft actor-critic algorithm, which on average is able to station-keep within 50 km for 25% of the flight, consistent with state-of-the-art. Furthermore, we show that the proposed controller effectively minimises the consumption of resources, thereby supporting long duration flights. We frame the controller as a continuous control reinforcement learning problem, which allows for a more diverse range of trajectories, as opposed to current state-of-the-art work, which uses discrete action spaces. Furthermore, through continuous control, we can make use of larger ascent rates which are not possible using air-pumps. The desired ascent-rate is decoupled into desired altitude and time-factor to provide a more transparent policy, compared to low-level control commands used in previous works. Finally, by applying the equations of motion, we establish appropriate thresholds for venting and ballasting to prevent the agent from exploiting the environment. More specifically, we ensure actions are physically feasible by enforcing constraints on venting and ballasting.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
348,857
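A minimal sketch of the continuous-control setup described above using soft actor-critic from stable-baselines3; `BalloonEnv` is a hypothetical Gymnasium environment (not provided by the paper) exposing vent and ballast rates as continuous actions:

```python
from stable_baselines3 import SAC

env = BalloonEnv()        # hypothetical env: wind fields, altitude dynamics,
                          # continuous vent/ballast action space
model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("balloon_station_keeping_sac")
```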
2112.01707
TransCouplet: Transformer-based Chinese Couplet Generation
The Chinese couplet is a special form of poetry written in ancient Chinese with complex syntax. Due to the complexity of semantic and grammatical rules, the creation of a suitable couplet is a formidable challenge. This paper presents a transformer-based sequence-to-sequence couplet generation model. With the utilization of AnchiBERT, the model is able to capture ancient Chinese language understanding. Moreover, we evaluate Glyph, PinYin and Part-of-Speech tagging with respect to the couplet grammatical rules to further improve the model.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
269,580
2406.11087
DP-MemArc: Differential Privacy Transfer Learning for Memory Efficient Language Models
Large language models have repeatedly shown outstanding performance across diverse applications. However, deploying these models can inadvertently risk user privacy. The significant memory demands during training pose a major challenge in terms of resource consumption. This substantial size places a heavy load on memory resources, raising considerable practical concerns. In this paper, we introduce DP-MemArc, a novel training framework aimed at reducing the memory costs of large language models while emphasizing the protection of user data privacy. DP-MemArc incorporates side network or reversible network designs to support a variety of differential privacy memory-efficient fine-tuning schemes. Our approach not only achieves about a 2.5-fold memory optimization but also ensures robust privacy protection, keeping user data secure and confidential. Extensive experiments have demonstrated that DP-MemArc effectively provides differential privacy-efficient fine-tuning across different task scenarios.
false
false
false
false
true
false
true
false
true
false
false
false
true
false
false
false
false
false
464,710
2002.11221
Distributed Weighted Least-squares Estimation for Networked Systems with Edge Measurements
This paper studies the problem of distributed weighted least-squares (WLS) estimation for an interconnected linear measurement network with additive noise. Two types of measurements are considered: self measurements for individual nodes, and edge measurements for the connecting nodes. Each node in the network carries out distributed estimation by using its own measurement and information transmitted from its neighbours. We study two distributed estimation algorithms: a recently proposed distributed WLS algorithm and the so-called Gaussian Belief Propagation (BP) algorithm. We first establish the equivalence of the two algorithms. We then prove a key result which shows that the information matrix is always generalised diagonally dominant, under some very mild condition. Using these two results and some known convergence properties of the Gaussian BP algorithm, we show that the aforementioned distributed WLS algorithm gives the globally optimal WLS estimate asymptotically. A bound on its convergence rate is also presented.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
165,640
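The distributed WLS iteration above can be mimicked with a Jacobi-style update on the normal equations, where each coordinate update uses only its own row of the information matrix, a crude stand-in for messages exchanged with neighbours. This toy sketch assumes the diagonal dominance that the paper proves; it is not the authors' algorithm.

```python
import numpy as np

def jacobi_wls(H, W, z, iters=200):
    """Solve the WLS normal equations (H^T W H) x = H^T W z by Jacobi
    iteration. Convergence relies on the (generalised) diagonal
    dominance of the information matrix, the property the paper
    establishes for this class of networks."""
    A = H.T @ W @ H                    # information matrix
    b = H.T @ W @ z
    D = np.diag(A)
    x = np.zeros(A.shape[0])
    for _ in range(iters):
        x = (b - (A @ x - D * x)) / D  # x_i <- (b_i - sum_{j!=i} A_ij x_j) / A_ii
    return x

rng = np.random.default_rng(0)
# Measurement matrix chosen so the information matrix is diagonally dominant.
H = np.vstack([2.0 * np.eye(3), 0.1 * rng.normal(size=(7, 3))])
W = np.eye(10)
z = rng.normal(size=10)
print(jacobi_wls(H, W, z))
print(np.linalg.lstsq(H, z, rcond=None)[0])   # unweighted sanity check
```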
1909.09132
Spoken Speech Enhancement using EEG
In this paper we demonstrate spoken speech enhancement from electroencephalography (EEG) signals using a generative adversarial network (GAN) based model, a gated recurrent unit (GRU) regression based model, a temporal convolutional network (TCN) regression model, and finally a mixed TCN-GRU regression model. We compare our EEG-based speech enhancement results with the traditional log minimum mean-square error (MMSE) speech enhancement algorithm, and our proposed methods demonstrate significant improvement in speech enhancement quality compared to the traditional method. Our overall results demonstrate that EEG features can be used to clean speech recorded in the presence of background noise. To the best of our knowledge, this is the first time spoken speech enhancement has been demonstrated using EEG features recorded in parallel with spoken speech.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
146,154
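As a rough illustration of one of the regression models mentioned above, here is a toy GRU regressor in PyTorch mapping EEG feature sequences to clean-speech feature frames. All dimensions and names are invented for illustration; the paper's architectures and features are not reproduced.

```python
import torch
import torch.nn as nn

class EEG2SpeechGRU(nn.Module):
    """Toy GRU regressor mapping EEG feature frames to clean speech
    features (e.g. log-mel frames); dimensions are illustrative only."""
    def __init__(self, eeg_dim=30, hidden=128, speech_dim=40):
        super().__init__()
        self.gru = nn.GRU(eeg_dim, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, speech_dim)

    def forward(self, eeg):            # eeg: (batch, time, eeg_dim)
        out, _ = self.gru(eeg)
        return self.head(out)          # (batch, time, speech_dim)

model = EEG2SpeechGRU()
eeg = torch.randn(4, 100, 30)          # dummy batch of EEG sequences
target = torch.randn(4, 100, 40)       # dummy clean-speech targets
loss = nn.functional.mse_loss(model(eeg), target)
loss.backward()
print(loss.item())
```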
1607.04331
Random projections of random manifolds
Interesting data often concentrate on low dimensional smooth manifolds inside a high dimensional ambient space. Random projections are a simple, powerful tool for dimensionality reduction of such data. Previous works have studied bounds on how many projections are needed to accurately preserve the geometry of these manifolds, given their intrinsic dimensionality, volume and curvature. However, such works employ definitions of volume and curvature that are inherently difficult to compute. Therefore such theory cannot be easily tested against numerical simulations to understand the tightness of the proven bounds. We instead study typical distortions arising in random projections of an ensemble of smooth Gaussian random manifolds. We find explicitly computable, approximate theoretical bounds on the number of projections required to accurately preserve the geometry of these manifolds. Our bounds, while approximate, can only be violated with a probability that is exponentially small in the ambient dimension, and therefore they hold with high probability in cases of practical interest. Moreover, unlike previous work, we test our theoretical bounds against numerical experiments on the actual geometric distortions that typically occur for random projections of random smooth manifolds. We find our bounds are tighter than previous results by several orders of magnitude.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
58,601
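The basic experiment the abstract describes, projecting a smooth manifold with a random matrix and measuring geometric distortion, can be sketched in a few lines of NumPy. This toy version uses a circle embedded in a high-dimensional ambient space rather than the paper's Gaussian random manifolds.

```python
import numpy as np

rng = np.random.default_rng(0)
N, n, m = 500, 1000, 40        # ambient dim, sample count, projected dim

# A 1-D manifold (a circle) embedded in R^N via a random orthonormal basis.
t = np.linspace(0, 2 * np.pi, n)
basis, _ = np.linalg.qr(rng.normal(size=(N, 2)))
X = np.stack([np.cos(t), np.sin(t)], axis=1) @ basis.T   # (n, N)

P = rng.normal(size=(m, N)) / np.sqrt(m)                 # random projection
Y = X @ P.T

# Distortion of pairwise (chordal) distances under the projection.
i, j = rng.integers(0, n, 2000), rng.integers(0, n, 2000)
d_hi = np.linalg.norm(X[i] - X[j], axis=1)
d_lo = np.linalg.norm(Y[i] - Y[j], axis=1)
mask = d_hi > 1e-8                                       # skip coincident pairs
print("max relative distortion:", np.max(np.abs(d_lo[mask] / d_hi[mask] - 1)))
```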
2409.15161
A Gated Residual Kolmogorov-Arnold Networks for Mixtures of Experts
This paper introduces KAMoE, a novel Mixture of Experts (MoE) framework based on Gated Residual Kolmogorov-Arnold Networks (GRKAN). We propose GRKAN as an alternative to the traditional gating function, aiming to enhance efficiency and interpretability in MoE modeling. Through extensive experiments on digital asset markets and real estate valuation, we demonstrate that KAMoE consistently outperforms traditional MoE architectures across various tasks and model types. Our results show that GRKAN exhibits superior performance compared to standard Gating Residual Networks, particularly in LSTM-based models for sequential tasks. We also provide insights into the trade-offs between model complexity and performance gains in MoE and KAMoE architectures.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
490,783
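For orientation, the Gated Residual Network pattern that GRKAN modifies can be sketched as a gating function producing expert weights. The sketch below is the standard GRN-style gate only; the paper's Kolmogorov-Arnold layers are omitted, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedResidualGate(nn.Module):
    """Gated residual gating block producing MoE expert weights.
    Follows the standard Gated Residual Network pattern; GRKAN
    additionally swaps dense layers for Kolmogorov-Arnold layers,
    which are not reproduced here."""
    def __init__(self, d_in, d_hidden, n_experts):
        super().__init__()
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_hidden)
        self.gate = nn.Linear(d_hidden, d_hidden)
        self.skip = nn.Linear(d_in, d_hidden)
        self.norm = nn.LayerNorm(d_hidden)
        self.out = nn.Linear(d_hidden, n_experts)

    def forward(self, x):
        h = F.elu(self.fc1(x))
        h = self.fc2(h)
        h = torch.sigmoid(self.gate(h)) * h      # elementwise gating
        h = self.norm(h + self.skip(x))          # residual + layer norm
        return F.softmax(self.out(h), dim=-1)    # weights over experts

gate = GatedResidualGate(d_in=16, d_hidden=32, n_experts=4)
weights = gate(torch.randn(8, 16))
print(weights.sum(dim=-1))   # each row sums to 1
```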
2010.12306
Network Classifiers Based on Social Learning
This work proposes a new way of combining independently trained classifiers over space and time. Combination over space means that the outputs of spatially distributed classifiers are aggregated. Combination over time means that the classifiers respond to streaming data during testing and continue to improve their performance even during this phase. By doing so, the proposed architecture is able to improve prediction performance over time with unlabeled data. Inspired by social learning algorithms, which require prior knowledge of the distribution of the observations, we propose a Social Machine Learning (SML) paradigm that is able to exploit the imperfect models generated during the learning phase. We show that this strategy results in consistent learning with high probability, and it yields a robust structure against poorly trained classifiers. Simulations with an ensemble of feedforward neural networks are provided to illustrate the theoretical results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
202,646
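A crude way to picture combination over space and time is to accumulate log-posteriors across distributed classifiers and across streaming samples, then pick the best-supported class. The sketch below is only this naive aggregation; the SML strategy additionally learns how much to trust each classifier.

```python
import numpy as np

def combine(posteriors_over_time):
    """Aggregate streaming classifier outputs by summing log-posteriors:
    axis 0 is time (streaming samples), axis 1 is space (distributed
    classifiers), axis 2 is the class dimension."""
    logp = np.log(np.clip(posteriors_over_time, 1e-12, 1.0))
    scores = logp.sum(axis=(0, 1))     # sum over time and classifiers
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
# (time steps, classifiers, classes): noisy votes that favour class 2.
p = rng.dirichlet(alpha=[1, 1, 3], size=(50, 5))
print(combine(p))   # -> 2 with high probability
```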
2209.07578
Pixel-wise classification in graphene-detection with tree-based machine learning algorithms
Mechanical exfoliation of graphene and its identification by optical inspection is one of the milestones in condensed matter physics that sparked the field of 2D materials. Finding regions of interest within the entire sample space and identifying the layer number are routine tasks potentially amenable to automation. We propose supervised pixel-wise classification methods that show high performance even with a small number of training images, while requiring short computation times without a GPU. We introduce four different tree-based machine learning algorithms -- decision tree, random forest, extreme gradient boost, and light gradient boosting machine. We train them with five optical microscopy images of graphene and evaluate their performances with multiple metrics and indices. We also discuss combinatorial machine learning models among the three single classifiers and assess their performances in identification and reliability. The code developed in this paper is open to the public and will be released at github.com/gjung-group/Graphene_segmentation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
317,798
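Pixel-wise classification with a tree ensemble reduces, at its simplest, to treating every pixel as a feature row. The sketch below uses scikit-learn's RandomForestClassifier on a synthetic image; the paper's optical-microscopy features, labels, and the other three algorithms are not reproduced.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Toy stand-in for an optical microscopy image: per-pixel RGB features
# and a per-pixel layer label (0 = substrate, 1 = flake, ...).
rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
labels = (img[..., 0] > 0.5).astype(int)    # synthetic ground truth

X = img.reshape(-1, 3)                      # one row per pixel
y = labels.reshape(-1)

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit(X, y)
pred = clf.predict(X).reshape(64, 64)       # pixel-wise label map
print("train accuracy:", (pred == labels).mean())
```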
2011.02126
Incremental Machine Speech Chain Towards Enabling Listening while Speaking in Real-time
Inspired by the human speech chain mechanism, a machine speech chain framework based on deep learning was recently proposed for the semi-supervised development of automatic speech recognition (ASR) and text-to-speech synthesis (TTS) systems. However, the mechanism to listen while speaking can be applied only after receiving the entire input sequence, so there is a significant delay when encountering long utterances. By contrast, humans can listen to what they speak in real-time, and if there is a delay in hearing, they won't be able to continue speaking. In this work, we propose an incremental machine speech chain towards enabling machines to listen while speaking in real-time. Specifically, we construct incremental ASR (ISR) and incremental TTS (ITTS) by letting both systems improve together through a short-term loop. Our experimental results reveal that our proposed framework is able to reduce delays due to long utterances while keeping performance comparable to the non-incremental basic machine speech chain.
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
204,829
2104.12709
Rich Semantics Improve Few-shot Learning
Human learning benefits from multi-modal inputs that often appear as rich semantics (e.g., description of an object's attributes while learning about it). This enables us to learn generalizable concepts from very limited visual examples. However, current few-shot learning (FSL) methods use numerical class labels to denote object classes which do not provide rich semantic meanings about the learned concepts. In this work, we show that by using 'class-level' language descriptions, that can be acquired with minimal annotation cost, we can improve the FSL performance. Given a support set and queries, our main idea is to create a bottleneck visual feature (hybrid prototype) which is then used to generate language descriptions of the classes as an auxiliary task during training. We develop a Transformer based forward and backward encoding mechanism to relate visual and semantic tokens that can encode intricate relationships between the two modalities. Forcing the prototypes to retain semantic information about class description acts as a regularizer on the visual features, improving their generalization to novel classes at inference. Furthermore, this strategy imposes a human prior on the learned representations, ensuring that the model is faithfully relating visual and semantic concepts, thereby improving model interpretability. Our experiments on four datasets and ablation studies show the benefit of effectively modeling rich semantics for FSL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
232,298
2007.09493
Deep Hough-Transform Line Priors
Classical work on line segment detection is knowledge-based; it uses carefully designed geometric priors based on either image gradients, pixel groupings, or Hough transform variants. Instead, current deep learning methods do away with all prior knowledge and replace priors by training deep networks on large manually annotated datasets. Here, we reduce the dependency on labeled data by building on the classic knowledge-based priors while using deep networks to learn features. We add line priors through a trainable Hough transform block into a deep network. The Hough transform provides the prior knowledge about global line parameterizations, while the convolutional layers can learn the local gradient-like line features. On the Wireframe (ShanghaiTech) and York Urban datasets we show that adding prior knowledge improves data efficiency, as line priors no longer need to be learned from data. Keywords: Hough transform; global line prior; line segment detection.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
187,965
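For reference, the classical (rho, theta) Hough voting that the paper turns into a trainable block can be written in a few lines of NumPy. This fixed, non-differentiable version is only the starting point the abstract alludes to.

```python
import numpy as np

def hough_accumulator(edge_mask, n_theta=180, n_rho=200):
    """Classical (rho, theta) Hough transform of a binary edge mask.
    Each edge pixel votes for all lines passing through it."""
    h, w = edge_mask.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(edge_mask)
    for theta_idx, theta in enumerate(thetas):
        rho = xs * np.cos(theta) + ys * np.sin(theta)
        bins = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        np.add.at(acc[:, theta_idx], bins, 1)   # vote
    return acc

mask = np.zeros((50, 50), dtype=bool)
mask[np.arange(50), np.arange(50)] = True        # a diagonal line
acc = hough_accumulator(mask)
print(np.unravel_index(acc.argmax(), acc.shape))  # the line's (rho, theta) bin
```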
2204.01952
Towards On-Board Panoptic Segmentation of Multispectral Satellite Images
With tremendous advancements in low-power embedded computing devices and remote sensing instruments, the traditional satellite image processing pipeline which includes an expensive data transfer step prior to processing data on the ground is being replaced by on-board processing of captured data. This paradigm shift enables critical and time-sensitive analytic intelligence to be acquired in a timely manner on-board the satellite itself. However, at present, the on-board processing of multi-spectral satellite images is limited to classification and segmentation tasks. Extending this processing to its next logical level, in this paper we propose a lightweight pipeline for on-board panoptic segmentation of multi-spectral satellite images. Panoptic segmentation offers major economic and environmental insights, ranging from yield estimation from agricultural lands to intelligence for complex military applications. Nevertheless, the on-board intelligence extraction raises several challenges due to the loss of temporal observations and the need to generate predictions from a single image sample. To address this challenge, we propose a multimodal teacher network based on a cross-modality attention-based fusion strategy to improve the segmentation accuracy by exploiting data from multiple modes. We also propose an online knowledge distillation framework to transfer the knowledge learned by this multi-modal teacher network to a uni-modal student which receives only a single frame input, and is more appropriate for an on-board environment. We benchmark our approach against existing state-of-the-art panoptic segmentation models using the PASTIS multi-spectral panoptic segmentation dataset considering an on-board processing setting. Our evaluations demonstrate a substantial increase in accuracy metrics compared to the existing state-of-the-art models.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
289,776
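The knowledge-distillation component described above commonly builds on the standard soft-target loss: a temperature-scaled KL term between teacher and student logits plus a hard-label term. The sketch below shows that generic loss, not the paper's online multi-modal-to-uni-modal scheme.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Standard soft-target knowledge distillation loss: KL between
    temperature-softened teacher and student distributions, blended
    with the usual hard-label cross-entropy."""
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                       # rescale gradients for temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)   # student logits
t = torch.randn(8, 10)                       # frozen teacher logits
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y).item())
```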
1812.10613
Generative Adversarial User Model for Reinforcement Learning Based Recommendation System
There are great interests as well as many challenges in applying reinforcement learning (RL) to recommendation systems. In this setting, an online user is the environment; neither the reward function nor the environment dynamics are clearly defined, making the application of RL challenging. In this paper, we propose a novel model-based reinforcement learning framework for recommendation systems, where we develop a generative adversarial network to imitate user behavior dynamics and learn her reward function. Using this user model as the simulation environment, we develop a novel Cascading DQN algorithm to obtain a combinatorial recommendation policy which can handle a large number of candidate items efficiently. In our experiments with real data, we show this generative adversarial user model can better explain user behavior than alternatives, and the RL policy based on this model can lead to a better long-term reward for the user and higher click rate for the system.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
117,408
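The combinatorial recommendation policy can be pictured as a greedy cascade: items are chosen one at a time, re-scoring the remaining candidates conditioned on what has been picked so far. The sketch below uses a hypothetical `q_fn` stand-in; it is not the paper's Cascading DQN, which learns these values with function approximation.

```python
import numpy as np

def cascading_select(q_fn, state, items, k):
    """Greedy cascade: pick k items sequentially, re-scoring remaining
    candidates given the partial selection. `q_fn(state, chosen, item)`
    is a hypothetical learned Q-estimate."""
    chosen, remaining = [], list(items)
    for _ in range(k):
        scores = [q_fn(state, chosen, it) for it in remaining]
        chosen.append(remaining.pop(int(np.argmax(scores))))
    return chosen

# Toy Q: prefers high base score, penalises items similar to chosen ones.
base = {i: v for i, v in enumerate(np.random.default_rng(0).random(10))}
q = lambda s, chosen, it: base[it] - 0.1 * sum(abs(it - c) < 2 for c in chosen)
print(cascading_select(q, state=None, items=range(10), k=3))
```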
2110.14038
Robustness of Graph Neural Networks at Scale
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications. Yet, existing studies of their vulnerability to adversarial attacks rely on relatively small graphs. We address this gap and study how to attack and defend GNNs at scale. We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation despite optimizing over a number of parameters which is quadratic in the number of nodes. We show that common surrogate losses are not well-suited for global attacks on GNNs. Our alternatives can double the attack strength. Moreover, to improve GNNs' reliability we design a robust aggregation function, Soft Median, resulting in an effective defense at all scales. We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger compared to previous work. We even scale one order of magnitude further by extending our techniques to a scalable GNN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,389
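A simplified picture of a robust "soft median" aggregation: compute the dimension-wise median, then take a weighted mean whose weights decay with distance to it, so outlying messages are down-weighted. This sketches the general idea only, not the paper's exact Soft Median operator.

```python
import numpy as np

def soft_median(X, temperature=1.0):
    """Soft median of row vectors: a weighted mean whose weights decay
    with distance to the dimension-wise median, down-weighting outlier
    rows such as adversarially inserted neighbour messages."""
    med = np.median(X, axis=0)
    dist = np.linalg.norm(X - med, axis=1)
    w = np.exp(-dist / temperature)
    w /= w.sum()
    return w @ X

rng = np.random.default_rng(0)
clean = rng.normal(0.0, 0.1, size=(9, 4))
outlier = np.full((1, 4), 10.0)          # adversarial message
X = np.vstack([clean, outlier])
print("mean:       ", X.mean(axis=0))    # dragged toward the outlier
print("soft median:", soft_median(X))    # stays near the clean cluster
```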
2408.05146
Performative Prediction on Games and Mechanism Design
Agents often have individual goals which depend on a group's actions. If agents trust a forecast of collective action and adapt strategically, such prediction can influence outcomes non-trivially, resulting in a form of performative prediction. This effect is ubiquitous in scenarios ranging from pandemic predictions to election polls, but existing work has ignored interdependencies among predicted agents. As a first step in this direction, we study a collective risk dilemma where agents dynamically decide whether to trust predictions based on past accuracy. As predictions shape collective outcomes, social welfare arises naturally as a metric of concern. We explore the resulting interplay between accuracy and welfare, and demonstrate that searching for stable accurate predictions can minimize social welfare with high probability in our setting. By assuming knowledge of a Bayesian agent behavior model, we then show how to achieve better trade-offs and use them for mechanism design.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
true
479,673
2204.10377
Contrastive Test-Time Adaptation
Test-time adaptation is a special setting of unsupervised domain adaptation where a trained model on the source domain has to adapt to the target domain without accessing source data. We propose a novel way to leverage self-supervised contrastive learning to facilitate target feature learning, along with an online pseudo labeling scheme with refinement that significantly denoises pseudo labels. The contrastive learning task is applied jointly with pseudo labeling, contrasting positive and negative pairs constructed similarly as MoCo but with source-initialized encoder, and excluding same-class negative pairs indicated by pseudo labels. Meanwhile, we produce pseudo labels online and refine them via soft voting among their nearest neighbors in the target feature space, enabled by maintaining a memory queue. Our method, AdaContrast, achieves state-of-the-art performance on major benchmarks while having several desirable properties compared to existing works, including memory efficiency, insensitivity to hyper-parameters, and better model calibration. Project page: sites.google.com/view/adacontrast.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
292,759
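The pseudo-label refinement step described above, soft voting among nearest neighbours in a memory queue, can be sketched directly. Distances below are plain Euclidean and all shapes are toy values; the actual method operates on normalized target features with a momentum-updated queue.

```python
import numpy as np

def refine_pseudo_labels(feats, queue_feats, queue_probs, k=5):
    """Refine pseudo labels by soft voting: for each sample, average the
    class probabilities of its k nearest neighbours in the memory queue
    and take the argmax, denoising the raw predictions."""
    refined = []
    for f in feats:
        d = np.linalg.norm(queue_feats - f, axis=1)
        nn_idx = np.argsort(d)[:k]                    # k nearest neighbours
        refined.append(queue_probs[nn_idx].mean(axis=0).argmax())
    return np.array(refined)

rng = np.random.default_rng(0)
queue_feats = rng.normal(size=(100, 16))   # memory queue of target features
queue_probs = rng.dirichlet([1, 1, 1], size=100)
feats = rng.normal(size=(4, 16))           # current batch features
print(refine_pseudo_labels(feats, queue_feats, queue_probs))
```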
2402.10618
Enhancing Role-playing Systems through Aggressive Queries: Evaluation and Improvement
The advent of Large Language Models (LLMs) has propelled dialogue generation into new realms, particularly in the field of role-playing systems (RPSs). While enhanced with ordinary role-relevant training dialogues, existing LLM-based RPSs still struggle to align with roles when handling intricate and trapped queries in boundary scenarios. In this paper, we design the Modular ORchestrated Trap-setting Interaction SystEm (MORTISE) to benchmark and improve the role-playing LLMs' performance. MORTISE can produce highly role-relevant aggressive queries through the collaborative effort of multiple LLM-based modules, and formulate corresponding responses to create an adversarial training dataset via a consistent response generator. We select 190 Chinese and English roles to construct aggressive queries to benchmark existing role-playing LLMs. Through comprehensive evaluation, we find that existing models exhibit a general deficiency in role alignment capabilities. We further select 180 of the roles to collect an adversarial training dataset (named RoleAD) and retain the other 10 roles for testing. Experiments on models improved by RoleAD indicate that our adversarial dataset ameliorates this deficiency, with the improvements demonstrating a degree of generalizability in ordinary scenarios.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
430,037