| id (string, 9–16) | title (string, 4–278) | abstract (string, 3–4.08k) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64, 0–541k) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2111.08869 | Enhanced Correlation Matching based Video Frame Interpolation | We propose a novel DNN-based framework, the Enhanced Correlation Matching based Video Frame Interpolation Network, to support high-resolution videos such as 4K, which involve large-scale motion and occlusion. Considering the extensibility of the network model with respect to resolution, the proposed scheme employs a recurrent pyramid architecture that shares parameters across pyramid layers for optical flow estimation. In the proposed flow estimation, the optical flows are recursively refined by tracing the location with maximum correlation. The forward-warping-based correlation matching improves the accuracy of the flow update by excluding incorrectly warped features around the occlusion area. Based on the final bi-directional flows, the intermediate frame at an arbitrary temporal position is synthesized by a warping-and-blending network and further improved by a refinement network. Experimental results demonstrate that the proposed scheme outperforms previous works on both 4K video data and low-resolution benchmark datasets in terms of objective and subjective quality, with the smallest number of model parameters. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 266,835 |
2501.10348 | Credit Risk Identification in Supply Chains Using Generative Adversarial Networks | Credit risk management within supply chains has emerged as a critical research area due to its significant implications for operational stability and financial sustainability. The intricate interdependencies among supply chain participants mean that credit risks can propagate across networks, with impacts varying by industry. This study explores the application of Generative Adversarial Networks (GANs) to enhance credit risk identification in supply chains. GANs enable the generation of synthetic credit risk scenarios, addressing challenges related to data scarcity and imbalanced datasets. By leveraging GAN-generated data, the model improves predictive accuracy while effectively capturing dynamic and temporal dependencies in supply chain data. The research focuses on three representative industries: manufacturing (steel), distribution (pharmaceuticals), and services (e-commerce), to assess industry-specific credit risk contagion. Experimental results demonstrate that the GAN-based model outperforms traditional methods, including logistic regression, decision trees, and neural networks, achieving superior accuracy, recall, and F1 scores. The findings underscore the potential of GANs in proactive risk management, offering robust tools for mitigating financial disruptions in supply chains. Future research could expand the model by incorporating external market factors and supplier relationships to further enhance predictive capabilities. Keywords: Generative Adversarial Networks (GANs); Supply Chain Risk; Credit Risk Identification; Machine Learning; Data Augmentation | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 525,480 |
2406.14194 | VLBiasBench: A Comprehensive Benchmark for Evaluating Bias in Large Vision-Language Model | The emergence of Large Vision-Language Models (LVLMs) marks significant strides towards achieving general artificial intelligence. However, these advancements are accompanied by concerns about biased outputs, a challenge that has yet to be thoroughly explored. Existing benchmarks are not sufficiently comprehensive in evaluating biases due to their limited data scale, single questioning format and narrow sources of bias. To address this problem, we introduce VLBiasBench, a comprehensive benchmark designed to evaluate biases in LVLMs. VLBiasBench features a dataset that covers nine distinct categories of social bias, including age, disability status, gender, nationality, physical appearance, race, religion, profession, and socioeconomic status, as well as two intersectional bias categories: race x gender and race x socioeconomic status. To build a large-scale dataset, we use the Stable Diffusion XL model to generate 46,848 high-quality images, which are combined with various questions to create 128,342 samples. These questions are divided into open-ended and closed-ended types, ensuring thorough consideration of bias sources and a comprehensive evaluation of LVLM biases from multiple perspectives. We conduct extensive evaluations on 15 open-source models as well as two advanced closed-source models, yielding new insights into the biases present in these models. Our benchmark is available at https://github.com/Xiangkui-Cao/VLBiasBench. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 466,201 |
2207.11860 | Behind Every Domain There is a Shift: Adapting Distortion-aware Vision Transformers for Panoramic Semantic Segmentation | In this paper, we address panoramic semantic segmentation, which is under-explored due to two critical challenges: (1) image distortions and object deformations on panoramas; (2) lack of semantic annotations in 360° imagery. To tackle these problems, first, we propose the upgraded Transformer for Panoramic Semantic Segmentation, i.e., Trans4PASS+, equipped with Deformable Patch Embedding (DPE) and Deformable MLP (DMLPv2) modules for handling object deformations and image distortions whenever (before or after adaptation) and wherever (shallow or deep levels). Second, we enhance the Mutual Prototypical Adaptation (MPA) strategy via pseudo-label rectification for unsupervised domain adaptive panoramic segmentation. Third, aside from Pinhole-to-Panoramic (Pin2Pan) adaptation, we create a new dataset (SynPASS) with 9,080 panoramic images, facilitating the Synthetic-to-Real (Syn2Real) adaptation scheme in 360° imagery. Extensive experiments are conducted, covering indoor and outdoor scenarios, each investigated under both the Pin2Pan and Syn2Real regimens. Trans4PASS+ achieves state-of-the-art performance on four domain adaptive panoramic semantic segmentation benchmarks. Code is available at https://github.com/jamycheung/Trans4PASS. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 309,806 |
2407.13101 | Retrieve, Summarize, Plan: Advancing Multi-hop Question Answering with an Iterative Approach | Multi-hop question answering is a challenging task with distinct industrial relevance, and Retrieval-Augmented Generation (RAG) methods based on large language models (LLMs) have become a popular approach to tackle this task. Owing to the potential inability to retrieve all necessary information in a single iteration, a series of iterative RAG methods has been recently developed, showing significant performance improvements. However, existing methods still face two critical challenges: context overload resulting from multiple rounds of retrieval, and over-planning and repetitive planning due to the lack of a recorded retrieval trajectory. In this paper, we propose a novel iterative RAG method called ReSP, equipped with a dual-function summarizer. This summarizer compresses information from retrieved documents, targeting both the overarching question and the current sub-question concurrently. Experimental results on the multi-hop question-answering datasets HotpotQA and 2WikiMultihopQA demonstrate that our method significantly outperforms the state-of-the-art, and exhibits excellent robustness concerning context length. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 474,233 |
2307.14856 | Exploiting the Potential of Seq2Seq Models as Robust Few-Shot Learners | In-context learning, which offers substantial advantages over fine-tuning, is predominantly observed in decoder-only models, while encoder-decoder (i.e., seq2seq) models excel in methods that rely on weight updates. Recently, a few studies have demonstrated the feasibility of few-shot learning with seq2seq models; however, this has been limited to tasks that align well with the seq2seq architecture, such as summarization and translation. Inspired by these initial studies, we provide a first-ever extensive experiment comparing the in-context few-shot learning capabilities of decoder-only and encoder-decoder models on a broad range of tasks. Furthermore, we propose two methods to more effectively elicit in-context learning ability in seq2seq models: objective-aligned prompting and a fusion-based approach. Remarkably, our approach outperforms a decoder-only model that is six times larger and exhibits significant performance improvements compared to conventional seq2seq models across a variety of settings. We posit that, with the right configuration and prompt design, seq2seq models can be highly effective few-shot learners for a wide spectrum of applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 382,070 |
2011.01151 | Optimize what matters: Training DNN-HMM Keyword Spotting Model Using End Metric | Deep Neural Network-Hidden Markov Model (DNN-HMM) based methods have been successfully used for many always-on keyword spotting algorithms that detect a wake word to trigger a device. The DNN predicts the state probabilities of a given speech frame, while the HMM decoder combines the DNN predictions of multiple speech frames to compute the keyword detection score. In prior methods, the DNN is trained independently of the HMM parameters to minimize the cross-entropy loss between the predicted and the ground-truth state probabilities. This mismatch between the DNN training loss (cross-entropy) and the end metric (detection score) is the main source of sub-optimal performance for the keyword spotting task. We address this loss-metric mismatch with a novel end-to-end training strategy that learns the DNN parameters by optimizing for the detection score. To this end, we make the HMM decoder (dynamic programming) differentiable and back-propagate through it to maximize the score for the keyword and minimize the scores for non-keyword speech segments. Our method does not require any change in the model architecture or the inference framework; therefore, there is no overhead in run-time memory or compute requirements. Moreover, we show a significant reduction in false rejection rate (FRR) at the same false-trigger experience (>70% over independent DNN training). | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 204,506 |
2003.00703 | UFTR: A Unified Framework for Ticket Routing | Corporations today face increasing demands for the timely and effective delivery of customer service. This creates the need for a robust and accurate automated solution to what is formally known as the ticket routing problem. This task is to match each unresolved service incident, or "ticket", to the right group of service experts. Existing studies divide the task into two independent subproblems - initial group assignment and inter-group transfer. However, our study addresses both subproblems jointly using an end-to-end modeling approach. We first performed a preliminary analysis of half a million archived tickets to uncover relevant features. Then, we devised the UFTR, a Unified Framework for Ticket Routing using four types of features (derived from tickets, groups, and their interactions). In our experiments, we implemented two ranking models with the UFTR. Our models outperform baselines on three routing metrics. Furthermore, a post-hoc analysis reveals that this superior performance can largely be attributed to the features that capture the associations between ticket assignment and group assignment. In short, our results demonstrate that the UFTR is a superior solution to the ticket routing problem because it takes into account previously unexploited interrelationships between the group assignment and group transfer problems. | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 166,385 |
1504.02351 | When Face Recognition Meets with Deep Learning: an Evaluation of Convolutional Neural Networks for Face Recognition | Deep learning, in particular the Convolutional Neural Network (CNN), has achieved promising results in face recognition recently. However, it remains an open question why CNNs work well and how to design a 'good' architecture. Existing works tend to report CNN architectures that work well for face recognition rather than investigate the reason. In this work, we conduct an extensive evaluation of CNN-based face recognition systems (CNN-FRS) on a common ground to make our work easily reproducible. Specifically, we use the public database LFW (Labeled Faces in the Wild) to train CNNs, unlike most existing CNNs trained on private databases. We propose three CNN architectures, which are the first reported architectures trained using LFW data. This paper quantitatively compares the architectures of CNNs and evaluates the effect of different implementation choices. We identify several useful properties of CNN-FRS. For instance, the dimensionality of the learned features can be significantly reduced without adverse effect on face recognition accuracy. In addition, a traditional metric learning method exploiting CNN-learned features is evaluated. Experiments show that two crucial factors for good CNN-FRS performance are the fusion of multiple CNNs and metric learning. To make our work reproducible, source code and models will be made publicly available. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 41,910 |
2106.09866 | On Minimizing Cost in Legal Document Review Workflows | Technology-assisted review (TAR) refers to human-in-the-loop machine learning workflows for document review in legal discovery and other high recall review tasks. Attorneys and legal technologists have debated whether review should be a single iterative process (one-phase TAR workflows) or whether model training and review should be separate (two-phase TAR workflows), with implications for the choice of active learning algorithm. The relative cost of manual labeling for different purposes (training vs. review) and of different documents (positive vs. negative examples) is a key and neglected factor in this debate. Using a novel cost dynamics analysis, we show analytically and empirically that these relative costs strongly impact whether a one-phase or two-phase workflow minimizes cost. We also show how category prevalence, classification task difficulty, and collection size impact the optimal choice not only of workflow type, but of active learning method and stopping point. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 241,813 |
2006.10637 | Temporal Graph Networks for Deep Learning on Dynamic Graphs | Graph Neural Networks (GNNs) have recently become increasingly popular due to their ability to learn complex systems of relations or interactions arising in a broad spectrum of problems ranging from biology and particle physics to social networks and recommendation systems. Despite the plethora of different models for deep learning on graphs, few approaches have been proposed thus far for dealing with graphs that present some sort of dynamic nature (e.g. evolving features or connectivity over time). In this paper, we present Temporal Graph Networks (TGNs), a generic, efficient framework for deep learning on dynamic graphs represented as sequences of timed events. Thanks to a novel combination of memory modules and graph-based operators, TGNs are able to significantly outperform previous approaches while being at the same time more computationally efficient. We furthermore show that several previous models for learning on dynamic graphs can be cast as specific instances of our framework. We perform a detailed ablation study of different components of our framework and devise the best configuration that achieves state-of-the-art performance on several transductive and inductive prediction tasks for dynamic graphs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,955 |
1605.00942 | TheanoLM - An Extensible Toolkit for Neural Network Language Modeling | We present a new tool for training neural network language models (NNLMs), scoring sentences, and generating text. The tool has been written using the Python library Theano, which allows researchers to easily extend it and tune any aspect of the training process. Despite this flexibility, Theano is able to generate extremely fast native code that can utilize a GPU or multiple CPU cores in order to parallelize the heavy numerical computations. The tool has been evaluated in difficult Finnish and English conversational speech recognition tasks, and significant improvement was obtained over our best back-off n-gram models. The results that we obtained in the Finnish task were compared to those from the existing RNNLM and RWTHLM toolkits, and found to be as good or better, while training times were an order of magnitude shorter. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | 55,405 |
2404.19462 | Continual Model-based Reinforcement Learning for Data Efficient Wireless Network Optimisation | We present a method that addresses the pain point of long lead-time required to deploy cell-level parameter optimisation policies to new wireless network sites. Given a sequence of action spaces represented by overlapping subsets of cell-level configuration parameters provided by domain experts, we formulate throughput optimisation as Continual Reinforcement Learning of control policies. Simulation results suggest that the proposed system is able to shorten the end-to-end deployment lead-time by two-fold compared to a reinitialise-and-retrain baseline without any drop in optimisation gain. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 450,654 |
2308.12110 | Constrained Stein Variational Trajectory Optimization | We present Constrained Stein Variational Trajectory Optimization (CSVTO), an algorithm for performing trajectory optimization with constraints on a set of trajectories in parallel. We frame constrained trajectory optimization as a novel form of constrained functional minimization over trajectory distributions, which avoids treating the constraints as a penalty in the objective and allows us to generate diverse sets of constraint-satisfying trajectories. Our method uses Stein Variational Gradient Descent (SVGD) to find a set of particles that approximates a distribution over low-cost trajectories while obeying constraints. CSVTO is applicable to problems with differentiable equality and inequality constraints and includes a novel particle re-sampling step to escape local minima. By explicitly generating diverse sets of trajectories, CSVTO is better able to avoid poor local minima and is more robust to initialization. We demonstrate that CSVTO outperforms baselines in challenging highly-constrained tasks, such as a 7DoF wrench manipulation task, where CSVTO outperforms all baselines both in success and constraint satisfaction. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 387,422 |
1602.06042 | Structured Sparse Regression via Greedy Hard-Thresholding | Several learning applications require solving high-dimensional regression problems where the relevant features belong to a small number of (overlapping) groups. For very large datasets and under standard sparsity constraints, hard thresholding methods have proven to be extremely efficient, but such methods require NP hard projections when dealing with overlapping groups. In this paper, we show that such NP-hard projections can not only be avoided by appealing to submodular optimization, but such methods come with strong theoretical guarantees even in the presence of poorly conditioned data (i.e. say when two features have correlation $\geq 0.99$), which existing analyses cannot handle. These methods exhibit an interesting computation-accuracy trade-off and can be extended to significantly harder problems such as sparse overlapping groups. Experiments on both real and synthetic data validate our claims and demonstrate that the proposed methods are orders of magnitude faster than other greedy and convex relaxation techniques for learning with group-structured sparsity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 52,319 |
2103.07765 | Image Classifiers for Network Intrusions | This research recasts the network attack dataset from UNSW-NB15 as an intrusion detection problem in image space. Using one-hot encodings, the resulting grayscale thumbnails provide a quarter-million examples for deep learning algorithms. Applying MobileNetV2's convolutional neural network architecture, the work demonstrates a 97% accuracy in distinguishing normal and attack traffic. Further class refinement into 9 individual attack families (exploits, worms, shellcodes) shows an overall 56% accuracy. Using feature importance ranking, a random forest solution on subsets shows the most important source-destination factors and identifies the least important ones as mainly obscure protocols. The dataset is available on Kaggle. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 224,686 |
1904.12894 | DiamondGAN: Unified Multi-Modal Generative Adversarial Networks for MRI Sequences Synthesis | Synthesizing MR imaging sequences is highly relevant in clinical practice, as single sequences are often missing or are of poor quality (e.g. due to motion). Naturally, the idea arises that a target modality would benefit from multi-modal input, as proprietary information of individual modalities can be synergistic. However, existing methods fail to scale up to multiple non-aligned imaging modalities, facing common drawbacks of complex imaging sequences. We propose a novel, scalable and multi-modal approach called DiamondGAN. Our model is capable of performing flexible non-aligned cross-modality synthesis and data infill, when given multiple modalities or any of their arbitrary subsets, learning structured information in an end-to-end fashion. We synthesize two MRI sequences with clinical relevance (i.e., double inversion recovery (DIR) and contrast-enhanced T1 (T1-c)), reconstructed from three common sequences. In addition, we perform a multi-rater visual evaluation experiment and find that trained radiologists are unable to distinguish synthetic DIR images from real ones. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 129,243 |
2302.03640 | SSR-2D: Semantic 3D Scene Reconstruction from 2D Images | Most deep learning approaches to comprehensive semantic modeling of 3D indoor spaces require costly dense annotations in the 3D domain. In this work, we explore a central 3D scene modeling task, namely, semantic scene reconstruction without using any 3D annotations. The key idea of our approach is to design a trainable model that employs both incomplete 3D reconstructions and their corresponding source RGB-D images, fusing cross-domain features into volumetric embeddings to predict complete 3D geometry, color, and semantics with only 2D labeling, which can be either manual or machine-generated. Our key technical innovation is to leverage differentiable rendering of color and semantics to bridge 2D observations and unknown 3D space, using the observed RGB images and 2D semantics as supervision, respectively. We additionally develop a learning pipeline and corresponding method to enable learning from imperfect predicted 2D labels, which can additionally be acquired by synthesizing an augmented set of virtual training views complementing the original real captures, enabling a more efficient self-supervision loop for semantics. As a result, our end-to-end trainable solution jointly addresses geometry completion, colorization, and semantic mapping from limited RGB-D images, without relying on any 3D ground-truth information. Our method achieves state-of-the-art performance for semantic scene completion on two large-scale benchmark datasets, MatterPort3D and ScanNet, surpassing even baselines with costly 3D annotations in predicting both geometry and semantics. To our knowledge, our method is also the first 2D-driven method addressing completion and semantic segmentation of real-world 3D scans simultaneously. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 344,414 |
2103.16103 | Local Collaborative Autoencoders | Top-N recommendation is a challenging problem because complex and sparse user-item interactions should be adequately addressed to achieve high-quality recommendation results. The local latent factor approach has been successfully used with multiple local models to capture diverse user preferences with different sub-communities. However, previous studies have not fully explored the potential of local models, and failed to identify many small and coherent sub-communities. In this paper, we present Local Collaborative Autoencoders (LOCA), a generalized local latent factor framework. Specifically, LOCA adopts different neighborhood ranges at the training and inference stages. Besides, LOCA uses a novel sub-community discovery method, maximizing the coverage of a union of local models and employing a large number of diverse local models. By adopting autoencoders as the base model, LOCA captures latent non-linear patterns representing meaningful user-item interactions within sub-communities. Our experimental results demonstrate that LOCA is scalable and outperforms state-of-the-art models on several public benchmarks, by 2.99~4.70% in Recall and 1.02~7.95% in NDCG, respectively. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 227,459 |
2107.03011 | Visual Odometry with an Event Camera Using Continuous Ray Warping and Volumetric Contrast Maximization | We present a new solution to tracking and mapping with an event camera. The motion of the camera contains both rotation and translation, and the displacements happen in an arbitrarily structured environment. As a result, the image matching may no longer be represented by a low-dimensional homographic warping, thus complicating an application of the commonly used Image of Warped Events (IWE). We introduce a new solution to this problem by performing contrast maximization in 3D. The 3D location of the rays cast for each event is smoothly varied as a function of a continuous-time motion parametrization, and the optimal parameters are found by maximizing the contrast in a volumetric ray density field. Our method thus performs joint optimization over motion and structure. The practical validity of our approach is supported by an application to AGV motion estimation and 3D reconstruction with a single vehicle-mounted event camera. The method approaches the performance obtained with regular cameras, and eventually outperforms it in challenging visual conditions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 245,020 |
1902.10173 | Online Learning with Continuous Ranked Probability Score | Probabilistic forecasts in the form of probability distributions over future events have become popular in several fields of statistical science. The dissimilarity between a probability forecast and an outcome is measured by a loss function (scoring rule). A popular example of a scoring rule for continuous outcomes is the continuous ranked probability score (CRPS). We consider the case where several competing methods produce online predictions in the form of probability distribution functions. In this paper, the problem of combining probabilistic forecasts is considered in the prediction with expert advice framework. We show that CRPS is a mixable loss function, so that a time-independent upper bound on the regret of Vovk's aggregating algorithm using CRPS as a loss function can be obtained. We present the results of numerical experiments illustrating the proposed methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,599 |
2204.09607 | On the Practical Design of Tube-Enhanced Multi-Stage Nonlinear Model Predictive Control | Tube-enhanced multi-stage nonlinear model predictive control is a robust control scheme that can handle a wide range of uncertainties with reduced conservatism and manageable computational complexity. In this paper, we elaborate on the flexibility of the approach from an application point of view. We discuss the path to making design decisions to implement the novel scheme systematically. We illustrate the critical steps in the design and implementation of the scheme for an industrial example. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 292,499 |
1806.05759 | Insights on representational similarity in neural networks with canonical correlation | Comparing different neural network representations and determining how representations evolve over time remain challenging open questions in our understanding of the function of neural networks. Comparing representations in neural networks is fundamentally difficult as the structure of representations varies greatly, even across groups of networks trained on identical tasks, and over the course of training. Here, we develop projection weighted CCA (Canonical Correlation Analysis) as a tool for understanding neural networks, building off of SVCCA, a recently proposed method (Raghu et al., 2017). We first improve the core method, showing how to differentiate between signal and noise, and then apply this technique to compare across a group of CNNs, demonstrating that networks which generalize converge to more similar representations than networks which memorize, that wider networks converge to more similar solutions than narrow networks, and that trained networks with identical topology but different learning rates converge to distinct clusters with diverse representations. We also investigate the representational dynamics of RNNs, across both training and sequential timesteps, finding that RNNs converge in a bottom-up pattern over the course of training and that the hidden state is highly variable over the course of a sequence, even when accounting for linear transforms. Together, these results provide new insights into the function of CNNs and RNNs, and demonstrate the utility of using CCA to understand representations. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 100,548 |
2205.06359 | Deep Learning for Prawn Farming: Forecasting and Anomaly Detection | We present a decision support system for managing water quality in prawn ponds. The system uses various sources of data and deep learning models in a novel way to provide 24-hour forecasting and anomaly detection of water quality parameters. It provides prawn farmers with tools to proactively avoid a poor growing environment, thereby optimising growth and reducing the risk of losing stock. This is a major shift for farmers who are forced to manage ponds by reactively correcting poor water quality conditions. To our knowledge, we are the first to apply Transformer as an anomaly detection model, and the first to apply anomaly detection in general to this aquaculture problem. Our technical contributions include adapting ForecastNet for multivariate data and adapting Transformer and the Attention model to incorporate weather forecast data into their decoders. We attain an average mean absolute percentage error of 12% for dissolved oxygen forecasts and we demonstrate two anomaly detection case studies. The system is successfully running in its second year of deployment on a commercial prawn farm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 296,215 |
2103.04751 | A Memory Optimized Data Structure for Binary Chromosomes in Genetic Algorithm | This paper presents a memory-optimized metadata-based data structure for implementation of binary chromosome in Genetic Algorithm. In GA different types of genotypes are used depending on the problem domain. Among these, binary genotype is the most popular one for non-enumerated encoding owing to its representational and computational simplicity. This paper proposes a memory-optimized implementation approach of binary genotype. The approach improves the memory utilization as well as capacity of retaining alleles. Mathematical proof has been provided to establish the same. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 223,752 |
2107.11333 | Robust Adaptive Submodular Maximization | The goal of a sequential decision making problem is to design an interactive policy that adaptively selects a group of items, each selection is based on the feedback from the past, in order to maximize the expected utility of selected items. It has been shown that the utility functions of many real-world applications are adaptive submodular. However, most existing studies on adaptive submodular optimization focus on the average-case. Unfortunately, a policy that has a good average-case performance may have very poor performance under the worst-case realization. In this study, we propose to study two variants of adaptive submodular optimization problems, namely, worst-case adaptive submodular maximization and robust submodular maximization. The first problem aims to find a policy that maximizes the worst-case utility and the latter one aims to find a policy, if any, that achieves both near optimal average-case utility and worst-case utility simultaneously. We introduce a new class of stochastic functions, called \emph{worst-case submodular function}. For the worst-case adaptive submodular maximization problem subject to a $p$-system constraint, we develop an adaptive worst-case greedy policy that achieves a $\frac{1}{p+1}$ approximation ratio against the optimal worst-case utility if the utility function is worst-case submodular. For the robust adaptive submodular maximization problem subject to cardinality constraints (resp. partition matroid constraints), if the utility function is both worst-case submodular and adaptive submodular, we develop a hybrid adaptive policy that achieves an approximation close to $1-e^{-\frac{1}{2}}$ (resp. $1/3$) under both worst- and average-case settings simultaneously. We also describe several applications of our theoretical results, including pool-based active learning, stochastic submodular set cover and adaptive viral marketing. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 247,564 |
1510.06141 | Multiple-Input Multiple-Output OFDM with Index Modulation | Orthogonal frequency division multiplexing with index modulation (OFDM-IM) is a novel multicarrier transmission technique which has been proposed as an alternative to classical OFDM. The main idea of OFDM-IM is the use of the indices of the active subcarriers in an OFDM system as an additional source of information. In this work, we propose multiple-input multiple-output OFDM-IM (MIMO-OFDM-IM) scheme by combining OFDM-IM and MIMO transmission techniques. The low complexity transceiver structure of the MIMO-OFDM-IM scheme is developed and it is shown via computer simulations that the proposed MIMO-OFDM-IM scheme achieves significantly better error performance than classical MIMO-OFDM for several different system configurations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,089 |
2410.15849 | Focus Where It Matters: Graph Selective State Focused Attention Networks | Traditional graph neural networks (GNNs) lack scalability and lose individual node characteristics due to over-smoothing, especially in the case of deeper networks. This results in sub-optimal feature representation, affecting the model's performance on tasks involving dynamically changing graphs. To address this issue, we present Graph Selective States Focused Attention Networks (GSANs) based neural network architecture for graph-structured data. The GSAN is enabled by multi-head masked self-attention (MHMSA) and selective state space modeling (S3M) layers to overcome the limitations of GNNs. In GSAN, the MHMSA allows GSAN to dynamically emphasize crucial node connections, particularly in evolving graph environments. The S3M layer enables the network to adjust dynamically in changing node states and improving predictions of node behavior in varying contexts without needing primary knowledge of the graph structure. Furthermore, the S3M layer enhances the generalization of unseen structures and interprets how node states influence link importance. With this, GSAN effectively outperforms inductive and transductive tasks and overcomes the issues that traditional GNNs experience. To analyze the performance behavior of GSAN, a set of state-of-the-art comparative experiments are conducted on graphs benchmark datasets, including $Cora$, $Citeseer$, $Pubmed$ network citation, and $protein-protein-interaction$ datasets, as an outcome, GSAN improved the classification accuracy by $1.56\%$, $8.94\%$, $0.37\%$, and $1.54\%$ on $F1-score$ respectively. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,760 |
2310.01946 | CoralVOS: Dataset and Benchmark for Coral Video Segmentation | Coral reefs formulate the most valuable and productive marine ecosystems, providing habitat for many marine species. Coral reef surveying and analysis are currently confined to coral experts who invest substantial effort in generating comprehensive and dependable reports (\emph{e.g.}, coral coverage, population, spatial distribution, \textit{etc}), from the collected survey data. However, performing dense coral analysis based on manual efforts is significantly time-consuming, so the existing coral analysis algorithms compromise and opt for performing down-sampling and only conducting sparse point-based coral analysis within selected frames. However, such down-sampling will \textbf{inevitably} introduce estimation bias or even lead to wrong results. To address this issue, we propose to perform \textbf{dense coral video segmentation}, with no down-sampling involved. Through video object segmentation, we could generate more \textit{reliable} and \textit{in-depth} coral analysis than the existing coral reef analysis algorithms. To boost such dense coral analysis, we propose a large-scale coral video segmentation dataset: \textbf{CoralVOS} as demonstrated in Fig. 1. To the best of our knowledge, our CoralVOS is the first dataset and benchmark supporting dense coral video segmentation. We perform experiments on our CoralVOS dataset, including 6 recent state-of-the-art video object segmentation (VOS) algorithms. We fine-tuned these VOS algorithms on our CoralVOS dataset and achieved observable performance improvement. The results show that there is still great potential for further promoting the segmentation accuracy. The dataset and trained models will be released with the acceptance of this work to foster the coral reef research community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 396,647 |
2408.15637 | Transfer Learning from Simulated to Real Scenes for Monocular 3D Object Detection | Accurately detecting 3D objects from monocular images in dynamic roadside scenarios remains a challenging problem due to varying camera perspectives and unpredictable scene conditions. This paper introduces a two-stage training strategy to address these challenges. Our approach initially trains a model on the large-scale synthetic dataset, RoadSense3D, which offers a diverse range of scenarios for robust feature learning. Subsequently, we fine-tune the model on a combination of real-world datasets to enhance its adaptability to practical conditions. Experimental results of the Cube R-CNN model on challenging public benchmarks show a remarkable improvement in detection performance, with a mean average precision rising from 0.26 to 12.76 on the TUM Traffic A9 Highway dataset and from 2.09 to 6.60 on the DAIR-V2X-I dataset when performing transfer learning. Code, data, and qualitative video results are available on the project website: https://roadsense3d.github.io. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 484,016 |
2409.08260 | Improving Text-guided Object Inpainting with Semantic Pre-inpainting | Recent years have witnessed the success of large text-to-image diffusion models and their remarkable potential to generate high-quality images. The further pursuit of enhancing the editability of images has sparked significant interest in the downstream task of inpainting a novel object described by a text prompt within a designated region in the image. Nevertheless, the problem is not trivial from two aspects: 1) Solely relying on one single U-Net to align text prompt and visual object across all the denoising timesteps is insufficient to generate desired objects; 2) The controllability of object generation is not guaranteed in the intricate sampling space of diffusion model. In this paper, we propose to decompose the typical single-stage object inpainting into two cascaded processes: 1) semantic pre-inpainting that infers the semantic features of desired objects in a multi-modal feature space; 2) high-fidelity object generation in diffusion latent space that pivots on such inpainted semantic features. To achieve this, we cascade a Transformer-based semantic inpainter and an object inpainting diffusion model, leading to a novel CAscaded Transformer-Diffusion (CAT-Diffusion) framework for text-guided object inpainting. Technically, the semantic inpainter is trained to predict the semantic features of the target object conditioning on unmasked context and text prompt. The outputs of the semantic inpainter then act as the informative visual prompts to guide high-fidelity object generation through a reference adapter layer, leading to controllable object inpainting. Extensive evaluations on OpenImages-V6 and MSCOCO validate the superiority of CAT-Diffusion against the state-of-the-art methods. Code is available at \url{https://github.com/Nnn-s/CATdiffusion}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 487,833 |
2406.08854 | Current applications and potential future directions of reinforcement learning-based Digital Twins in agriculture | Digital Twins have gained attention in various industries for simulation, monitoring, and decision-making, relying on ever-improving machine learning models. However, agricultural Digital Twin implementations are limited compared to other industries. Meanwhile, machine learning, particularly reinforcement learning, has shown potential in agricultural applications like optimizing decision-making, task automation, and resource management. A key aspect of Digital Twins is representing physical assets or systems in a virtual environment, which aligns well with reinforcement learning's need for environment representations to learn the best policy for a task. Reinforcement learning in agriculture can thus enable various Digital Twin applications in agricultural domains. This review aims to categorize existing research employing reinforcement learning in agricultural settings by application domains like robotics, greenhouse management, irrigation systems, and crop management, identifying potential future areas for reinforcement learning-based Digital Twins. It also categorizes the reinforcement learning techniques used, including tabular methods, Deep Q-Networks (DQN), Policy Gradient methods, and Actor-Critic algorithms, to overview currently employed models. The review seeks to provide insights into the state-of-the-art in integrating Digital Twins and reinforcement learning in agriculture, identifying gaps and opportunities for future research, and exploring synergies to tackle agricultural challenges and optimize farming, paving the way for more efficient and sustainable farming methodologies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 463,661 |
2211.15377 | Whose Emotion Matters? Speaking Activity Localisation without Prior Knowledge | The task of emotion recognition in conversations (ERC) benefits from the availability of multiple modalities, as provided, for example, in the video-based Multimodal EmotionLines Dataset (MELD). However, only a few research approaches use both acoustic and visual information from the MELD videos. There are two reasons for this: First, label-to-video alignments in MELD are noisy, making those videos an unreliable source of emotional speech data. Second, conversations can involve several people in the same scene, which requires the localisation of the utterance source. In this paper, we introduce MELD with Fixed Audiovisual Information via Realignment (MELD-FAIR). By using recent active speaker detection and automatic speech recognition models, we are able to realign the videos of MELD and capture the facial expressions from speakers in 96.92% of the utterances provided in MELD. Experiments with a self-supervised voice recognition model indicate that the realigned MELD-FAIR videos more closely match the transcribed utterances given in the MELD dataset. Finally, we devise a model for emotion recognition in conversations trained on the realigned MELD-FAIR videos, which outperforms state-of-the-art models for ERC based on vision alone. This indicates that localising the source of speaking activities is indeed effective for extracting facial expressions from the uttering speakers and that faces provide more informative visual cues than the visual features state-of-the-art models have been using so far. The MELD-FAIR realignment data, and the code of the realignment procedure and of the emotional recognition, are available at https://github.com/knowledgetechnologyuhh/MELD-FAIR. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 333,231 |
2110.12633 | Age and Gender Prediction using Deep CNNs and Transfer Learning | The last decade or two has witnessed a boom of images. With the increasing ubiquity of cameras and with the advent of selfies, the number of facial images available in the world has skyrocketed. Consequently, there has been a growing interest in automatic age and gender prediction of a person using facial images. We in this paper focus on this challenging problem. Specifically, this paper focuses on age estimation, age classification and gender classification from still facial images of an individual. We train different models for each problem and we also draw comparisons between building a custom CNN (Convolutional Neural Network) architecture and using various CNN architectures as feature extractors, namely VGG16 pre-trained on VGGFace, Res-Net50 and SE-ResNet50 pre-trained on VGGFace2 dataset and training over those extracted features. We also provide baseline performance of various machine learning algorithms on the feature extraction which gave us the best results. It was observed that even simple linear regression trained on such extracted features outperformed training CNN, ResNet50 and ResNeXt50 from scratch for age estimation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 262,916 |
2406.03766 | Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks | We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network, where communications between nodes can fail intermittently. We adopt a semi-decentralized setup, wherein to mitigate the impact of intermittently connected links, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server. In such a setting, the communications between any pair of nodes must ensure that the privacy of the nodes is rigorously maintained to prevent unauthorized information leakage. We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes and, subsequently, propose PriCER: Private Collaborative Estimation via Relaying -- a differentially private collaborative algorithm for mean estimation to optimize this tradeoff. The privacy guarantees of PriCER arise (i) implicitly, by exploiting the inherent stochasticity of the flaky network connections, and (ii) explicitly, by adding Gaussian perturbations to the estimates exchanged by the nodes. Local and central privacy guarantees are provided against eavesdroppers who can observe different signals, such as the communications amongst nodes during local consensus and (possibly multiple) transmissions from the relays to the central server. We substantiate our theoretical findings with numerical simulations. Our implementation is available at https://github.com/rajarshisaha95/private-collaborative-relaying. | false | false | false | false | false | false | true | false | false | true | true | false | false | false | false | false | false | true | 461,382 |
2007.07524 | Learning with Privileged Information for Efficient Image Super-Resolution | Convolutional neural networks (CNNs) have allowed remarkable advances in single image super-resolution (SISR) over the last decade. Most SR methods based on CNNs have focused on achieving performance gains in terms of quality metrics, such as PSNR and SSIM, over classical approaches. They typically require a large amount of memory and computational units. FSRCNN, consisting of few numbers of convolutional layers, has shown promising results, while using an extremely small number of network parameters. We introduce in this paper a novel distillation framework, consisting of teacher and student networks, that allows to boost the performance of FSRCNN drastically. To this end, we propose to use ground-truth high-resolution (HR) images as privileged information. The encoder in the teacher learns the degradation process, subsampling of HR images, using an imitation loss. The student and the decoder in the teacher, having the same network architecture as FSRCNN, try to reconstruct HR images. Intermediate features in the decoder, affordable for the student to learn, are transferred to the student through feature distillation. Experimental results on standard benchmarks demonstrate the effectiveness and the generalization ability of our framework, which significantly boosts the performance of FSRCNN as well as other SR methods. Our code and model are available online: https://cvlab.yonsei.ac.kr/projects/PISR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 187,363 |
1901.11332 | Optimization of the Area Under the ROC Curve using Neural Network Supervectors for Text-Dependent Speaker Verification | This paper explores two techniques to improve the performance of text-dependent speaker verification systems based on deep neural networks. Firstly, we propose a general alignment mechanism to keep the temporal structure of each phrase and obtain a supervector with the speaker and phrase information, since both are relevant for a text-dependent verification. As we show, it is possible to use different alignment techniques to replace the global average pooling providing significant gains in performance. Moreover, we also present a novel back-end approach to train a neural network for detection tasks by optimizing the Area Under the Curve (AUC) as an alternative to the usual triplet loss function, so the system is end-to-end, with a cost function close to our desired measure of performance. As we can see in the experimental section, this approach improves the system performance, since our triplet neural network based on an approximation of the AUC (aAUC) learns how to discriminate between pairs of examples from the same identity and pairs of different identities. The different alignment techniques to produce supervectors in addition to the new back-end approach were tested on the RSR2015-Part I database for text-dependent speaker verification, providing competitive results compared to similar size networks using the global average pooling to extract supervectors and using a simple back-end or triplet loss training. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,220 |
1204.3972 | EigenGP: Sparse Gaussian process models with data-dependent eigenfunctions | Gaussian processes (GPs) provide a nonparametric representation of functions. However, classical GP inference suffers from high computational cost and it is difficult to design nonstationary GP priors in practice. In this paper, we propose a sparse Gaussian process model, EigenGP, based on the Karhunen-Loeve (KL) expansion of a GP prior. We use the Nystrom approximation to obtain data dependent eigenfunctions and select these eigenfunctions by evidence maximization. This selection reduces the number of eigenfunctions in our model and provides a nonstationary covariance function. To handle nonlinear likelihoods, we develop an efficient expectation propagation (EP) inference algorithm, and couple it with expectation maximization for eigenfunction selection. Because the eigenfunctions of a Gaussian kernel are associated with clusters of samples - including both the labeled and unlabeled - selecting relevant eigenfunctions enables EigenGP to conduct semi-supervised learning. Our experimental results demonstrate improved predictive performance of EigenGP over alternative state-of-the-art sparse GP and semisupervised learning methods for regression, classification, and semisupervised classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 15,553 |
2404.06107 | Exploring the Necessity of Visual Modality in Multimodal Machine Translation using Authentic Datasets | Recent research in the field of multimodal machine translation (MMT) has indicated that the visual modality is either dispensable or offers only marginal advantages. However, most of these conclusions are drawn from the analysis of experimental results based on a limited set of bilingual sentence-image pairs, such as Multi30k. In these kinds of datasets, the content of one bilingual parallel sentence pair must be well represented by a manually annotated image, which is different from the real-world translation scenario. In this work, we adhere to the universal multimodal machine translation framework proposed by Tang et al. (2022). This approach allows us to delve into the impact of the visual modality on translation efficacy by leveraging real-world translation datasets. Through a comprehensive exploration via probing tasks, we find that the visual modality proves advantageous for the majority of authentic translation datasets. Notably, the translation performance primarily hinges on the alignment and coherence between textual and visual contents. Furthermore, our results suggest that visual information serves a supplementary role in multimodal translation and can be substituted. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 445,329 |
2309.06896 | Domain-Aware Augmentations for Unsupervised Online General Continual Learning | Continual Learning has been challenging, especially when dealing with unsupervised scenarios such as Unsupervised Online General Continual Learning (UOGCL), where the learning agent has no prior knowledge of class boundaries or task change information. While previous research has focused on reducing forgetting in supervised setups, recent studies have shown that self-supervised learners are more resilient to forgetting. This paper proposes a novel approach that enhances memory usage for contrastive learning in UOGCL by defining and using stream-dependent data augmentations together with some implementation tricks. Our proposed method is simple yet effective, achieves state-of-the-art results compared to other unsupervised approaches in all considered setups, and reduces the gap between supervised and unsupervised continual learning. Our domain-aware augmentation procedure can be adapted to other replay-based methods, making it a promising strategy for continual learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 391,584 |
2104.14468 | Star DGT: a Robust Gabor Transform for Speech Denoising | In this paper, we address the speech denoising problem, where Gaussian, pink and blue additive noises are to be removed from a given speech signal. Our approach is based on a redundant, analysis-sparse representation of the original speech signal. We pick an eigenvector of the Zauner unitary matrix and -- under certain assumptions on the ambient dimension -- we use it as window vector to generate a spark deficient Gabor frame. The analysis operator associated with such a frame, is a (highly) redundant Gabor transform, which we use as a sparsifying transform in denoising procedure. We conduct computational experiments on real-world speech data, using as baseline three Gabor transforms generated by state-of-the-art window vectors in time-frequency analysis and compare their performance to the proposed Gabor transform. The results show that our proposed redundant Gabor transform outperforms all others, consistently for all signals. | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 232,835 |
2206.06346 | Bringing Image Scene Structure to Video via Frame-Clip Consistency of Object Tokens | Recent action recognition models have achieved impressive results by integrating objects, their locations and interactions. However, obtaining dense structured annotations for each frame is tedious and time-consuming, making these methods expensive to train and less scalable. At the same time, if a small set of annotated images is available, either within or outside the domain of interest, how could we leverage these for a video downstream task? We propose a learning framework StructureViT (SViT for short), which demonstrates how utilizing the structure of a small number of images only available during training can improve a video model. SViT relies on two key insights. First, as both images and videos contain structured information, we enrich a transformer model with a set of \emph{object tokens} that can be used across images and videos. Second, the scene representations of individual frames in video should "align" with those of still images. This is achieved via a \emph{Frame-Clip Consistency} loss, which ensures the flow of structured information between images and videos. We explore a particular instantiation of scene structure, namely a \emph{Hand-Object Graph}, consisting of hands and objects with their locations as nodes, and physical relations of contact/no-contact as edges. SViT shows strong performance improvements on multiple video understanding tasks and datasets. Furthermore, it won in the Ego4D CVPR'22 Object State Localization challenge. For code and pretrained models, visit the project page at \url{https://eladb3.github.io/SViT/} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 302,343 |
2111.05548 | Deep Attention-guided Graph Clustering with Dual Self-supervision | Existing deep embedding clustering works only consider the deepest layer to learn a feature embedding and thus fail to well utilize the available discriminative information from cluster assignments, resulting in performance limitations. To this end, we propose a novel method, namely deep attention-guided graph clustering with dual self-supervision (DAGC). Specifically, DAGC first utilizes a heterogeneity-wise fusion module to adaptively integrate the features of an auto-encoder and a graph convolutional network in each layer and then uses a scale-wise fusion module to dynamically concatenate the multi-scale features in different layers. Such modules are capable of learning a discriminative feature embedding via an attention-based mechanism. In addition, we design a distribution-wise fusion module that leverages cluster assignments to acquire clustering results directly. To better explore the discriminative information from the cluster assignments, we develop a dual self-supervision solution consisting of a soft self-supervision strategy with a triplet Kullback-Leibler divergence loss and a hard self-supervision strategy with a pseudo supervision loss. Extensive experiments validate that our method consistently outperforms state-of-the-art methods on six benchmark datasets. Especially, our method improves the ARI by more than 18.14% over the best baseline. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 265,830 |
1810.06425 | Modelling of temporal fluctuation scaling in online news network with independent cascade model | We show that activity of online news outlets follows a temporal fluctuation scaling law and we recover this feature using an independent cascade model augmented with a varying hype parameter representing a viral potential of an original article. We use the Event Registry platform to track activity of over 10,000 news outlets in 11 different topics in the course of the year 2016. Analyzing over 22,000,000 articles, we found that fluctuation scaling exponents $\alpha$ depend on time window size $\Delta$ in a characteristic way for all the considered topics -- news outlets activities are partially synchronized for $\Delta>15\mathrm{min}$ with a cross-over for $\Delta=1\mathrm{day}$. The proposed model was run on several synthetic network models as well as on a network extracted from the real data. Our approach discards timestamps as not fully reliable observables and focuses on co-occurrences of publishers in cascades of similarly phrased news items. We make use of the Event Registry news clustering feature to find correlations between content published by news outlets in order to uncover common information propagation paths in published articles and to estimate weights of edges in the independent cascade model. While the independent cascade model follows the fluctuation scaling law with a trivial exponent $\alpha=0.5$, we argue that besides the topology of the underlying cooperation network a temporal clustering of articles with similar hypes is necessary to qualitatively reproduce the fluctuation scaling observed in the data. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 110,433 |
2410.17394 | packetLSTM: Dynamic LSTM Framework for Streaming Data with Varying Feature Space | We study the online learning problem characterized by the varying input feature space of streaming data. Although LSTMs have been employed to effectively capture the temporal nature of streaming data, they cannot handle the dimension-varying streams in an online learning setting. Therefore, we propose a dynamic LSTM-based novel method, called packetLSTM, to model the dimension-varying streams. The packetLSTM's dynamic framework consists of an evolving packet of LSTMs, each dedicated to processing one input feature. Each LSTM retains the local information of its corresponding feature, while a shared common memory consolidates global information. This configuration facilitates continuous learning and mitigates the issue of forgetting, even when certain features are absent for extended time periods. The idea of utilizing one LSTM per feature coupled with a dimension-invariant operator for information aggregation enhances the dynamic nature of packetLSTM. This dynamic nature is evidenced by the model's ability to activate, deactivate, and add new LSTMs as required, thus seamlessly accommodating varying input dimensions. The packetLSTM achieves state-of-the-art results on five datasets, and its underlying principle is extended to other RNN types, like GRU and vanilla RNN. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 501,438
1911.12587 | FairPrep: Promoting Data to a First-Class Citizen in Studies on Fairness-Enhancing Interventions | The importance of incorporating ethics and legal compliance into machine-assisted decision-making is broadly recognized. Further, several lines of recent work have argued that critical opportunities for improving data quality and representativeness, controlling for bias, and allowing humans to oversee and impact computational processes are missed if we do not consider the lifecycle stages upstream from model training and deployment. Yet, very little has been done to date to provide system-level support to data scientists who wish to develop and deploy responsible machine learning methods. We aim to fill this gap and present FairPrep, a design and evaluation framework for fairness-enhancing interventions. FairPrep is based on a developer-centered design, and helps data scientists follow best practices in software engineering and machine learning. As part of our contribution, we identify shortcomings in existing empirical studies for analyzing fairness-enhancing interventions. We then show how FairPrep can be used to measure the impact of sound best practices, such as hyperparameter tuning and feature scaling. In particular, our results suggest that the high variability of the outcomes of fairness-enhancing interventions observed in previous studies is often an artifact of a lack of hyperparameter tuning. Further, we show that the choice of a data cleaning method can impact the effectiveness of fairness-enhancing interventions. | false | false | false | false | false | false | true | false | false | false | false | false | false | true | false | false | true | false | 155,443
2301.04381 | Determinate Node Selection for Semi-supervised Classification Oriented Graph Convolutional Networks | Graph Convolutional Networks (GCNs) have been proved successful in the field of semi-supervised node classification by extracting structural information from graph data. However, the random selection of labeled nodes used by GCNs may lead to unstable generalization performance of GCNs. In this paper, we propose an efficient method for the deterministic selection of labeled nodes: the Determinate Node Selection (DNS) algorithm. The DNS algorithm identifies two categories of representative nodes in the graph: typical nodes and divergent nodes. These labeled nodes are selected by exploring the structure of the graph and determining the ability of the nodes to represent the distribution of data within the graph. The DNS algorithm can be applied quite simply on a wide range of semi-supervised graph neural network models for node classification tasks. Through extensive experimentation, we have demonstrated that the incorporation of the DNS algorithm leads to a remarkable improvement in the average accuracy of the model and a significant decrease in the standard deviation, as compared to the original method. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 340,044
2310.20205 | The differential properties of certain permutation polynomials over finite fields | Finding functions, particularly permutations, with good differential properties has received a lot of attention due to their varied applications. For instance, in combinatorial design theory, a correspondence of perfect $c$-nonlinear functions and difference sets in some quasigroups was recently shown by Anbar et al. (J. Comb. Des. 31(12):1-24, 2023). Additionally, in a recent manuscript by Pal et al. (Adv. Math. Communications, to appear), a very interesting connection between the $c$-differential uniformity and boomerang uniformity, when $c=-1$, was pointed out, showing that they are the same for an odd APN permutation, sparking yet more interest in the construction of functions with low $c$-differential uniformity. We investigate the $c$-differential uniformity of some classes of permutation polynomials. As a result, we add four more classes of permutation polynomials to the family of functions that only contains a few (non-trivial) perfect $c$-nonlinear functions over finite fields of even characteristic. Moreover, we include a class of permutation polynomials with low $c$-differential uniformity over the field of characteristic~$3$. To solve the involved equations over finite fields, we use various number theoretical techniques, in particular, we find explicitly many Walsh transform coefficients and Weil sums that may be of an independent interest. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 404,300
2211.10763 | PIDray: A Large-scale X-ray Benchmark for Real-World Prohibited Item Detection | Automatic security inspection relying on computer vision technology is a challenging task in real-world scenarios due to many factors, such as intra-class variance, class imbalance, and occlusion. Most previous methods rarely touch the cases where the prohibited items are deliberately hidden in messy objects because of the scarcity of large-scale datasets, hindering their applications. To address this issue and facilitate related research, we present a large-scale dataset, named PIDray, which covers various cases in real-world scenarios for prohibited item detection, especially for deliberately hidden items. In particular, PIDray collects 124,486 X-ray images for $12$ categories of prohibited items, and each image is manually annotated with careful inspection, which makes it, to the best of our knowledge, the largest prohibited item detection dataset to date. Meanwhile, we propose a general divide-and-conquer pipeline to develop baseline algorithms on PIDray. Specifically, we adopt the tree-like structure to suppress the influence of the long-tailed issue in the PIDray dataset, where the first coarse-grained node is tasked with the binary classification to alleviate the influence of head category, while the subsequent fine-grained node is dedicated to the specific tasks of the tail categories. Based on this simple yet effective scheme, we offer strong task-specific baselines across object detection, instance segmentation, and multi-label classification tasks and verify the generalization ability on common datasets (e.g., COCO and PASCAL VOC). Extensive experiments on PIDray demonstrate that the proposed method performs favorably against current state-of-the-art methods, especially for deliberately hidden items. Our benchmark and codes will be released at https://github.com/lutao2021/PIDray. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 331,428
2312.16603 | Beyond the Surface: Advanced Wash Trading Detection in Decentralized NFT Markets | Wash trading in decentralized markets remains a significant concern magnified by the pseudonymous and public nature of blockchains. In this paper we introduce an innovative methodology designed to detect wash trading activities beyond surface-level transactions. Our approach integrates NFT ownership traces with the Ethereum Transaction Network, encompassing the complete historical record of all Ethereum account normal transactions. By analyzing both networks, our method offers a notable advancement over techniques proposed by existing research. We analyzed the wash trading activity of 7 notable NFT collections. Our results show that wash trading in unregulated NFT markets is an underestimated concern and is much more widespread both in terms of frequency as well as volume. Excluding the Meebits collection, which emerged as an outlier, we found that wash trading constituted up to 25% of the total trading volume. Specifically, for the Meebits collection, a staggering 93% of its total trade volume was attributed to wash trading. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 418,446
2105.04458 | Learning Robust Latent Representations for Controllable Speech Synthesis | State-of-the-art Variational Auto-Encoders (VAEs) for learning disentangled latent representations give impressive results in discovering features like pitch, pause duration, and accent in speech data, leading to highly controllable text-to-speech (TTS) synthesis. However, these LSTM-based VAEs fail to learn latent clusters of speaker attributes when trained on either limited or noisy datasets. Further, different latent variables start encoding the same features, limiting the control and expressiveness during speech synthesis. To resolve these issues, we propose RTI-VAE (Reordered Transformer with Information reduction VAE) where we minimize the mutual information between different latent variables and devise a modified Transformer architecture with layer reordering to learn controllable latent representations in speech data. We show that RTI-VAE reduces the cluster overlap of speaker attributes by at least 30\% over LSTM-VAE and by at least 7\% over vanilla Transformer-VAE. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 234,514 |
2308.01207 | BiERL: A Meta Evolutionary Reinforcement Learning Framework via Bilevel Optimization | Evolutionary reinforcement learning (ERL) algorithms have recently attracted attention for tackling complex reinforcement learning (RL) problems due to their high parallelism, but they are prone to insufficient exploration or model collapse without carefully tuned hyperparameters (aka meta-parameters). In this paper, we propose a general meta ERL framework via bilevel optimization (BiERL) to jointly update hyperparameters in parallel to training the ERL model within a single agent, which relieves the need for prior domain knowledge or a costly optimization procedure before model deployment. We design an elegant meta-level architecture that embeds the inner-level's evolving experience into an informative population representation and introduce a simple and feasible evaluation of the meta-level fitness function to facilitate learning efficiency. We perform extensive experiments in MuJoCo and Box2D tasks to verify that as a general framework, BiERL outperforms various baselines and consistently improves the learning performance for a diversity of ERL algorithms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | false | false | 383,179
2309.11248 | Box2Poly: Memory-Efficient Polygon Prediction of Arbitrarily Shaped and Rotated Text | Recently, Transformer-based text detection techniques have sought to predict polygons by encoding the coordinates of individual boundary vertices using distinct query features. However, this approach incurs a significant memory overhead and struggles to effectively capture the intricate relationships between vertices belonging to the same instance. Consequently, irregular text layouts often lead to the prediction of outlined vertices, diminishing the quality of results. To address these challenges, we present an innovative approach rooted in Sparse R-CNN: a cascade decoding pipeline for polygon prediction. Our method ensures precision by iteratively refining polygon predictions, considering both the scale and location of preceding results. Leveraging this stabilized regression pipeline, even employing just a single feature vector to guide polygon instance regression yields promising detection results. Simultaneously, the leverage of instance-level feature proposal substantially enhances memory efficiency (>50% less vs. the state-of-the-art method DPText-DETR) and reduces inference speed (>40% less vs. DPText-DETR) with minor performance drop on benchmarks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 393,337
2410.09343 | ELICIT: LLM Augmentation via External In-Context Capability | Enhancing the adaptive capabilities of large language models is a critical pursuit in both research and application. Traditional fine-tuning methods require substantial data and computational resources, especially for enhancing specific capabilities, while in-context learning is limited by the need for appropriate demonstrations and efficient token usage. Inspired by the expression of in-context learned capabilities through task vectors and the concept of modularization, we propose \alg, a framework consisting of two modules designed to effectively store and reuse task vectors to elicit the diverse capabilities of models without additional training or inference tokens. Our comprehensive experiments and analysis demonstrate that our pipeline is highly transferable across different input formats, tasks, and model architectures. ELICIT serves as a plug-and-play performance booster to enable adaptive elicitation of model capabilities. By externally storing and reusing vectors that represent in-context learned capabilities, \alg not only demonstrates the potential to operate modular capabilities but also significantly enhances the performance, versatility, adaptability, and scalability of large language models. Our code will be publicly available at https://github.com/LINs-lab/ELICIT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 497,547 |
2412.17029 | GraphAgent: Agentic Graph Language Assistant | Real-world data is represented in both structured (e.g., graph connections) and unstructured (e.g., textual, visual information) formats, encompassing complex relationships that include explicit links (such as social connections and user behaviors) and implicit interdependencies among semantic entities, often illustrated through knowledge graphs. In this work, we propose GraphAgent, an automated agent pipeline that addresses both explicit graph dependencies and implicit graph-enhanced semantic inter-dependencies, aligning with practical data scenarios for predictive tasks (e.g., node classification) and generative tasks (e.g., text generation). GraphAgent comprises three key components: (i) a Graph Generator Agent that builds knowledge graphs to reflect complex semantic dependencies; (ii) a Task Planning Agent that interprets diverse user queries and formulates corresponding tasks through agentic self-planning; and (iii) a Task Execution Agent that efficiently executes planned tasks while automating tool matching and invocation in response to user queries. These agents collaborate seamlessly, integrating language models with graph language models to uncover intricate relational information and data semantic dependencies. Through extensive experiments on various graph-related predictive and text generative tasks on diverse datasets, we demonstrate the effectiveness of our GraphAgent across various settings. We have made our proposed GraphAgent open-source at: https://github.com/HKUDS/GraphAgent. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 519,801 |
2006.09647 | Regulating algorithmic filtering on social media | By filtering the content that users see, social media platforms have the ability to influence users' perceptions and decisions, from their dining choices to their voting preferences. This influence has drawn scrutiny, with many calling for regulations on filtering algorithms, but designing and enforcing regulations remains challenging. In this work, we examine three questions. First, given a regulation, how would one design an audit to enforce it? Second, does the audit impose a performance cost on the platform? Third, how does the audit affect the content that the platform is incentivized to filter? In response, we propose a method such that, given a regulation, an auditor can test whether that regulation is met with only black-box access to the filtering algorithm. We then turn to the platform's perspective. The platform's goal is to maximize an objective function while meeting regulation. We find that there are conditions under which the regulation does not place a high performance cost on the platform and, notably, that content diversity can play a key role in aligning the interests of the platform and regulators. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 182,612 |
1503.08855 | Decentralized learning for wireless communications and networking | This chapter deals with decentralized learning algorithms for in-network processing of graph-valued data. A generic learning problem is formulated and recast into a separable form, which is iteratively minimized using the alternating-direction method of multipliers (ADMM) so as to gain the desired degree of parallelization. Without exchanging elements from the distributed training sets and keeping inter-node communications at affordable levels, the local (per-node) learners consent to the desired quantity inferred globally, meaning the one obtained if the entire training data set were centrally available. Impact of the decentralized learning framework to contemporary wireless communications and networking tasks is illustrated through case studies including target tracking using wireless sensor networks, unveiling Internet traffic anomalies, power system state estimation, as well as spectrum cartography for wireless cognitive radio networks. | false | false | false | false | false | false | true | false | false | true | true | false | false | false | true | false | false | false | 41,629 |
2407.17413 | $A^*$ for Graphs of Convex Sets | We present a novel algorithm that fuses the existing convex-programming based approach with heuristic information to find optimality guarantees and near-optimal paths for the Shortest Path Problem in the Graph of Convex Sets (SPP-GCS). Our method, inspired by $A^*$, initiates a best-first-like procedure from a designated subset of vertices and iteratively expands it until further growth is neither possible nor beneficial. Traditionally, obtaining solutions with bounds for an optimization problem involves solving a relaxation, modifying the relaxed solution to a feasible one, and then comparing the two solutions to establish bounds. However, for SPP-GCS, we demonstrate that reversing this process can be more advantageous, especially with Euclidean travel costs. In other words, we initially employ $A^*$ to find a feasible solution for SPP-GCS, then solve a convex relaxation restricted to the vertices explored by $A^*$ to obtain a relaxed solution, and finally, compare the solutions to derive bounds. We present numerical results to highlight the advantages of our algorithm over the existing approach in terms of the sizes of the convex programs solved and computation time. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 475,959 |
2303.11434 | ResDTA: Predicting Drug-Target Binding Affinity Using Residual Skip Connections | The discovery of novel drug target (DT) interactions is an important step in the drug development process. The majority of computational techniques for predicting DT interactions have focused on binary classification, with the goal of determining whether or not a DT pair interacts. Protein ligand interactions, on the other hand, assume a continuous range of binding strength values, also known as binding affinity, and forecasting this value remains a challenge. As the amount of affinity data in DT knowledge-bases grows, advanced learning techniques such as deep learning architectures can be used to predict binding affinities. In this paper, we present a deep-learning-based methodology for predicting DT binding affinities using just sequencing information from both targets and drugs. The results show that the proposed deep learning-based model that uses the 1D representations of targets and drugs is an effective approach for drug target binding affinity prediction and it does not require additional chemical domain knowledge to work with. The model in which high-level representations of a drug and a target are constructed via CNNs that use residual skip connections and also with an additional stream to create a high-level combined representation of the drug-target pair achieved the best Concordance Index (CI) performance in one of the largest benchmark datasets, outperforming the recent state-of-the-art method AttentionDTA and many other machine-learning and deep-learning based baseline methods for DT binding affinity prediction that use the 1D representations of targets and drugs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 352,846
2007.04848 | Guru, Partner, or Pencil Sharpener? Understanding Designers' Attitudes Towards Intelligent Creativity Support Tools | Creativity Support Tools (CST) aim to enhance human creativity, but the deeply personal and subjective nature of creativity makes the design of universal support tools challenging. Individuals develop personal approaches to creativity, particularly in the context of commercial design where signature styles and techniques are valuable commodities. Artificial Intelligence (AI) and Machine Learning (ML) techniques could provide a means of creating 'intelligent' CST which learn and adapt to personal styles of creativity. Identifying what kind of role such tools could play in the design process requires a better understanding of designers' attitudes towards working with AI, and their willingness to include it in their personal creative process. This paper details the results of a survey of professional designers which indicates a positive and pragmatic attitude towards collaborating with AI tools, and a particular opportunity for incorporating them in the research stages of a design project. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 186,492
2106.09435 | Multi-Agent Training beyond Zero-Sum with Correlated Equilibrium Meta-Solvers | Two-player, constant-sum games are well studied in the literature, but there has been limited progress outside of this setting. We propose Joint Policy-Space Response Oracles (JPSRO), an algorithm for training agents in n-player, general-sum extensive form games, which provably converges to an equilibrium. We further suggest correlated equilibria (CE) as promising meta-solvers, and propose a novel solution concept Maximum Gini Correlated Equilibrium (MGCE), a principled and computationally efficient family of solutions for solving the correlated equilibrium selection problem. We conduct several experiments using CE meta-solvers for JPSRO and demonstrate convergence on n-player, general-sum games. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | true | 241,662
2104.07275 | Span Pointer Networks for Non-Autoregressive Task-Oriented Semantic Parsing | An effective recipe for building seq2seq, non-autoregressive, task-oriented parsers to map utterances to semantic frames proceeds in three steps: encoding an utterance $x$, predicting a frame's length |y|, and decoding a |y|-sized frame with utterance and ontology tokens. Though empirically strong, these models are typically bottlenecked by length prediction, as even small inaccuracies change the syntactic and semantic characteristics of resulting frames. In our work, we propose span pointer networks, non-autoregressive parsers which shift the decoding task from text generation to span prediction; that is, when imputing utterance spans into frame slots, our model produces endpoints (e.g., [i, j]) as opposed to text (e.g., "6pm"). This natural quantization of the output space reduces the variability of gold frames, therefore improving length prediction and, ultimately, exact match. Furthermore, length prediction is now responsible for frame syntax and the decoder is responsible for frame semantics, resulting in a coarse-to-fine model. We evaluate our approach on several task-oriented semantic parsing datasets. Notably, we bridge the quality gap between non-autoregressive and autoregressive parsers, achieving 87 EM on TOPv2 (Chen et al. 2020). Furthermore, due to our more consistent gold frames, we show strong improvements in model generalization in both cross-domain and cross-lingual transfer in low-resource settings. Finally, due to our diminished output vocabulary, we observe 70% reduction in latency and 83% reduction in memory at beam size 5 compared to prior non-autoregressive parsers. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 230,362
2111.06070 | Explainable Sentence-Level Sentiment Analysis for Amazon Product Reviews | In this paper, we conduct a sentence level sentiment analysis on the product reviews from Amazon and thorough analysis on the model interpretability. For the sentiment analysis task, we use the BiLSTM model with attention mechanism. For the study of interpretability, we consider the attention weights distribution of single sentence and the attention weights of main aspect terms. The model has an accuracy of up to 0.96. And we find that the aspect terms have the same or even more attention weights than the sentimental words in sentences. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 265,980 |
1705.01325 | On-The-Fly Secure Key Generation with Deterministic Models | It is well-known that wireless channel reciprocity together with fading can be exploited to generate a common secret key between two legitimate communication partners. This can be achieved by exchanging known deterministic pilot signals between both partners from which the random fading gains can be estimated and processed. However, the entropy and thus quality of the generated key depends on the channel coherence time. This can result in poor key generation rates in a low mobility environment, where the fading gains are nearly constant. Therefore, wide-spread deployment of wireless channel-based secret key generation is limited. To overcome these issues, we follow up on a recent idea which uses unknown random pilots and enables "on-the-fly" key generation. In addition, the scheme is able to incorporate local sources of randomness but performance bounds are hard to obtain with standard methods. In this paper, we analyse such a scheme analytically and derive achievable key rates in the Alice-Bob-Eve setting. For this purpose, we develop a novel approximation model which is inspired by the linear deterministic and the lower triangular deterministic model. Using this model, we can derive key rates for specific scenarios. We claim that our novel approach provides an intuitive and clear framework to analyse similar key generation problems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 72,827 |
2410.15618 | Erasing Undesirable Concepts in Diffusion Models with Adversarial Preservation | Diffusion models excel at generating visually striking content from text but can inadvertently produce undesirable or harmful content when trained on unfiltered internet data. A practical solution is to selectively remove target concepts from the model, but this may impact the remaining concepts. Prior approaches have tried to balance this by introducing a loss term to preserve neutral content or a regularization term to minimize changes in the model parameters, yet resolving this trade-off remains challenging. In this work, we propose to identify and preserve the concepts most affected by parameter changes, termed as \textit{adversarial concepts}. This approach ensures stable erasure with minimal impact on the other concepts. We demonstrate the effectiveness of our method using the Stable Diffusion model, showing that it outperforms state-of-the-art erasure methods in eliminating unwanted content while maintaining the integrity of other unrelated elements. Our code is available at \url{https://github.com/tuananhbui89/Erasing-Adversarial-Preservation}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 500,640
1407.5719 | Artificial Life and the Web: WebAL Comes of Age | A brief survey is presented of the first 18 years of web-based Artificial Life ("WebAL") research and applications, covering the period 1995-2013. The survey is followed by a short discussion of common methodologies employed and current technologies relevant to WebAL research. The paper concludes with a quick look at what the future may hold for work in this exciting area. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | 34,799 |
1909.10924 | Understanding Semantics from Speech Through Pre-training | End-to-end Spoken Language Understanding (SLU) is proposed to infer the semantic meaning directly from audio features without intermediate text representation. Although the acoustic model component of an end-to-end SLU system can be pre-trained with Automatic Speech Recognition (ASR) targets, the SLU component can only learn semantic features from limited task-specific training data. In this paper, for the first time we propose to do large-scale unsupervised pre-training for the SLU component of an end-to-end SLU system, so that the SLU component may preserve semantic features from massive unlabeled audio data. As the output of the acoustic model component, i.e. phoneme posterior sequences, has very different characteristics from text sequences, we propose a novel pre-training model called BERT-PLM, which stands for Bidirectional Encoder Representations from Transformers through Permutation Language Modeling. BERT-PLM trains the SLU component on unlabeled data through a regression objective equivalent to the partial permutation language modeling objective, while leveraging full bi-directional context information with BERT networks. The experiment results show that our approach outperforms the state-of-the-art end-to-end systems with over 12.5% error reduction. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 146,671
1805.05270 | Iterative Bounded Distance Decoding of Product Codes with Scaled Reliability | We propose a modified iterative bounded distance decoding of product codes. The proposed algorithm is based on exchanging hard messages iteratively and exploiting channel reliabilities to make hard decisions at each iteration. Performance improvements up to 0.26 dB are achieved. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 97,398
2206.08185 | UAVs Beneath the Surface: Cooperative Autonomy for Subterranean Search and Rescue in DARPA SubT | This paper presents a novel approach for autonomous cooperating UAVs in search and rescue operations in subterranean domains with complex topology. The proposed system was ranked second in the Virtual Track of the DARPA SubT Finals as part of the team CTU-CRAS-NORLAB. In contrast to the winning solution that was developed specifically for the Virtual Track, the proposed solution also proved to be a robust system for deployment onboard physical UAVs flying in the extremely harsh and confined environment of the real-world competition. The proposed approach enables fully autonomous and decentralized deployment of a UAV team with seamless simulation-to-world transfer, and proves its advantage over less mobile UGV teams in the flyable space of diverse environments. The main contributions of the paper are present in the mapping and navigation pipelines. The mapping approach employs novel map representations -- SphereMap for efficient risk-aware long-distance planning, FacetMap for surface coverage, and the compressed topological-volumetric LTVMap for allowing multi-robot cooperation under low-bandwidth communication. These representations are used in navigation together with novel methods for visibility-constrained informed search in a general 3D environment with no assumptions about the environment structure, while balancing deep exploration with sensor-coverage exploitation. The proposed solution also includes a visual-perception pipeline for on-board detection and localization of objects of interest in four RGB streams at 5 Hz each without a dedicated GPU. Apart from participation in the DARPA SubT, the performance of the UAV system is supported by extensive experimental verification in diverse environments with both qualitative and quantitative evaluation. | false | false | false | false | true | false | false | true | false | false | true | false | false | false | true | false | false | false | 303,034
1212.1187 | Compressed Sensing Recoverability In Imaging Modalities | The paper introduces a framework for the recoverability analysis in compressive sensing for imaging applications such as CI cameras, rapid MRI and coded apertures. This is done using the fact that the Spherical Section Property (SSP) of a sensing matrix provides a lower bound for unique sparse recovery condition. The lower bound is evaluated for different sampling paradigms adopted from the aforementioned imaging modalities. In particular, a platform is provided to analyze the well-posedness of sub-sampling patterns commonly used in practical scenarios. The effectiveness of the various designed patterns for sparse image recovery is studied through numerical experiments. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,155 |
2001.00762 | Deep Unsupervised Common Representation Learning for LiDAR and Camera Data using Double Siamese Networks | Domain gaps of sensor modalities pose a challenge for the design of autonomous robots. Taking a step towards closing this gap, we propose two unsupervised training frameworks for finding a common representation of LiDAR and camera data. The first method utilizes a double Siamese training structure to ensure consistency in the results. The second method uses a Canny edge image guiding the networks towards a desired representation. All networks are trained in an unsupervised manner, leaving room for scalability. The results are evaluated using common computer vision applications, and the limitations of the proposed approaches are outlined. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 159,319
2311.11334 | Using Causal Threads to Explain Changes in a Dynamic System | We explore developing rich semantic models of systems. Specifically, we consider structured causal explanations about state changes in those systems. Essentially, we are developing process-based dynamic knowledge graphs. As an example, we construct a model of the causal threads for geological changes proposed by the Snowball Earth theory. Further, we describe an early prototype of a graphical interface to present the explanations. Unlike statistical approaches to summarization and explanation such as Large Language Models (LLMs), our approach of direct representation can be inspected and verified directly. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 408,909 |
2410.16941 | Business Process Simulation: Probabilistic Modeling of Intermittent Resource Availability and Multitasking Behavior | In business process simulation, resource availability is typically modeled by assigning a calendar to each resource, e.g., Monday-Friday, 9:00-18:00. Resources are assumed to be always available during each time slot in their availability calendar. This assumption often becomes invalid due to interruptions, breaks, or time-sharing across processes. In other words, existing approaches fail to capture intermittent availability. Another limitation of existing approaches is that they either do not consider multitasking behavior, or if they do, they assume that resources always multitask (up to a maximum capacity) whenever available. However, studies have shown that the multitasking patterns vary across days. This paper introduces a probabilistic approach to model resource availability and multitasking behavior for business process simulation. In this approach, each time slot in a resource calendar has an associated availability probability and a multitasking probability per multitasking level. For example, a resource may be available on Fridays between 14:00-15:00 with 90% probability, and given that they are performing one task during this slot, they may take on a second concurrent task with 60% probability. We propose algorithms to discover probabilistic calendars and probabilistic multitasking capacities from event logs. An evaluation shows that, with these enhancements, simulation models discovered from event logs better replicate the distribution of activities and cycle times, relative to approaches with crisp calendars and monotasking assumptions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 501,253
2102.09490 | Mean-Based Trace Reconstruction over Oblivious Synchronization Channels | Mean-based reconstruction is a fundamental, natural approach to worst-case trace reconstruction over channels with synchronization errors. It is known that $\exp(O(n^{1/3}))$ traces are necessary and sufficient for mean-based worst-case trace reconstruction over the deletion channel, and this result was also extended to certain channels combining deletions and geometric insertions of uniformly random bits. In this work, we use a simple extension of the original complex-analytic approach to show that these results are examples of a much more general phenomenon. We introduce oblivious synchronization channels, which map each input bit to an arbitrarily distributed sequence of replications and insertions of random bits. This general class captures all previously considered synchronization channels. We show that for any oblivious synchronization channel whose output length follows a sub-exponential distribution either mean-based trace reconstruction is impossible or $\exp(O(n^{1/3}))$ traces suffice for this task. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 220,792 |
2208.00330 | Convex duality for stochastic shortest path problems in known and unknown environments | This paper studies Stochastic Shortest Path (SSP) problems in known and unknown environments from the perspective of convex optimisation. It first recalls results in the known parameter case, and develops understanding through different proofs. It then focuses on the unknown parameter case, where it studies extended value iteration (EVI) operators. This includes the existing operators used in Rosenberg et al. [26] and Tarbouriech et al. [31] based on the l-1 norm and supremum norm, as well as defining EVI operators corresponding to other norms and divergences, such as the KL-divergence. This paper shows in general how the EVI operators relate to convex programs, and the form of their dual, where strong duality is exhibited. This paper then focuses on whether the bounds from finite horizon research of Neu and Pike-Burke [21] can be applied to these extended value iteration operators in the SSP setting. It shows that similar bounds to [21] for these operators exist, however they lead to operators that are not in general monotone and have more complex convergence properties. In a special case we observe oscillating behaviour. This paper generates open questions on how research may progress, with several examples that require further examination. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 310,818
2402.15487 | RoboEXP: Action-Conditioned Scene Graph via Interactive Exploration for Robotic Manipulation | We introduce the novel task of interactive scene exploration, wherein robots autonomously explore environments and produce an action-conditioned scene graph (ACSG) that captures the structure of the underlying environment. The ACSG accounts for both low-level information (geometry and semantics) and high-level information (action-conditioned relationships between different entities) in the scene. To this end, we present the Robotic Exploration (RoboEXP) system, which incorporates the Large Multimodal Model (LMM) and an explicit memory design to enhance our system's capabilities. The robot reasons about what and how to explore an object, accumulating new information through the interaction process and incrementally constructing the ACSG. Leveraging the constructed ACSG, we illustrate the effectiveness and efficiency of our RoboEXP system in facilitating a wide range of real-world manipulation tasks involving rigid, articulated objects, nested objects, and deformable objects. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 432,159
2109.06583 | A Semantic Indexing Structure for Image Retrieval | In large-scale image retrieval, many indexing methods have been proposed to narrow down the searching scope of retrieval. The features extracted from images usually are of high dimensions or unfixed sizes due to the existence of key points. Most of existing index structures suffer from the dimension curse, the unfixed feature size and/or the loss of semantic similarity. In this paper a new classification-based indexing structure, called Semantic Indexing Structure (SIS), is proposed, in which we utilize the semantic categories rather than clustering centers to create database partitions, such that the proposed index SIS can be combined with feature extractors without the restriction of dimensions. Besides, it is observed that the size of each semantic partition is positively correlated with the semantic distribution of database. Based on this observation, we found that when the partition number is normalized to five, the proposed algorithm performed very well in all the tests. Compared with state-of-the-art models, SIS achieves outstanding performance. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 255,203
2112.11691 | Comprehensive Visual Question Answering on Point Clouds through Compositional Scene Manipulation | Visual Question Answering on 3D Point Cloud (VQA-3D) is an emerging yet challenging field that aims at answering various types of textual questions given an entire point cloud scene. To tackle this problem, we propose the CLEVR3D, a large-scale VQA-3D dataset consisting of 171K questions from 8,771 3D scenes. Specifically, we develop a question engine leveraging 3D scene graph structures to generate diverse reasoning questions, covering the questions of objects' attributes (i.e., size, color, and material) and their spatial relationships. Through such a manner, we initially generated 44K questions from 1,333 real-world scenes. Moreover, a more challenging setup is proposed to remove the confounding bias and adjust the context from a common-sense layout. Such a setup requires the network to achieve comprehensive visual understanding when the 3D scene is different from the general co-occurrence context (e.g., chairs always exist with tables). To this end, we further introduce the compositional scene manipulation strategy and generate 127K questions from 7,438 augmented 3D scenes, which can improve VQA-3D models for real-world comprehension. Built upon the proposed dataset, we baseline several VQA-3D models, where experimental results verify that the CLEVR3D can significantly boost other 3D scene understanding tasks. Our code and dataset will be made publicly available at https://github.com/yanx27/CLEVR3D. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 272,773
2203.05095 | Model-Architecture Co-Design for High Performance Temporal GNN Inference on FPGA | Temporal Graph Neural Networks (TGNNs) are powerful models to capture temporal, structural, and contextual information on temporal graphs. The generated temporal node embeddings outperform other methods in many downstream tasks. Real-world applications require high performance inference on real-time streaming dynamic graphs. However, these models usually rely on complex attention mechanisms to capture relationships between temporal neighbors. In addition, maintaining vertex memory suffers from intrinsic temporal data dependency that hinders task-level parallelism, making it inefficient on general-purpose processors. In this work, we present a novel model-architecture co-design for inference in memory-based TGNNs on FPGAs. The key modeling optimizations we propose include a light-weight method to compute attention scores and a related temporal neighbor pruning strategy to further reduce computation and memory accesses. These are holistically coupled with key hardware optimizations that leverage FPGA hardware. We replace the temporal sampler with an on-chip FIFO based hardware sampler and the time encoder with a look-up-table. We train our simplified models using knowledge distillation to ensure similar accuracy vis-à-vis the original model. Taking advantage of the model optimizations, we propose a principled hardware architecture using batching, pipelining, and prefetching techniques to further improve the performance. We also propose a hardware mechanism to ensure the chronological vertex updating without sacrificing the computation parallelism. We evaluate the performance of the proposed hardware accelerator on three real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 284,699
2311.08770 | A proof-of-concept online metadata catalogue service of Earth observation datasets for human health research in exposomics | This article describes research carried out during 2023 under an International Society for Photogrammetry and Remote Sensing (ISPRS)-funded project to develop and disseminate a metadata catalogue of Earth observation data sources/products and types that are relevant to human health research in exposomics, as a free service to interested researchers worldwide. The proof-of-concept catalogue was informed by input from existing research literature on the subject (desk research), as well as online communications with, and relevant research publications collected from, a small panel (n = 5) of select experts from the academia in three countries (China, UK and USA). It has 90 metadata records of relevant Earth observation datasets (n = 40) and associated health-focused research publications (n = 50). The project's online portal offers a searchable version of the catalogue featuring a number of search modes and filtering options. It is hoped future, more comprehensive versions of this service will enable more researchers and studies to discover and use remote sensing data about population-level exposures to disease determinants (exposomic determinants of disease) in combination with other relevant data to reveal fresh insights that could improve our understanding of relevant diseases, and hence contribute to the development of better-optimized prevention and management plans to tackle them. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 407,863
2209.10866 | A One-shot Framework for Distributed Clustered Learning in Heterogeneous Environments | The paper proposes a family of communication efficient methods for distributed learning in heterogeneous environments in which users obtain data from one of $K$ different distributions. In the proposed setup, the grouping of users (based on the data distributions they sample), as well as the underlying statistical properties of the distributions, are apriori unknown. A family of One-shot Distributed Clustered Learning methods (ODCL-$\mathcal{C}$) is proposed, parametrized by the set of admissible clustering algorithms $\mathcal{C}$, with the objective of learning the true model at each user. The admissible clustering methods include $K$-means (KM) and convex clustering (CC), giving rise to various one-shot methods within the proposed family, such as ODCL-KM and ODCL-CC. The proposed one-shot approach, based on local computations at the users and a clustering based aggregation step at the server is shown to provide strong learning guarantees. In particular, for strongly convex problems it is shown that, as long as the number of data points per user is above a threshold, the proposed approach achieves order-optimal mean-squared error (MSE) rates in terms of the sample size. An explicit characterization of the threshold is provided in terms of problem parameters. The trade-offs with respect to selecting various clustering methods (ODCL-CC, ODCL-KM) are discussed and significant improvements over state-of-the-art are demonstrated. Numerical experiments illustrate the findings and corroborate the performance of the proposed methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,997
0903.4386 | Error-and-Erasure Decoding for Block Codes with Feedback | Inner and outer bounds are derived on the optimal performance of fixed length block codes on discrete memoryless channels with feedback and errors-and-erasures decoding. First an inner bound is derived using a two phase encoding scheme with communication and control phases together with the optimal decoding rule for the given encoding scheme, among decoding rules that can be represented in terms of pairwise comparisons between the messages. Then an outer bound is derived using a generalization of the straight-line bound to errors-and-erasures decoders and the optimal error exponent trade off of a feedback encoder with two messages. In addition, upper and lower bounds are derived for the optimal erasure exponent of error-free block codes in terms of the rate. Finally we present a proof of the fact that the optimal trade off between error exponents of a two message code does not increase with feedback on DMCs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,412
1901.09156 | Human Pose Estimation using Motion Priors and Ensemble Models | Human pose estimation in images and videos is one of key technologies for realizing a variety of human activity recognition tasks (e.g., human-computer interaction, gesture recognition, surveillance, and video summarization). This paper presents two types of human pose estimation methodologies; 1) 3D human pose tracking using motion priors and 2) 2D human pose estimation with ensemble modeling. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 119,665 |
2012.04218 | Evaluating Explainable Methods for Predictive Process Analytics: A Functionally-Grounded Approach | Predictive process analytics focuses on predicting the future states of running instances of a business process. While advanced machine learning techniques have been used to increase accuracy of predictions, the resulting predictive models lack transparency. Current explainable machine learning methods, such as LIME and SHAP, can be used to interpret black box models. However, it is unclear how fit for purpose these methods are in explaining process predictive models. In this paper, we draw on evaluation measures used in the field of explainable AI and propose functionally-grounded evaluation metrics for assessing explainable methods in predictive process analytics. We apply the proposed metrics to evaluate the performance of LIME and SHAP in interpreting process predictive models built on XGBoost, which has been shown to be relatively accurate in process predictions. We conduct the evaluation using three open source, real-world event logs and analyse the evaluation results to derive insights. The research contributes to understanding the trustworthiness of explainable methods for predictive process analytics as a fundamental and key step towards human user-oriented evaluation. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,384
2203.01722 | A Remark on Evolution Equation of Stochastic Logical Dynamic Systems | Modelling is an essential procedure in analyzing and controlling a given logical dynamic system (LDS). It has been proved that deterministic LDS can be modeled as a linear-like system using algebraic state space representation. However, due to the inherent non-linearity, it is difficult to obtain the algebraic expression of a stochastic LDS. This paper provides a unified framework for transition analysis of LDSs with deterministic and stochastic dynamics. First, modelling of LDS with deterministic dynamics is reviewed. Then modeling of LDS with stochastic dynamics is considered, and non-equivalence between subsystems and the global system is proposed. Next, the reason for the non-equivalence is provided. Finally, a consistency condition is presented for the independent model and the conditionally independent model. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 283,492
2004.08762 | RelSen: An Optimization-based Framework for Simultaneously Sensor Reliability Monitoring and Data Cleaning | Recent advances in the Internet of Things (IoT) technology have led to a surge on the popularity of sensing applications. As a result, people increasingly rely on information obtained from sensors to make decisions in their daily life. Unfortunately, in most sensing applications, sensors are known to be error-prone and their measurements can become misleading at any unexpected time. Therefore, in order to enhance the reliability of sensing applications, apart from the physical phenomena/processes of interest, we believe it is also highly important to monitor the reliability of sensors and clean the sensor data before analysis on them being conducted. Existing studies often regard sensor reliability monitoring and sensor data cleaning as separate problems. In this work, we propose RelSen, a novel optimization-based framework to address the two problems simultaneously via utilizing the mutual dependence between them. Furthermore, RelSen is not application-specific as its implementation assumes a minimal prior knowledge of the process dynamics under monitoring. This significantly improves its generality and applicability in practice. In our experiments, we apply RelSen on an outdoor air pollution monitoring system and a condition monitoring system for a cement rotary kiln. Experimental results show that our framework can timely identify unreliable sensors and remove sensor measurement errors caused by three types of most commonly observed sensor faults. | false | false | false | false | true | true | false | false | false | false | false | false | true | false | false | false | false | true | 173,161
2111.05004 | Model Predictive Control for a Medium-head Hydropower Plant Hybridized with Battery Energy Storage to Reduce Penstock Fatigue | A hybrid hydropower power plant is a conventional HydroPower Plant (HPP) augmented with a Battery Energy Storage System (BESS) to decrease the wear and tear of sensitive mechanical components and improve the reliability and regulation performance of the overall plant. A central task of controlling hybrid power plants is determining how the total power set-point should be split between the BESS and the hybridized unit (power set-point splitting) as a function of the operational objectives. This paper describes a Model Predictive Control (MPC) framework for hybrid medium- and high-head plants to determine the power set-point of the hydropower unit and the BESS. The splitting policy relies on an explicit formulation of the mechanical loads incurred by the HPP's penstock, which can be damaged due to fatigue when providing regulation services to the grid. By filtering out from the HPP's power set-point the components conducive to excess penstock fatigue and properly controlling the BESS, the proposed MPC is able to maintain the same level of regulation performance while significantly decreasing damages to the hydraulic conduits. A proof-of-concept by simulations is provided considering a 230 MW medium-head hydropower plant. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 265,672
2407.19517 | Evaluating LLMs for Text-to-SQL Generation With Complex SQL Workload | This study presents a comparative analysis of a complex SQL benchmark, TPC-DS, with two existing text-to-SQL benchmarks, BIRD and Spider. Our findings reveal that TPC-DS queries exhibit a significantly higher level of structural complexity compared to the other two benchmarks. This underscores the need for more intricate benchmarks to simulate realistic scenarios effectively. To facilitate this comparison, we devised several measures of structural complexity and applied them across all three benchmarks. The results of this study can guide future research in the development of more sophisticated text-to-SQL benchmarks. We utilized 11 distinct Large Language Models (LLMs) to generate SQL queries based on the query descriptions provided by the TPC-DS benchmark. The prompt engineering process incorporated both the query description as outlined in the TPC-DS specification and the database schema of TPC-DS. Our findings indicate that the current state-of-the-art generative AI models fall short in generating accurate decision-making queries. We conducted a comparison of the generated queries with the TPC-DS gold standard queries using a series of fuzzy structure matching techniques based on query features. The results demonstrated that the accuracy of the generated queries is insufficient for practical real-world application. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 476,820
2003.10316 | Extended Existence Probability Using Digital Maps for Object Verification | A main task for automated vehicles is an accurate and robust environment perception. Especially, an error-free detection and modeling of other traffic participants is of great importance to drive safely in any situation. For this purpose, multi-object tracking algorithms, based on object detections from raw sensor measurements, are commonly used. However, false object hypotheses can occur due to a high density of different traffic participants in complex, arbitrary scenarios. For this reason, the presented approach introduces a probabilistic model to verify the existence of a tracked object. Therefore, an object verification module is introduced, where the influences of multiple digital map elements on a track's existence are evaluated. Finally, a probabilistic model fuses the various influences and estimates an extended existence probability for every track. In addition, a Bayes Net is implemented as directed graphical model to highlight this work's expandability. The presented approach, reduces the number of false positives, while retaining true positives. Real world data is used to evaluate and to highlight the benefits of the presented approach, especially in urban scenarios. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 169,302
2307.00658 | Accelerating Relational Database Analytical Processing with Bulk-Bitwise Processing-in-Memory | Online Analytical Processing (OLAP) for relational databases is a business decision support application. The application receives queries about the business database, usually requesting to summarize many database records, and produces few results. Existing OLAP requires transferring a large amount of data between the memory and the CPU, having a few operations per datum, and producing a small output. Hence, OLAP is a good candidate for processing-in-memory (PIM), where computation is performed where the data is stored, thus accelerating applications by reducing data movement between the memory and CPU. In particular, bulk-bitwise PIM, where the memory array is a bit-vector processing unit, seems a good match for OLAP. With the extensive inherent parallelism and minimal data movement of bulk-bitwise PIM, OLAP applications can process the entire database in parallel in memory, transferring only the results to the CPU. This paper shows a full stack adaptation of a bulk-bitwise PIM, from compiling SQL to hardware implementation, for supporting OLAP applications. Evaluating the Star Schema Benchmark (SSB), bulk-bitwise PIM achieves a 4.65X speedup over Monet-DB, a standard database system. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 377,081
1912.09913 | Hierarchical Character Embeddings: Learning Phonological and Semantic Representations in Languages of Logographic Origin using Recursive Neural Networks | Logographs (Chinese characters) have recursive structures (i.e. hierarchies of sub-units in logographs) that contain phonological and semantic information, as developmental psychology literature suggests that native speakers leverage on the structures to learn how to read. Exploiting these structures could potentially lead to better embeddings that can benefit many downstream tasks. We propose building hierarchical logograph (character) embeddings from logograph recursive structures using treeLSTM, a recursive neural network. Using recursive neural network imposes a prior on the mapping from logographs to embeddings since the network must read in the sub-units in logographs according to the order specified by the recursive structures. Based on human behavior in language learning and reading, we hypothesize that modeling logographs' structures using recursive neural network should be beneficial. To verify this claim, we consider two tasks (1) predicting logographs' Cantonese pronunciation from logographic structures and (2) language modeling. Empirical results show that the proposed hierarchical embeddings outperform baseline approaches. Diagnostic analysis suggests that hierarchical embeddings constructed using treeLSTM is less sensitive to distractors, thus is more robust, especially on complex logographs. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 158,179
2408.01838 | Tracking Emotional Dynamics in Chat Conversations: A Hybrid Approach using DistilBERT and Emoji Sentiment Analysis | Computer-mediated communication has become more important than face-to-face communication in many contexts. Tracking emotional dynamics in chat conversations can enhance communication, improve services, and support well-being in various contexts. This paper explores a hybrid approach to tracking emotional dynamics in chat conversations by combining DistilBERT-based text emotion detection and emoji sentiment analysis. A Twitter dataset was analyzed using various machine learning algorithms, including SVM, Random Forest, and AdaBoost. We contrasted their performance with DistilBERT. Results reveal DistilBERT's superior performance in emotion recognition. Our approach accounts for emotive expressions conveyed through emojis to better understand participants' emotions during chats. We demonstrate how this approach can effectively capture and analyze emotional shifts in real-time conversations. Our findings show that integrating text and emoji analysis is an effective way of tracking chat emotion, with possible applications in customer service, work chats, and social media interactions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 478,387 |
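The text-plus-emoji hybrid described in the row above can be sketched in a few lines. This is an illustrative toy only, not the paper's pipeline: the emoji lookup table, the scores in it, and the 0.7/0.3 weighting are assumptions invented for the example, and in a real system the text score would come from a trained DistilBERT classifier rather than being passed in directly.

```python
# Toy sketch: blend a model-produced text-emotion score with emoji sentiment.
# EMOJI_SENTIMENT values and the default weighting are illustrative assumptions,
# not taken from the paper.
EMOJI_SENTIMENT = {"😀": 1.0, "😢": -1.0, "😡": -0.8}

def hybrid_sentiment(text_score, message, w_text=0.7):
    """Weighted average of the text score and the mean emoji sentiment."""
    emojis = [ch for ch in message if ch in EMOJI_SENTIMENT]
    if not emojis:
        return text_score  # no emojis: fall back to the text model alone
    emoji_score = sum(EMOJI_SENTIMENT[e] for e in emojis) / len(emojis)
    return w_text * text_score + (1.0 - w_text) * emoji_score
```

Called per message, this lets emoji cues pull the final score when they disagree with the text model, which is the core of the hybrid idea.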
2103.05661 | On complementing end-to-end human behavior predictors with planning | High capacity end-to-end approaches for human motion (behavior) prediction have the ability to represent subtle nuances in human behavior, but struggle with robustness to out of distribution inputs and tail events. Planning-based prediction, on the other hand, can reliably output decent-but-not-great predictions: it is much more stable in the face of distribution shift (as we verify in this work), but it has high inductive bias, missing important aspects that drive human decisions, and ignoring cognitive biases that make human behavior suboptimal. In this work, we analyze one family of approaches that strive to get the best of both worlds: use the end-to-end predictor on common cases, but do not rely on it for tail events / out-of-distribution inputs -- switch to the planning-based predictor there. We contribute an analysis of different approaches for detecting when to make this switch, using an autonomous driving domain. We find that promising approaches based on ensembling or generative modeling of the training distribution might not be reliable, but that there are very simple methods which can perform surprisingly well -- including training a classifier to pick up on tell-tale issues in predicted trajectories. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 224,043 |
2303.10735 | SKED: Sketch-guided Text-based 3D Editing | Text-to-image diffusion models are gradually introduced into computer graphics, recently enabling the development of Text-to-3D pipelines in an open domain. However, for interactive editing purposes, local manipulations of content through a simplistic textual interface can be arduous. Incorporating user guided sketches with Text-to-image pipelines offers users more intuitive control. Still, as state-of-the-art Text-to-3D pipelines rely on optimizing Neural Radiance Fields (NeRF) through gradients from arbitrary rendering views, conditioning on sketches is not straightforward. In this paper, we present SKED, a technique for editing 3D shapes represented by NeRFs. Our technique utilizes as few as two guiding sketches from different views to alter an existing neural field. The edited region respects the prompt semantics through a pre-trained diffusion model. To ensure the generated output adheres to the provided sketches, we propose novel loss functions to generate the desired edits while preserving the density and radiance of the base instance. We demonstrate the effectiveness of our proposed method through several qualitative and quantitative experiments. https://sked-paper.github.io/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 352,566 |
1904.03897 | "Jam Me If You Can": Defeating Jammer with Deep Dueling Neural Network Architecture and Ambient Backscattering Augmented Communications | With conventional anti-jamming solutions like frequency hopping or spread spectrum, legitimate transceivers often tend to "escape" or "hide" themselves from jammers. These reactive anti-jamming approaches are constrained by the lack of timely knowledge of jamming attacks. Bringing together the latest advances in neural network architectures and ambient backscattering communications, this work allows wireless nodes to effectively "face" the jammer by first learning its jamming strategy, then adapting the rate or transmitting information right on the jamming signal. Specifically, to deal with unknown jamming attacks, existing work often relies on reinforcement learning algorithms, e.g., Q-learning. However, the Q-learning algorithm is notorious for its slow convergence to the optimal policy, especially when the system state and action spaces are large. This makes the Q-learning algorithm pragmatically inapplicable. To overcome this problem, we design a novel deep reinforcement learning algorithm using the recent dueling neural network architecture. Our proposed algorithm allows the transmitter to effectively learn about the jammer and attain the optimal countermeasures thousand times faster than that of the conventional Q-learning algorithm. Through extensive simulation results, we show that our design (using ambient backscattering and the deep dueling neural network architecture) can improve the average throughput by up to 426% and reduce the packet loss by 24%. By augmenting the ambient backscattering capability on devices and using our algorithm, it is interesting to observe that the (successful) transmission rate increases with the jamming power. Our proposed solution can find its applications in both civil (e.g., ultra-reliable and low-latency communications or URLLC) and military scenarios (to combat both inadvertent and deliberate jamming). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 126,873 |
2009.05750 | Multi-Spectral Image Synthesis for Crop/Weed Segmentation in Precision Farming | An effective perception system is a fundamental component for farming robots, as it enables them to properly perceive the surrounding environment and to carry out targeted operations. The most recent methods make use of state-of-the-art machine learning techniques to learn a valid model for the target task. However, those techniques need a large amount of labeled data for training. A recent approach to deal with this issue is data augmentation through Generative Adversarial Networks (GANs), where entire synthetic scenes are added to the training data, thus enlarging and diversifying their informative content. In this work, we propose an alternative solution with respect to the common data augmentation methods, applying it to the fundamental problem of crop/weed segmentation in precision farming. Starting from real images, we create semi-artificial samples by replacing the most relevant object classes (i.e., crop and weeds) with their synthesized counterparts. To do that, we employ a conditional GAN (cGAN), where the generative model is trained by conditioning the shape of the generated object. Moreover, in addition to RGB data, we take into account also near-infrared (NIR) information, generating four channel multi-spectral synthetic images. Quantitative experiments, carried out on three publicly available datasets, show that (i) our model is capable of generating realistic multi-spectral images of plants and (ii) the usage of such synthetic images in the training process improves the segmentation performance of state-of-the-art semantic segmentation convolutional networks. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 195,413 |
2501.11087 | Leveraging counterfactual concepts for debugging and improving CNN model performance | Counterfactual explanation methods have recently received significant attention for explaining CNN-based image classifiers due to their ability to provide easily understandable explanations that align more closely with human reasoning. However, limited attention has been given to utilizing explainability methods to improve model performance. In this paper, we propose to leverage counterfactual concepts aiming to enhance the performance of CNN models in image classification tasks. Our proposed approach utilizes counterfactual reasoning to identify crucial filters used in the decision-making process. Following this, we perform model retraining through the design of a novel methodology and loss functions that encourage the activation of class-relevant important filters and discourage the activation of irrelevant filters for each class. This process effectively minimizes the deviation of activation patterns of local predictions and the global activation patterns of their respective inferred classes. By incorporating counterfactual explanations, we validate unseen model predictions and identify misclassifications. The proposed methodology provides insights into potential weaknesses and biases in the model's learning process, enabling targeted improvements and enhanced performance. Experimental results on publicly available datasets have demonstrated an improvement of 1-2%, validating the effectiveness of the approach. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 525,777 |
1907.04062 | Distributed Integer Balancing under Weight Constraints in the Presence of Transmission Delays and Packet Drops | We consider the distributed weight balancing problem in networks of nodes that are interconnected via directed edges, each of which is able to admit a positive integer weight within a certain interval, captured by individual lower and upper limits. A digraph with positive integer weights on its (directed) edges is weight-balanced if, for each node, the sum of the weights of the incoming edges equals the sum of the weights of the outgoing edges. In this work, we develop a distributed iterative algorithm which solves the integer weight balancing problem in the presence of arbitrary (time-varying and inhomogeneous) time delays that might affect transmissions at particular links. We assume that communication between neighboring nodes is bidirectional, but unreliable since it may be affected from bounded or unbounded delays (packet drops), independently between different links and link directions. We show that, even when communication links are affected from bounded delays or occasional packet drops (but not permanent communication link failures), the proposed distributed algorithm allows the nodes to converge to a set of weight values that solves the integer weight balancing problem, after a finite number of iterations with probability one, as long as the necessary and sufficient circulation conditions on the lower and upper edge weight limits are satisfied. Finally, we provide examples to illustrate the operation and performance of the proposed algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 138,011 |
1611.06987 | Sublabel-Accurate Discretization of Nonconvex Free-Discontinuity Problems | In this work we show how sublabel-accurate multilabeling approaches can be derived by approximating a classical label-continuous convex relaxation of nonconvex free-discontinuity problems. This insight allows to extend these sublabel-accurate approaches from total variation to general convex and nonconvex regularizations. Furthermore, it leads to a systematic approach to the discretization of continuous convex relaxations. We study the relationship to existing discretizations and to discrete-continuous MRFs. Finally, we apply the proposed approach to obtain a sublabel-accurate and convex solution to the vectorial Mumford-Shah functional and show in several experiments that it leads to more precise solutions using fewer labels. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 64,292 |
2412.01168 | On the Surprising Effectiveness of Spectrum Clipping in Learning Stable Linear Dynamics | When learning stable linear dynamical systems from data, three important properties are desirable: i) predictive accuracy, ii) provable stability, and iii) computational efficiency. Unconstrained minimization of reconstruction errors leads to high accuracy and efficiency but cannot guarantee stability. Existing methods to remedy this focus on enforcing stability while also ensuring accuracy, but do so only at the cost of increased computation. In this work, we investigate if a straightforward approach can simultaneously offer all three desiderata of learning stable linear systems. Specifically, we consider a post-hoc approach that manipulates the spectrum of the learned system matrix after it is learned in an unconstrained fashion. We call this approach spectrum clipping (SC) as it involves eigen decomposition and subsequent reconstruction of the system matrix after clipping all of its eigenvalues that are larger than one to one (without altering the eigenvectors). Through detailed experiments involving two different applications and publicly available benchmark datasets, we demonstrate that this simple technique can simultaneously learn highly accurate linear systems that are provably stable. Notably, we demonstrate that SC can achieve similar or better performance than strong baselines while being orders-of-magnitude faster. We also show that SC can be readily combined with Koopman operators to learn stable nonlinear dynamics, such as those underlying complex dexterous manipulation skills involving multi-fingered robotic hands. Further, we find that SC can learn stable robot policies even when the training data includes unsuccessful or truncated demonstrations. Our codes and dataset can be found at https://github.com/GT-STAR-Lab/spec_clip. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 512,987 |
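The spectrum clipping (SC) procedure described in the row above is concrete enough to sketch directly: eigendecompose the learned system matrix, rescale any eigenvalue whose magnitude exceeds one down to unit magnitude, and reconstruct with the original eigenvectors. A minimal NumPy version, assuming the matrix is diagonalizable (this sketch is based only on the abstract, not the authors' released code):

```python
import numpy as np

def spectrum_clip(A):
    """Post-hoc stabilization of a learned linear system matrix A.

    Assumes A is diagonalizable. Complex eigenvalues are rescaled to
    unit magnitude (phase preserved); eigenvectors are left unchanged.
    """
    w, V = np.linalg.eig(A)
    mags = np.abs(w)
    # Clip eigenvalue magnitudes larger than 1 down to 1, keeping phase.
    w_clipped = np.where(mags > 1.0, w / mags, w)
    # Reconstruct: A_stable = V diag(w_clipped) V^{-1}.
    return (V @ np.diag(w_clipped) @ np.linalg.inv(V)).real
```

Because the only cost is one eigendecomposition and one reconstruction, the post-hoc step is cheap relative to constrained-optimization alternatives, which is the efficiency argument the abstract makes.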