id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.03973 | DiscoPrompt: Path Prediction Prompt Tuning for Implicit Discourse Relation Recognition | Implicit Discourse Relation Recognition (IDRR) is a sophisticated and challenging task of recognizing the discourse relations between arguments in the absence of discourse connectives. The sense labels for each discourse relation follow a hierarchical classification scheme in the annotation process (Prasad et al., 2008), forming a hierarchy structure. Most existing works do not incorporate the hierarchy structure well, focusing instead on syntax features and the prior knowledge of connectives in the manner of pure text classification. We argue that it is more effective to predict the paths inside the hierarchical tree (e.g., "Comparison -> Contrast -> however") rather than flat labels (e.g., Contrast) or connectives (e.g., however). We propose a prompt-based path prediction method to utilize the interactive information and intrinsic senses among the hierarchy in IDRR. This is the first work that injects such structure information into pre-trained language models via prompt tuning, and the performance of our solution shows significant and consistent improvement against competitive baselines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 362,587 |
2011.09314 | First-Order Rewritability of Frontier-Guarded Ontology-Mediated Queries | We focus on ontology-mediated queries (OMQs) based on (frontier-)guarded existential rules and (unions of) conjunctive queries, and we investigate the problem of FO-rewritability, i.e., whether an OMQ can be rewritten as a first-order query. We adopt two different approaches. The first approach employs standard two-way alternating parity tree automata. Although it does not lead to a tight complexity bound, it provides a transparent solution based on widely known tools. The second approach relies on a sophisticated automata model, known as cost automata. This allows us to show that our problem is 2ExpTime-complete. In both approaches, we provide semantic characterizations of FO-rewritability that are of independent interest. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 207,149 |
2207.00176 | End-to-end cell recognition by point annotation | Reliable quantitative analysis of immunohistochemical staining images requires accurate and robust cell detection and classification. Recent weakly-supervised methods usually estimate probability density maps for cell recognition. However, in dense cell scenarios, their performance can be limited by pre- and post-processing, as it is impossible to find a universal parameter setting. In this paper, we introduce an end-to-end framework that applies direct regression and classification for preset anchor points. Specifically, we propose a pyramidal feature aggregation strategy to combine low-level features and high-level semantics simultaneously, which provides accurate cell recognition for our purely point-based model. In addition, an optimized cost function is designed to adapt our multi-task learning framework by matching ground truth and predicted points. The experimental results demonstrate the superior accuracy and efficiency of the proposed method, which reveals its high potential in assisting pathologist assessments. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 305,670 |
2109.04390 | 1-Bit MIMO for Terahertz Channels | This paper tackles the problem of single-user multiple-input multiple-output communication with 1-bit digital-to-analog and analog-to-digital converters. With the information-theoretic capacity as benchmark, the complementary strategies of beamforming and equiprobable signaling are contrasted in the regimes of operational interest, and the ensuing spectral efficiencies are characterized. Various canonical channel types are considered, with emphasis on line-of-sight settings under both spherical and planar wavefronts, respectively representative of short and long transmission ranges at mmWave and terahertz frequencies. In all cases, a judicious combination of beamforming and equiprobable signaling is shown to operate within a modest gap from capacity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 254,388 |
cs/0611053 | Capacity of a Class of Deterministic Relay Channels | The capacity of a class of deterministic relay channels with the transmitter input X, the receiver output Y, the relay output Y_1 = f(X, Y), and a separate communication link from the relay to the receiver with capacity R_0, is shown to be C(R_0) = \max_{p(x)} \min \{I(X;Y)+R_0, I(X;Y, Y_1) \}. Thus every bit from the relay is worth exactly one bit to the receiver. Two alternative coding schemes are presented that achieve this capacity. The first scheme, ``hash-and-forward'', is based on a simple yet novel use of random binning on the space of relay outputs, while the second scheme uses the usual ``compress-and-forward''. In fact, these two schemes can be combined together to give a class of optimal coding schemes. As a corollary, this relay capacity result confirms a conjecture by Ahlswede and Han on the capacity of a channel with rate-limited state information at the decoder in the special case when the channel state is recoverable from the channel input and the output. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 539,870 |
2305.14816 | Provable Offline Preference-Based Reinforcement Learning | In this paper, we investigate the problem of offline Preference-based Reinforcement Learning (PbRL) with human feedback where feedback is available in the form of preference between trajectory pairs rather than explicit rewards. Our proposed algorithm consists of two main steps: (1) estimate the implicit reward using Maximum Likelihood Estimation (MLE) with general function approximation from offline data and (2) solve a distributionally robust planning problem over a confidence set around the MLE. We consider the general reward setting where the reward can be defined over the whole trajectory and provide a novel guarantee that allows us to learn any target policy with a polynomial number of samples, as long as the target policy is covered by the offline data. This guarantee is the first of its kind with general function approximation. To measure the coverage of the target policy, we introduce a new single-policy concentrability coefficient, which can be upper bounded by the per-trajectory concentrability coefficient. We also establish lower bounds that highlight the necessity of such concentrability and the difference from standard RL, where state-action-wise rewards are directly observed. We further extend and analyze our algorithm when the feedback is given over action pairs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 367,274 |
2309.07926 | COMPASS: High-Efficiency Deep Image Compression with Arbitrary-scale Spatial Scalability | Recently, neural network (NN)-based image compression studies have been actively conducted and have shown impressive performance in comparison to traditional methods. However, most of the works have focused on non-scalable image compression (single-layer coding), while spatially scalable image compression has drawn less attention although it has many applications. In this paper, we propose a novel NN-based spatially scalable image compression method, called COMPASS, which supports arbitrary-scale spatial scalability. Our proposed COMPASS has a very flexible structure where the number of layers and their respective scale factors can be arbitrarily determined during inference. To reduce the spatial redundancy between adjacent layers for arbitrary scale factors, our COMPASS adopts an inter-layer arbitrary scale prediction method, called LIFF, based on implicit neural representation. We propose a combined RD loss function to effectively train multiple layers. Experimental results show that our COMPASS achieves BD-rate gains of -58.33% and -47.17% at maximum compared to SHVC and the state-of-the-art NN-based spatially scalable image compression method, respectively, for various combinations of scale factors. Our COMPASS also shows comparable or even better coding efficiency than single-layer coding for various scale factors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 391,962 |
0908.4413 | Multiple Retrieval Models and Regression Models for Prior Art Search | This paper presents the system called PATATRAS (PATent and Article Tracking, Retrieval and AnalysiS) realized for the IP track of CLEF 2009. Our approach presents three main characteristics: 1. The usage of multiple retrieval models (KL, Okapi) and term index definitions (lemma, phrase, concept) for the three languages considered in the present track (English, French, German) producing ten different sets of ranked results. 2. The merging of the different results based on multiple regression models using an additional validation set created from the patent collection. 3. The exploitation of patent metadata and of the citation structures for creating restricted initial working sets of patents and for producing a final re-ranking regression model. As we exploit specific metadata of the patent documents and the citation relations only at the creation of initial working sets and during the final post ranking step, our architecture remains generic and easy to extend. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 4,363 |
2212.11138 | QVIP: An ILP-based Formal Verification Approach for Quantized Neural Networks | Deep learning has become a promising programming paradigm in software development, owing to its surprising performance in solving many challenging tasks. Deep neural networks (DNNs) are increasingly being deployed in practice, but are limited on resource-constrained devices owing to their demand for computational power. Quantization has emerged as a promising technique to reduce the size of DNNs with comparable accuracy as their floating-point numbered counterparts. The resulting quantized neural networks (QNNs) can be implemented energy-efficiently. Similar to their floating-point numbered counterparts, quality assurance techniques for QNNs, such as testing and formal verification, are essential but are currently less explored. In this work, we propose a novel and efficient formal verification approach for QNNs. In particular, we are the first to propose an encoding that reduces the verification problem of QNNs into the solving of integer linear constraints, which can be solved using off-the-shelf solvers. Our encoding is both sound and complete. We demonstrate the application of our approach on local robustness verification and maximum robustness radius computation. We implement our approach in a prototype tool QVIP and conduct a thorough evaluation. Experimental results on QNNs with different quantization bits confirm the effectiveness and efficiency of our approach, e.g., two orders of magnitude faster and able to solve more verification tasks in the same time limit than the state-of-the-art methods. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 337,708 |
2112.13366 | AIDA: An Active Inference-based Design Agent for Audio Processing Algorithms | In this paper we present AIDA, which is an active inference-based agent that iteratively designs a personalized audio processing algorithm through situated interactions with a human client. The target application of AIDA is to propose on-the-spot the most interesting alternative values for the tuning parameters of a hearing aid (HA) algorithm, whenever a HA client is not satisfied with their HA performance. AIDA interprets searching for the "most interesting alternative" as an issue of optimal (acoustic) context-aware Bayesian trial design. In computational terms, AIDA is realized as an active inference-based agent with an Expected Free Energy criterion for trial design. This type of architecture is inspired by neuro-economic models on efficient (Bayesian) trial design in brains and implies that AIDA comprises generative probabilistic models for acoustic signals and user responses. We propose a novel generative model for acoustic signals as a sum of time-varying auto-regressive filters and a user response model based on a Gaussian Process Classifier. The full AIDA agent has been implemented in a factor graph for the generative model and all tasks (parameter learning, acoustic context classification, trial design, etc.) are realized by variational message passing on the factor graph. All verification and validation experiments and demonstrations are freely accessible at our GitHub repository. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,220 |
2007.04393 | Adaptive Regret for Control of Time-Varying Dynamics | We consider the problem of online control of systems with time-varying linear dynamics. This is a general formulation that is motivated by the use of local linearization in control of nonlinear dynamical systems. To state meaningful guarantees over changing environments, we introduce the metric of {\it adaptive regret} to the field of control. This metric, originally studied in online learning, measures performance in terms of regret against the best policy in hindsight on {\it any interval in time}, and thus captures the adaptation of the controller to changing dynamics. Our main contribution is a novel efficient meta-algorithm: it converts a controller with sublinear regret bounds into one with sublinear {\it adaptive regret} bounds in the setting of time-varying linear dynamical systems. The main technical innovation is the first adaptive regret bound for the more general framework of online convex optimization with memory. Furthermore, we give a lower bound showing that our attained adaptive regret bound is nearly tight for this general framework. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 186,334 |
cs/0006040 | Correlation over Decomposed Signals: A Non-Linear Approach to Fast and Effective Sequences Comparison | A novel non-linear approach to fast and effective comparison of sequences is presented, compared to the traditional cross-correlation operator, and illustrated with respect to DNA sequences. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 537,142 |
2008.06447 | Self-adapting confidence estimation for stereo | Estimating the confidence of disparity maps inferred by a stereo algorithm has become a very relevant task in recent years, due to the increasing number of applications leveraging such a cue. Although self-supervised learning has recently spread across many computer vision tasks, it has been barely considered in the field of confidence estimation. In this paper, we propose a flexible and lightweight solution enabling self-adapting confidence estimation agnostic to the stereo algorithm or network. Our approach relies on the minimum information available in any stereo setup (i.e., the input stereo pair and the output disparity map) to learn an effective confidence measure. This strategy allows not only seamless integration with any stereo system, including consumer and industrial devices equipped with undisclosed stereo perception methods, but also, thanks to its self-adapting capability, out-of-the-box deployment in the field. Exhaustive experimental results with different standard datasets support our claims, showing how our solution is the first-ever enabling online learning of accurate confidence estimation for any stereo system and without any requirement for the end-user. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 191,806 |
2402.04203 | Human-Like Geometric Abstraction in Large Pre-trained Neural Networks | Humans possess a remarkable capacity to recognize and manipulate abstract structure, which is especially apparent in the domain of geometry. Recent research in cognitive science suggests neural networks do not share this capacity, concluding that human geometric abilities come from discrete symbolic structure in human mental representations. However, progress in artificial intelligence (AI) suggests that neural networks begin to demonstrate more human-like reasoning after scaling up standard architectures in both model size and amount of training data. In this study, we revisit empirical results in cognitive science on geometric visual processing and identify three key biases in geometric visual processing: a sensitivity towards complexity, regularity, and the perception of parts and relations. We test tasks from the literature that probe these biases in humans and find that large pre-trained neural network models used in AI demonstrate more human-like abstract geometric processing. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 427,359 |
1810.02653 | FingerVision Tactile Sensor Design and Slip Detection Using Convolutional LSTM Network | Tactile sensing is essential to the human perception system, as it is to robots. In this paper, we develop a novel optical-based tactile sensor "FingerVision" with effective signal processing algorithms. This sensor is composed of a soft skin with an embedded marker array bonded to a rigid frame, and a web camera with a fisheye lens. While excited with contact force, the camera tracks the movements of the markers and a deformation field is obtained. Compared to existing tactile sensors, our sensor features a compact footprint, high resolution, and ease of fabrication. Besides, utilizing the deformation field estimation, we propose a slip classification framework based on convolutional Long Short-Term Memory (convolutional LSTM) networks. The data collection process takes advantage of the human sense of slip, during which a human hand holds 12 daily objects, interacts with the sensor skin, and labels data with a slip or non-slip identity based on the human feeling of slip. Our slip classification framework achieves a high accuracy of 97.62% on the test dataset. It is expected to be capable of significantly enhancing the stability of robot grasping, leading to better contact force control, finer object interaction, and more active sensing manipulation. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 109,634 |
2412.09920 | Precision-Enhanced Human-Object Contact Detection via Depth-Aware Perspective Interaction and Object Texture Restoration | Human-object contact (HOT) detection is designed to accurately identify the areas where humans and objects come into contact. Current methods frequently fail to account for scenarios where objects block the view, resulting in inaccurate identification of contact areas. To tackle this problem, we propose a perspective-interaction HOT detector called PIHOT, which utilizes a depth map generation model to offer depth information of humans and objects relative to the camera, thereby preventing false interaction detection. Furthermore, we use mask dilation and object restoration techniques to restore the texture details in covered areas, improve the boundaries between objects, and enhance the perception of humans interacting with objects. Moreover, spatial awareness perception is intended to concentrate on the characteristic features close to the points of contact. The experimental results show that the PIHOT algorithm achieves state-of-the-art performance on three benchmark datasets for HOT detection tasks. Compared to the most recent DHOT, our method enjoys average improvements of 13%, 27.5%, 16%, and 18.5% on the SC-Acc., C-Acc., mIoU, and wIoU metrics, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 516,708 |
2401.16424 | Computer Vision for Primate Behavior Analysis in the Wild | Advances in computer vision as well as increasingly widespread video-based behavioral monitoring have great potential for transforming how we study animal cognition and behavior. However, there is still a fairly large gap between the exciting prospects and what can actually be achieved in practice today, especially in videos from the wild. With this perspective paper, we want to contribute towards closing this gap, by guiding behavioral scientists in what can be expected from current methods and steering computer vision researchers towards problems that are relevant to advance research in animal behavior. We start with a survey of the state-of-the-art methods for computer vision problems that are directly relevant to the video-based study of animal behavior, including object detection, multi-individual tracking, individual identification, and (inter)action recognition. We then review methods for effort-efficient learning, which is one of the biggest challenges from a practical perspective. Finally, we close with an outlook into the future of the emerging field of computer vision for animal behavior, where we argue that the field should develop approaches to unify detection, tracking, identification and (inter)action recognition in a single, video-based framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 424,820 |
2306.04111 | Quasi-Newton Updating for Large-Scale Distributed Learning | Distributed computing is critically important for modern statistical analysis. Herein, we develop a distributed quasi-Newton (DQN) framework with excellent statistical, computation, and communication efficiency. In the DQN method, no Hessian matrix inversion or communication is needed. This considerably reduces the computation and communication complexity of the proposed method. Notably, related existing methods only analyze numerical convergence and require a diverging number of iterations to converge. However, we investigate the statistical properties of the DQN method and theoretically demonstrate that the resulting estimator is statistically efficient over a small number of iterations under mild conditions. Extensive numerical analyses demonstrate the finite sample performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 371,607 |
1911.09509 | Deep Representations for Cross-spectral Ocular Biometrics | One of the major challenges in ocular biometrics is the cross-spectral scenario, i.e., how to match images acquired in different wavelengths (typically visible (VIS) against near-infrared (NIR)). This article designs and extensively evaluates cross-spectral ocular verification methods, for both the closed and open-world settings, using well known deep learning representations based on the iris and periocular regions. Using as inputs the bounding boxes of non-normalized iris/periocular regions, we fine-tune Convolutional Neural Network(CNN) models (based either on VGG16 or ResNet-50 architectures), originally trained for face recognition. Based on the experiments carried out in two publicly available cross-spectral ocular databases, we report results for intra-spectral and cross-spectral scenarios, with the best performance being observed when fusing ResNet-50 deep representations from both the periocular and iris regions. When compared to the state-of-the-art, we observed that the proposed solution consistently reduces the Equal Error Rate(EER) values by 90% / 93% / 96% and 61% / 77% / 83% on the cross-spectral scenario and in the PolyU Bi-spectral and Cross-eye-cross-spectral datasets. Lastly, we evaluate the effect that the "deepness" factor of feature representations has in recognition effectiveness, and - based on a subjective analysis of the most problematic pairwise comparisons - we point out further directions for this field of research. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 154,541 |
2502.13550 | STaR-SQL: Self-Taught Reasoner for Text-to-SQL | Generating step-by-step "chain-of-thought" rationales has proven effective for improving the performance of large language models on complex reasoning tasks. However, applying such techniques to structured tasks, such as text-to-SQL, remains largely unexplored. In this paper, we introduce Self-Taught Reasoner for text-to-SQL (STaR-SQL), a novel approach that reframes SQL query generation as a reasoning-driven process. Our method prompts the LLM to produce detailed reasoning steps for SQL queries and fine-tunes it on rationales that lead to correct outcomes. Unlike traditional methods, STaR-SQL dedicates additional test-time computation to reasoning, thereby positioning LLMs as spontaneous reasoners rather than mere prompt-based agents. To further scale the inference process, we incorporate an outcome-supervised reward model (ORM) as a verifier, which enhances SQL query accuracy. Experimental results on the challenging Spider benchmark demonstrate that STaR-SQL significantly improves text-to-SQL performance, achieving an execution accuracy of 86.6%. This surpasses a few-shot baseline by 31.6% and a baseline fine-tuned to predict answers directly by 18.0%. Additionally, STaR-SQL outperforms agent-like prompting methods that leverage more powerful yet closed-source models such as GPT-4. These findings underscore the potential of reasoning-augmented training for structured tasks and open the door to extending self-improving reasoning models to text-to-SQL generation and beyond. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 535,416 |
2401.08003 | Jewelry Recognition via Encoder-Decoder Models | Jewelry recognition is a complex task due to the different styles and designs of accessories. Precise descriptions of the various accessories are something that today can only be provided by experts in the field of jewelry. In this work, we propose an approach for jewelry recognition using computer vision techniques and image captioning, trying to simulate this expert human behavior of analyzing accessories. The proposed methodology consists of using different image captioning models to detect the jewels in an image and generate a natural language description of the accessory. Then, this description is also utilized to classify the accessories at different levels of detail. The generated caption includes details such as the type of jewel, color, material, and design. To demonstrate the effectiveness of the proposed method in accurately recognizing different types of jewels, a dataset consisting of images of accessories belonging to jewelry stores in C\'ordoba (Spain) has been created. After testing the different image captioning architectures designed, the final model achieves a captioning accuracy of 95\%. The proposed methodology has the potential to be used in various applications such as jewelry e-commerce, inventory management, or automatic jewel recognition to analyze people's tastes and social status. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 421,726 |
2306.15324 | Anomaly Detection in Networks via Score-Based Generative Models | Node outlier detection in attributed graphs is a challenging problem for which there is no method that would work well across different datasets. Motivated by the state-of-the-art results of score-based models in graph generative modeling, we propose to incorporate them into the aforementioned problem. Our method achieves competitive results on small-scale graphs. We provide an empirical analysis of the Dirichlet energy, and show that generative models might struggle to accurately reconstruct it. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 375,983 |
2309.13174 | Robust self-propulsion in sand using simply controlled vibrating cubes | Much of the Earth and many surfaces of extraterrestrial bodies are composed of non-cohesive particulate matter. Locomoting on granular terrain is challenging for common robotic devices, either wheeled or legged. In this work, we discover a robust alternative locomotion mechanism on granular media -- generating movement via self-vibration. To demonstrate the effectiveness of this locomotion mechanism, we develop a cube-shaped robot with an embedded vibratory motor and conduct systematic experiments on diverse granular terrains of various particle properties. We investigate how locomotion changes as a function of vibration frequency/intensity on granular terrains. Compared to hard surfaces, we find such a vibratory locomotion mechanism enables the robot to move faster and more stably on granular surfaces, facilitated by the interaction between the body and surrounding granules. The simplicity in structural design and controls of this robotic system indicates that vibratory locomotion can be a valuable alternative way to produce robust locomotion on granular terrains. We further demonstrate that such cube-shaped robots can be used as modular units for morphologically structured vibratory robots with capabilities of maneuverable forward and turning motions, showing potential practical scenarios for robotic systems. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 394,086 |
2403.10662 | SwinMTL: A Shared Architecture for Simultaneous Depth Estimation and Semantic Segmentation from Monocular Camera Images | This research paper presents an innovative multi-task learning framework that allows concurrent depth estimation and semantic segmentation using a single camera. The proposed approach is based on a shared encoder-decoder architecture, which integrates various techniques to improve the accuracy of the depth estimation and semantic segmentation tasks without compromising computational efficiency. Additionally, the paper incorporates an adversarial training component, employing a Wasserstein GAN framework with a critic network, to refine the model's predictions. The framework is thoroughly evaluated on two datasets - the outdoor Cityscapes dataset and the indoor NYU Depth V2 dataset - and it outperforms existing state-of-the-art methods in both segmentation and depth estimation tasks. We also conducted ablation studies to analyze the contributions of different components, including pre-training strategies, the inclusion of critics, the use of logarithmic depth scaling, and advanced image augmentations, to provide a better understanding of the proposed framework. The accompanying source code is accessible at \url{https://github.com/PardisTaghavi/SwinMTL}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,291 |
2010.00505 | An Ultra Lightweight CNN for Low Resource Circuit Component Recognition | In this paper, we present an ultra lightweight system that can effectively recognize different circuit components in an image with very limited training data. Along with the system, we also release the data set we created for the task. A two-stage approach is employed by our system. Selective search was applied to find the location of each circuit component. Based on its result, we crop the original image into smaller pieces. The pieces are then fed to the Convolutional Neural Network (CNN) for classification to identify each circuit component. It is of engineering significance and works well in circuit component recognition in a low resource setting. The accuracy of our system reaches 93.4%, outperforming the support vector machine (SVM) baseline (75.00%) and the existing state-of-the-art RetinaNet solutions (92.80%). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 198,307 |
2109.05182 | Speaker-Oriented Latent Structures for Dialogue-Based Relation
Extraction | Dialogue-based relation extraction (DiaRE) aims to detect the structural information from unstructured utterances in dialogues. Existing relation extraction models may be unsatisfactory under such a conversational setting, due to the entangled logic and information sparsity issues in utterances involving multiple speakers. To this end, we introduce SOLS, a novel model which can explicitly induce speaker-oriented latent structures for better DiaRE. Specifically, we learn latent structures to capture the relationships among tokens beyond the utterance boundaries, alleviating the entangled logic issue. During the learning process, our speaker-specific regularization method progressively highlights speaker-related key clues and erases the irrelevant ones, alleviating the information sparsity issue. Experiments on three public datasets demonstrate the effectiveness of our proposed approach. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,691 |
2303.10022 | Hierarchical-Hyperplane Kernels for Actively Learning Gaussian Process
Models of Nonstationary Systems | Learning precise surrogate models of complex computer simulations and physical machines often require long-lasting or expensive experiments. Furthermore, the modeled physical dependencies exhibit nonlinear and nonstationary behavior. Machine learning methods that are used to produce the surrogate model should therefore address these problems by providing a scheme to keep the number of queries small, e.g. by using active learning and be able to capture the nonlinear and nonstationary properties of the system. One way of modeling the nonstationarity is to induce input-partitioning, a principle that has proven to be advantageous in active learning for Gaussian processes. However, these methods either assume a known partitioning, need to introduce complex sampling schemes or rely on very simple geometries. In this work, we present a simple, yet powerful kernel family that incorporates a partitioning that: i) is learnable via gradient-based methods, ii) uses a geometry that is more flexible than previous ones, while still being applicable in the low data regime. Thus, it provides a good prior for active learning procedures. We empirically demonstrate excellent performance on various active learning tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 352,278 |
2303.08566 | Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning | Visual Parameter-Efficient Fine-Tuning (PEFT) has become a powerful alternative to full fine-tuning for adapting pre-trained vision models to downstream tasks, which only tunes a small number of parameters while freezing the vast majority of them to ease storage burden and optimization difficulty. However, existing PEFT methods introduce trainable parameters to the same positions across different tasks, relying solely on human heuristics and neglecting the domain gaps. To this end, we study where to introduce and how to allocate trainable parameters by proposing a novel Sensitivity-aware visual Parameter-efficient fine-Tuning (SPT) scheme, which adaptively allocates trainable parameters to task-specific important positions given a desired tunable parameter budget. Specifically, our SPT first quickly identifies the sensitive parameters that require tuning for a given task in a data-dependent way. Next, our SPT further boosts the representational capability for the weight matrices whose number of sensitive parameters exceeds a pre-defined threshold by utilizing existing structured tuning methods, e.g., LoRA [23] or Adapter [22], to replace directly tuning the selected sensitive parameters (unstructured tuning) under the budget. Extensive experiments on a wide range of downstream recognition tasks show that our SPT is complementary to the existing PEFT methods and largely boosts their performance, e.g., SPT improves Adapter with supervised pre-trained ViT-B/16 backbone by 4.2% and 1.4% mean Top-1 accuracy, reaching SOTA performance on FGVC and VTAB-1k benchmarks, respectively. Source code is at https://github.com/ziplab/SPT | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 351,692 |
1212.3669 | A metric for software vulnerabilities classification | Vulnerability discovery and exploits detection are two wide areas of study in software engineering. This preliminary work tries to combine existing methods with machine learning techniques to define a metric classification of vulnerable computer programs. First a feature set has been defined and later two models have been tested against real world vulnerabilities. A relation between the classifier choice and the features has also been outlined. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 20,415 |
1804.02531 | On the ratio of prefix codes to all uniquely decodable codes with a
given length distribution | We investigate the ratio $\rho_{n,L}$ of prefix codes to all uniquely decodable codes over an $n$-letter alphabet and with length distribution $L$. For any integers $n\geq 2$ and $m\geq 1$, we construct a lower bound and an upper bound for $\inf_L\rho_{n,L}$, the infimum taken over all sequences $L$ of length $m$ for which the set of uniquely decodable codes with length distribution $L$ is non-empty. As a result, we obtain that this infimum is always greater than zero. Moreover, for every $m\geq 1$ it tends to 1 when $n\to\infty$, and for every $n\geq 2$ it tends to 0 when $m\to\infty$. In the case $m=2$, we also obtain the exact value for this infimum. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 94,424 |
2212.14546 | HiTeA: Hierarchical Temporal-Aware Video-Language Pre-training | Video-language pre-training has advanced the performance of various downstream video-language tasks. However, most previous methods directly inherit or adapt typical image-language pre-training paradigms to video-language pre-training, thus not fully exploiting the unique characteristic of video, i.e., temporal. In this paper, we propose a Hierarchical Temporal-Aware video-language pre-training framework, HiTeA, with two novel pre-training tasks for modeling cross-modal alignment between moments and texts as well as the temporal relations of video-text pairs. Specifically, we propose a cross-modal moment exploration task to explore moments in videos, which results in detailed video moment representation. Besides, the inherent temporal relations are captured by aligning video-text pairs as a whole in different time resolutions with multi-modal temporal relation exploration task. Furthermore, we introduce the shuffling test to evaluate the temporal reliance of datasets and video-language pre-training models. We achieve state-of-the-art results on 15 well-established video-language understanding and generation tasks, especially on temporal-oriented datasets (e.g., SSv2-Template and SSv2-Label) with 8.6% and 11.1% improvement respectively. HiTeA also demonstrates strong generalization ability when directly transferred to downstream tasks in a zero-shot manner. Models and demo will be available on ModelScope. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | 338,650 |
2010.06835 | A Wrong Answer or a Wrong Question? An Intricate Relationship between
Question Reformulation and Answer Selection in Conversational Question
Answering | The dependency between an adequate question formulation and correct answer selection is a very intriguing but still underexplored area. In this paper, we show that question rewriting (QR) of the conversational context allows to shed more light on this phenomenon and also use it to evaluate robustness of different answer selection approaches. We introduce a simple framework that enables an automated analysis of the conversational question answering (QA) performance using question rewrites, and present the results of this analysis on the TREC CAsT and QuAC (CANARD) datasets. Our experiments uncover sensitivity to question formulation of the popular state-of-the-art models for reading comprehension and passage ranking. Our results demonstrate that the reading comprehension model is insensitive to question formulation, while the passage ranking changes dramatically with a little variation in the input question. The benefit of QR is that it allows us to pinpoint and group such cases automatically. We show how to use this methodology to verify whether QA models are really learning the task or just finding shortcuts in the dataset, and better understand the frequent types of error they make. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 200,614 |
2008.01796 | A Groebner-bases approach to syndrome-based fast Chase decoding of
Reed--Solomon codes | We present a simple syndrome-based fast Chase decoding algorithm for Reed--Solomon (RS) codes. Such an algorithm was initially presented by Wu (IEEE Trans. IT, Jan. 2012), building on properties of the Berlekamp--Massey (BM) algorithm. Wu devised a fast polynomial-update algorithm to construct the error-locator polynomial (ELP) as the solution of a certain linear-feedback shift register (LFSR) synthesis problem. This results in a conceptually complicated algorithm, divided into $8$ subtly different cases. Moreover, Wu's polynomial-update algorithm is not immediately suitable for working with vectors of evaluations. Therefore, complicated modifications were required in order to achieve a true "one-pass" Chase decoding algorithm, that is, a Chase decoding algorithm requiring $O(n)$ operations per modified coordinate, where $n$ is the RS code length. The main result of the current paper is a conceptually simple syndrome-based fast Chase decoding of RS codes. Instead of developing a theory from scratch, we use the well-established theory of Groebner bases for modules over $\mathbb{F}_q[X]$ (where $\mathbb{F}_q$ is the finite field of $q$ elements, for $q$ a prime power). The basic observation is that instead of Wu's LFSR synthesis problem, it is much simpler to consider "the right" minimization problem over a module. The solution to this minimization problem is a simple polynomial-update algorithm that avoids syndrome updates and works seamlessly with vectors of evaluations. As a result, we obtain a conceptually simple algorithm for one-pass Chase decoding of RS codes. Our algorithm is general enough to work with any algorithm that finds a Groebner basis for the solution module of the key equation as the initial algorithm (including the Euclidean algorithm), and it is not tied only to the BM algorithm. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 190,432 |
2210.10592 | DyTed: Disentangled Representation Learning for Discrete-time Dynamic
Graph | Unsupervised representation learning for dynamic graphs has attracted a lot of research attention in recent years. Compared with static graphs, dynamic graphs are a comprehensive embodiment of both the intrinsic stable characteristics of nodes and the time-related dynamic preference. However, existing methods generally mix these two types of information into a single representation space, which may lead to poor explanation, less robustness, and a limited ability when applied to different downstream tasks. To solve the above problems, in this paper, we propose a novel disenTangled representation learning framework for discrete-time Dynamic graphs, namely DyTed. We specially design a temporal-clips contrastive learning task together with a structure contrastive learning to effectively identify the time-invariant and time-varying representations respectively. To further enhance the disentanglement of these two types of representation, we propose a disentanglement-aware discriminator under an adversarial learning framework from the perspective of information theory. Extensive experiments on Tencent and five commonly used public datasets demonstrate that DyTed, as a general framework that can be applied to existing methods, achieves state-of-the-art performance on various downstream tasks, as well as being more robust against noise. | false | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 324,978 |
2312.00210 | DREAM: Diffusion Rectification and Estimation-Adaptive Models | We present DREAM, a novel training framework representing Diffusion Rectification and Estimation Adaptive Models, requiring minimal code changes (just three lines) yet significantly enhancing the alignment of training with sampling in diffusion models. DREAM features two components: diffusion rectification, which adjusts training to reflect the sampling process, and estimation adaptation, which balances perception against distortion. When applied to image super-resolution (SR), DREAM adeptly navigates the tradeoff between minimizing distortion and preserving high image quality. Experiments demonstrate DREAM's superiority over standard diffusion-based SR methods, showing a $2$ to $3\times$ faster training convergence and a $10$ to $20\times$ reduction in sampling steps to achieve comparable results. We hope DREAM will inspire a rethinking of diffusion model training paradigms. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,959 |
1801.00940 | Secure communication over fully quantum Gel'fand-Pinsker wiretap channel | In this work we study the problem of secure communication over a fully quantum Gel'fand-Pinsker channel. The best known achievability rate for this channel model in the classical case was proven by Goldfeld, Cuff and Permuter in [Goldfeld, Cuff, Permuter, 2016]. We generalize the result of [Goldfeld, Cuff, Permuter, 2016]. One key feature of the results obtained in this work is that all the bounds obtained are in terms of error exponent. We obtain our achievability result via the technique of simultaneous pinching. This in turn allows us to show the existence of a simultaneous decoder. Further, to obtain our encoding technique and to prove the security feature of our coding scheme we prove a bivariate classical-quantum channel resolvability lemma and a conditional classical-quantum channel resolvability lemma. As a by product of the achievability result obtained in this work, we also obtain an achievable rate for a fully quantum Gel'fand-Pinsker channel in the absence of Eve. The form of this achievable rate matches with its classical counterpart. The Gel'fand-Pinsker channel model had earlier only been studied for the classical-quantum case and in the case where Alice (the sender) and Bob (the receiver) have shared entanglement between them. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 87,654 |
2502.03396 | Accurate AI-Driven Emergency Vehicle Location Tracking in Healthcare ITS
Digital Twin | Creating a Digital Twin (DT) for Healthcare Intelligent Transportation Systems (HITS) is a hot research trend focusing on enhancing HITS management, particularly in emergencies where ambulance vehicles must arrive at the crash scene on time, and tracking their real-time location is crucial to the medical authorities. Despite the claim of real-time representation, a temporal misalignment persists between the physical and virtual domains, leading to discrepancies in the ambulance's location representation. This study proposes integrating AI predictive models, specifically Support Vector Regression (SVR) and Deep Neural Networks (DNN), within a constructed mock DT data pipeline framework to anticipate the medical vehicle's next location in the virtual world. These models align virtual representations with their physical counterparts, i.e., metaphorically offsetting the synchronization delay between the two worlds. Trained meticulously on a historical geospatial dataset, SVR and DNN exhibit exceptional prediction accuracy in MATLAB and Python environments. Through various testing scenarios, we visually demonstrate the efficacy of our methodology, showcasing SVR and DNN's key role in significantly reducing the witnessed gap within the HITS's DT. This transformative approach enhances real-time synchronization in emergency HITS by approximately 88% to 93%. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 530,707 |
2007.15553 | Bilevel Continual Learning | Continual learning aims to learn continuously from a stream of tasks and data in an online-learning fashion, being capable of exploiting what was learned previously to improve current and future tasks while still being able to perform well on the previous tasks. One common limitation of many existing continual learning methods is that they often train a model directly on all available training data without validation due to the nature of continual learning, thus suffering poor generalization at test time. In this work, we present a novel framework of continual learning named "Bilevel Continual Learning" (BCL) by unifying a {\it bilevel optimization} objective and a {\it dual memory management} strategy comprising both episodic memory and generalization memory to achieve effective knowledge transfer to future tasks and alleviate catastrophic forgetting on old tasks simultaneously. Our extensive experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods. Our implementation is available at https://github.com/phquang/bilevel-continual-learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,696 |
2210.00201 | Integrating Conventional Headway Control with Reinforcement Learning to
Avoid Bus Bunching | Bus bunching is a natural-occurring phenomenon that undermines the efficiency and stability of the public transportation system. The mainstream solutions control the bus to intentionally stay longer at certain stations. Existing control methods include conventional methods that provide a formula to calculate the control time and reinforcement learning (RL) methods that determine the control policy through repeated interactions with the system. In this paper, we propose an integrated proximal policy optimization model with dual-headway (IPPO-DH). IPPO-DH integrates the conventional headway control with reinforcement learning, so that it acquires the advantages of both algorithms -- it is more efficient in normal environments and more stable in harsh ones. To demonstrate such an advantage, we design a bus simulation environment and compare IPPO-DH with RL and several conventional methods. The results show that the proposed model maintains the application value of the conventional method by avoiding the instability of the RL method in certain environments, and improves the efficiency compared with the conventional control, shedding new light on real-world bus transit system optimization. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 320,778 |
1610.06819 | Multiscale Abstraction, Planning and Control using Diffusion Wavelets
for Stochastic Optimal Control Problems | This work presents a multiscale framework to solve a class of stochastic optimal control problems in the context of robot motion planning and control in a complex environment. In order to handle complications resulting from a large decision space and complex environmental geometry, two key concepts are adopted: (a) a diffusion wavelet representation of the Markov chain for hierarchical abstraction of the state space; and (b) a desirability function-based representation of the Markov decision process (MDP) to efficiently calculate the optimal policy. In the proposed framework, a global plan that compressively takes into account the long time/length-scale state transition is first obtained by approximately solving an MDP whose desirability function is represented by coarse scale bases in the hierarchical abstraction. Then, a detailed local plan is computed by solving an MDP that considers wavelet bases associated with a focused region of the state space, guided by the global plan. The resulting multiscale plan is utilized to finally compute a continuous-time optimal control policy within a receding horizon implementation. Two numerical examples are presented to demonstrate the applicability and validity of the proposed approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 62,702 |
2310.18605 | TorchDEQ: A Library for Deep Equilibrium Models | Deep Equilibrium (DEQ) Models, an emerging class of implicit models that maps inputs to fixed points of neural networks, are of growing interest in the deep learning community. However, training and applying DEQ models is currently done in an ad-hoc fashion, with various techniques spread across the literature. In this work, we systematically revisit DEQs and present TorchDEQ, an out-of-the-box PyTorch-based library that allows users to define, train, and infer using DEQs over multiple domains with minimal code and best practices. Using TorchDEQ, we build a ``DEQ Zoo'' that supports six published implicit models across different domains. By developing a joint framework that incorporates the best practices across all models, we have substantially improved the performance, training stability, and efficiency of DEQs on ten datasets across all six projects in the DEQ Zoo. TorchDEQ and DEQ Zoo are released as \href{https://github.com/locuslab/torchdeq}{open source}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 403,617 |
2110.05877 | OpenHands: Making Sign Language Recognition Accessible with Pose-based
Pretrained Models across Languages | AI technologies for Natural Languages have made tremendous progress recently. However, commensurate progress has not been made on Sign Languages, in particular, in recognizing signs as individual words or as complete sentences. We introduce OpenHands, a library where we take four key ideas from the NLP community for low-resource languages and apply them to sign languages for word-level recognition. First, we propose using pose extracted through pretrained models as the standard modality of data to reduce training time and enable efficient inference, and we release standardized pose datasets for 6 different sign languages - American, Argentinian, Chinese, Greek, Indian, and Turkish. Second, we train and release checkpoints of 4 pose-based isolated sign language recognition models across all 6 languages, providing baselines and ready checkpoints for deployment. Third, to address the lack of labelled data, we propose self-supervised pretraining on unlabelled data. We curate and release the largest pose-based pretraining dataset on Indian Sign Language (Indian-SL). Fourth, we compare different pretraining strategies and for the first time establish that pretraining is effective for sign language recognition by demonstrating (a) improved fine-tuning performance especially in low-resource settings, and (b) high crosslingual transfer from Indian-SL to few other sign languages. We open-source all models and datasets in OpenHands with a hope that it makes research in sign languages more accessible, available here at https://github.com/AI4Bharat/OpenHands . | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 260,437 |
2307.03567 | SpawnNet: Learning Generalizable Visuomotor Skills from Pre-trained
Networks | The existing internet-scale image and video datasets cover a wide range of everyday objects and tasks, bringing the potential of learning policies that generalize in diverse scenarios. Prior works have explored visual pre-training with different self-supervised objectives. Still, the generalization capabilities of the learned policies and the advantages over well-tuned baselines remain unclear from prior studies. In this work, we present a focused study of the generalization capabilities of the pre-trained visual representations at the categorical level. We identify the key bottleneck in using a frozen pre-trained visual backbone for policy learning and then propose SpawnNet, a novel two-stream architecture that learns to fuse pre-trained multi-layer representations into a separate network to learn a robust policy. Through extensive simulated and real experiments, we show significantly better categorical generalization compared to prior approaches in imitation learning settings. Open-sourced code and videos can be found on our website: https://xingyu-lin.github.io/spawnnet. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 378,081 |
2009.03586 | Hyperparameter Optimization via Sequential Uniform Designs | Hyperparameter optimization (HPO) plays a central role in the automated machine learning (AutoML). It is a challenging task as the response surfaces of hyperparameters are generally unknown, hence essentially a global optimization problem. This paper reformulates HPO as a computer experiment and proposes a novel sequential uniform design (SeqUD) strategy with three-fold advantages: a) the hyperparameter space is adaptively explored with evenly spread design points, without the need of expensive meta-modeling and acquisition optimization; b) the batch-by-batch design points are sequentially generated with parallel processing support; c) a new augmented uniform design algorithm is developed for the efficient real-time generation of follow-up design points. Extensive experiments are conducted on both global optimization tasks and HPO applications. The numerical results show that the proposed SeqUD strategy outperforms benchmark HPO methods, and it can be therefore a promising and competitive alternative to existing AutoML tools. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 194,841 |
1804.00140 | Generative Adversarial Networks (GANs): What it can generate and What it
cannot? | In recent years, Generative Adversarial Networks (GANs) have received significant attention from the research community. With a straightforward implementation and outstanding results, GANs have been used for numerous applications. Despite the success, GANs lack a proper theoretical explanation. These models suffer from issues like mode collapse, non-convergence, and instability during training. To address these issues, researchers have proposed theoretically rigorous frameworks inspired by varied fields of Game theory, Statistical theory, Dynamical systems, etc. In this paper, we propose to give an appropriate structure to study these contributions systematically. We essentially categorize the papers based on the issues they raise and the kind of novelty they introduce to address them. Besides, we provide insight into how each of the discussed articles solves the concerned problems. We compare and contrast different results and put forth a summary of theoretical contributions about GANs with focus on image/visual applications. We expect this summary paper to give a bird's eye view to a person wishing to understand the theoretical progress in GANs so far. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 93,951 |
2410.16325 | This Candidate is [MASK]. Letters of Reference and Job Market Outcomes
using LLMs | I implement a prompt-based learning strategy to extract measures of sentiment and other features from confidential reference letters. I show that the contents of reference letters are clearly reflected in the performance of job market candidates in the Economics academic job market. In contrast, applying traditional ``bag-of-words'' approaches produces measures of sentiment that, while positively correlated to my LLM-based measure, are not predictive of job market outcomes. Using a random forest, I show that both letter quality and length are predictive of success in the job market. Letters authored by advisers appear to be as important as those written by other referees. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 500,982 |
2305.00691 | Joint tone mapping and denoising of thermal infrared images via
multi-scale Retinex and multi-task learning | Cameras digitize real-world scenes as pixel intensity values with a limited value range given by the available bits per pixel (bpp). High Dynamic Range (HDR) cameras capture those luminance values in higher resolution through an increase in the number of bpp. Most displays, however, are limited to 8 bpp. Naive HDR compression methods lead to a loss of the rich information contained in those HDR images. In this paper, tone mapping algorithms for thermal infrared images with 16 bpp are investigated that can preserve this information. An optimized multi-scale Retinex algorithm sets the baseline. This algorithm is then approximated with a deep learning approach based on the popular U-Net architecture. The remaining noise in the images after tone mapping is reduced implicitly by utilizing a self-supervised deep learning approach that can be jointly trained with the tone mapping approach in a multi-task learning scheme. Further discussions are provided on denoising and deflickering for thermal infrared video enhancement in the context of tone mapping. Extensive experiments on the public FLIR ADAS Dataset prove the effectiveness of our proposed method in comparison with the state-of-the-art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 361,428 |
2412.04571 | Dissociating Artificial Intelligence from Artificial Consciousness | Developments in machine learning and computing power suggest that artificial general intelligence is within reach. This raises the question of artificial consciousness: if a computer were to be functionally equivalent to a human, being able to do all we do, would it experience sights, sounds, and thoughts, as we do when we are conscious? Answering this question in a principled manner can only be done on the basis of a theory of consciousness that is grounded in phenomenology and that states the necessary and sufficient conditions for any system, evolved or engineered, to support subjective experience. Here we employ Integrated Information Theory (IIT), which provides principled tools to determine whether a system is conscious, to what degree, and the content of its experience. We consider pairs of systems constituted of simple Boolean units, one of which -- a basic stored-program computer -- simulates the other with full functional equivalence. By applying the principles of IIT, we demonstrate that (i) two systems can be functionally equivalent without being phenomenally equivalent, and (ii) that this conclusion is not dependent on the simulated system's function. We further demonstrate that, according to IIT, it is possible for a digital computer to simulate our behavior, possibly even by simulating the neurons in our brain, without replicating our experience. This contrasts sharply with computational functionalism, the thesis that performing computations of the right kind is necessary and sufficient for consciousness. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 514,467 |
2112.08507 | Algorithms for Adaptive Experiments that Trade-off Statistical Analysis
with Reward: Combining Uniform Random Assignment and Reward Maximization | Multi-armed bandit algorithms like Thompson Sampling (TS) can be used to conduct adaptive experiments, in which maximizing reward means that data is used to progressively assign participants to more effective arms. Such assignment strategies increase the risk of statistical hypothesis tests identifying a difference between arms when there is not one, and failing to conclude there is a difference in arms when there truly is one. We tackle this by introducing a novel heuristic algorithm, called TS-PostDiff (Posterior Probability of Difference). TS-PostDiff takes a Bayesian approach to mixing TS and Uniform Random (UR): the probability a participant is assigned using UR allocation is the posterior probability that the difference between two arms is 'small' (below a certain threshold), allowing for more UR exploration when there is little or no reward to be gained. We evaluate TS-PostDiff against state-of-the-art strategies. The empirical and simulation results help characterize the trade-offs of these approaches between reward, False Positive Rate (FPR), and statistical power, as well as under which circumstances each is effective. We quantify the advantage of TS-PostDiff in performing well across multiple differences in arm means (effect sizes), showing the benefits of adaptively changing randomization/exploration in TS in a "Statistically Considerate" manner: reducing FPR and increasing statistical power when differences are small or zero and there is less reward to be gained, while exploiting more when differences may be large. This highlights important considerations for future algorithm development and analysis to better balance reward and statistical analysis. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 271,813 |
1602.05531 | On the Use of Deep Learning for Blind Image Quality Assessment | In this work we investigate the use of deep learning for distortion-generic blind image quality assessment. We report on different design choices, ranging from the use of features extracted from pre-trained Convolutional Neural Networks (CNNs) as a generic image description, to the use of features extracted from a CNN fine-tuned for the image quality task. Our best proposal, named DeepBIQ, estimates the image quality by average pooling the scores predicted on multiple sub-regions of the original image. The score of each sub-region is computed using a Support Vector Regression (SVR) machine taking as input features extracted using a CNN fine-tuned for category-based image quality assessment. Experimental results on the LIVE In the Wild Image Quality Challenge Database and on the LIVE Image Quality Assessment Database show that DeepBIQ outperforms the state-of-the-art methods compared, having a Linear Correlation Coefficient (LCC) with human subjective scores of almost 0.91 and 0.98 respectively. Furthermore, in most of the cases, the quality score predictions of DeepBIQ are closer to the average observer than those of a generic human observer. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,267 |
1909.01818 | PISEP^2: Pseudo Image Sequence Evolution based 3D Pose Prediction | Pose prediction aims to predict future poses given a window of previous poses. In this paper, we propose a new problem that predicts poses using 3D joint coordinate sequences. Different from the traditional pose prediction based on Mocap frames, this problem is convenient to use in real applications due to its simple sensors to capture data. We also present a new framework, PISEP^2 (Pseudo Image Sequence Evolution based 3D Pose Prediction), to address this new problem. Specifically, a skeletal representation is proposed by transforming the joint coordinate sequence into an image sequence, which can model the different correlations of different joints. With this image based skeletal representation, we model the pose prediction as the evolution of image sequence. Moreover, a novel inference network is proposed to predict all future poses in one step by decoupling the decoders in a non-recursive manner. Compared with the recursive sequence to sequence model, we can improve the computational efficiency and avoid error accumulation significantly. Extensive experiments are carried out on two benchmark datasets (i.e., G3D and FNTU). The proposed method achieves the state-of-the-art performance on both datasets, which demonstrates the effectiveness of our proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 144,020 |
2403.17175 | Engagement Measurement Based on Facial Landmarks and Spatial-Temporal
Graph Convolutional Networks | Engagement in virtual learning is crucial for a variety of factors including student satisfaction, performance, and compliance with learning programs, but measuring it is a challenging task. There is therefore considerable interest in utilizing artificial intelligence and affective computing to measure engagement in natural settings as well as on a large scale. This paper introduces a novel, privacy-preserving method for engagement measurement from videos. It uses facial landmarks, which carry no personally identifiable information, extracted from videos via the MediaPipe deep learning solution. The extracted facial landmarks are fed to Spatial-Temporal Graph Convolutional Networks (ST-GCNs) to output the engagement level of the student in the video. To integrate the ordinal nature of the engagement variable into the training process, ST-GCNs undergo training in a novel ordinal learning framework based on transfer learning. Experimental results on two video student engagement measurement datasets show the superiority of the proposed method compared to previous methods with improved state-of-the-art on the EngageNet dataset with a 3.1% improvement in four-class engagement level classification accuracy and on the Online Student Engagement dataset with a 1.5% improvement in binary engagement classification accuracy. Gradient-weighted Class Activation Mapping (Grad-CAM) was applied to the developed ST-GCNs to interpret the engagement measurements obtained by the proposed method in both the spatial and temporal domains. The relatively lightweight and fast ST-GCN and its integration with the real-time MediaPipe make the proposed approach capable of being deployed on virtual learning platforms and measuring engagement in real-time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 441,341 |
2306.00835 | Reconstructing Sea Surface Temperature Images: A Masked Autoencoder
Approach for Cloud Masking and Reconstruction | This thesis presents a new algorithm to mitigate cloud masking in the analysis of sea surface temperature (SST) data generated by remote sensing technologies. Clouds interfere with the analysis of all remote sensing data using wavelengths shorter than 12 microns, significantly limiting the quantity of usable data and creating a biased geographical distribution (towards equatorial and coastal regions). To address this issue, we propose an unsupervised machine learning algorithm called Enki which uses a Vision Transformer with Masked Autoencoding to reconstruct masked pixels. We train four different models of Enki with varying mask ratios (t) of 10%, 35%, 50%, and 75% on the generated Ocean General Circulation Model (OGCM) dataset referred to as LLC4320. To evaluate performance, we reconstruct a validation set of LLC4320 SST images with random ``clouds'' corrupting p=10%, 20%, 30%, 40%, 50% of the images with individual patches of 4x4 pixel^2. We consistently find that at all levels of p there are one or more models that reconstruct the images with a mean RMSE of less than 0.03K, i.e. lower than the estimated sensor error of VIIRS data. Similarly, at the individual patch level, the reconstructions have RMSE 8x smaller than the fluctuations in the patch. And, as anticipated, reconstruction errors are larger for images with a higher degree of complexity. Our analysis also reveals that patches along the image border have systematically higher reconstruction error; we recommend ignoring these in production. We conclude that Enki shows great promise to surpass in-painting as a means of reconstructing cloud masking. Future research will develop Enki to reconstruct real-world data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 370,181 |
2306.02051 | A Comprehensive Survey on Relation Extraction: Recent Advances and New
Frontiers | Relation extraction (RE) involves identifying the relations between entities from underlying content. RE serves as the foundation for many natural language processing (NLP) and information retrieval applications, such as knowledge graph completion and question answering. In recent years, deep neural networks have dominated the field of RE and made noticeable progress. Subsequently, the large pre-trained language models have taken the state-of-the-art RE to a new level. This survey provides a comprehensive review of existing deep learning techniques for RE. First, we introduce RE resources, including datasets and evaluation metrics. Second, we propose a new taxonomy to categorize existing works from three perspectives, i.e., text representation, context encoding, and triplet prediction. Third, we discuss several important challenges faced by RE and summarize potential techniques to tackle these challenges. Finally, we outline some promising future directions and prospects in this field. This survey is expected to facilitate researchers' collaborative efforts to address the challenges of real-world RE systems. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 370,737 |
2111.10484 | Inter-Domain Fusion for Enhanced Intrusion Detection in Power Systems:
An Evidence Theoretic and Meta-Heuristic Approach | False alerts due to misconfigured/compromised IDS in ICS networks can lead to severe economic and operational damage. To solve this problem, research has focused on leveraging deep learning techniques that help reduce false alerts. However, a shortcoming is that these works often require or implicitly assume the physical and cyber sensors to be trustworthy. Implicit trust of data is a major problem with using artificial intelligence or machine learning for CPS security, because during critical attack detection time they are more at risk, with greater likelihood and impact, of also being compromised. To address this shortcoming, the problem is reframed as how to make good decisions given uncertainty. Then, the decision is detection, and the uncertainty includes whether the data used for ML-based IDS is compromised. Thus, this work presents an approach for reducing false alerts in CPS power systems by dealing with uncertainty without knowledge of the prior distribution of alerts. Specifically, an evidence-theoretic approach leveraging Dempster-Shafer combination rules is proposed for reducing false alerts. A multi-hypothesis mass function model is designed that leverages probability scores obtained from various supervised-learning classifiers. Using this model, a location-cum-domain based fusion framework is proposed and evaluated with different combination rules that fuse multiple pieces of evidence from inter-domain and intra-domain sensors. The approach is demonstrated in a cyber-physical power system testbed with Man-In-The-Middle attack emulation in a large-scale synthetic electric grid. For evaluating performance, metrics such as plausibility, belief, and pignistic probability are considered as decision functions. To improve performance, a multi-objective genetic algorithm is proposed for feature selection, considering the decision metrics as the fitness function. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 267,328 |
1909.04403 | TH\"OR: Human-Robot Navigation Data Collection and Accurate Motion
Trajectories Dataset | Understanding human behavior is key for robots and intelligent systems that share a space with people. Accordingly, research that enables such systems to perceive, track, learn and predict human behavior as well as to plan and interact with humans has received increasing attention over the last years. The availability of large human motion datasets that contain relevant levels of difficulty is fundamental to this research. Existing datasets are often limited in terms of information content, annotation quality or variability of human behavior. In this paper, we present TH\"OR, a new dataset with human motion trajectory and eye gaze data collected in an indoor environment with accurate ground truth for position, head orientation, gaze direction, social grouping, obstacles map and goal coordinates. TH\"OR also contains sensor data collected by a 3D lidar and involves a mobile robot navigating the space. We propose a set of metrics to quantitatively analyze motion trajectory datasets such as the average tracking duration, ground truth noise, curvature and speed variation of the trajectories. In comparison to prior art, our dataset has a larger variety in human motion behavior, is less noisy, and contains annotations at higher frequencies. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 144,795 |
2308.05818 | Absorption-Based, Passive Range Imaging from Hyperspectral Thermal
Measurements | Passive hyperspectral longwave infrared measurements are remarkably informative about the surroundings. Remote object material and temperature determine the spectrum of thermal radiance, and range, air temperature, and gas concentrations determine how this spectrum is modified by propagation to the sensor. We introduce a passive range imaging method based on computationally separating these phenomena. Previous methods assume hot and highly emitting objects; ranging is more challenging when objects' temperatures do not deviate greatly from air temperature. Our method jointly estimates range and intrinsic object properties, with explicit consideration of air emission, though reflected light is assumed negligible. Inversion being underdetermined is mitigated by using a parametric model of atmospheric absorption and regularizing for smooth emissivity estimates. To assess where our estimate is likely accurate, we introduce a technique to detect which scene pixels are significantly influenced by reflected downwelling. Monte Carlo simulations demonstrate the importance of regularization, temperature differentials, and availability of many spectral bands. We apply our method to longwave infrared (8--13 $\mu$m) hyperspectral image data acquired from natural scenes with no active illumination. Range features from 15m to 150m are recovered, with good qualitative match to lidar data for pixels classified as having negligible reflected downwelling. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 384,918 |
2306.04926 | covLLM: Large Language Models for COVID-19 Biomedical Literature | The COVID-19 pandemic led to 1.1 million deaths in the United States, despite the explosion of coronavirus research. These new findings are slow to translate to clinical interventions, leading to poorer patient outcomes and unnecessary deaths. One reason is that clinicians, overwhelmed by patients, struggle to keep pace with the rate of new coronavirus literature. A potential solution is developing a tool for evaluating coronavirus literature using large language models (LLMs) -- neural networks that are deployed for natural language processing. LLMs can be used to summarize and extract user-specified information. The greater availability and advancement of LLMs and pre-processed coronavirus literature databases provide the opportunity to assist clinicians in evaluating coronavirus literature through a coronavirus literature specific LLM (covLLM), a tool that directly takes an inputted research article and a user query to return an answer. Using the COVID-19 Open Research Dataset (CORD-19), we produced two datasets: (1) synCovid, which uses a combination of handwritten prompts and synthetic prompts generated using OpenAI, and (2) real abstracts, which contains abstract and title pairs. covLLM was trained with LLaMA 7B as a baseline model to produce three models trained on (1) the Alpaca and synCovid datasets, (2) the synCovid dataset, and (3) the synCovid and real abstract datasets. These models were evaluated by two human evaluators and ChatGPT. Results demonstrate that training covLLM on the synCovid and abstract pairs datasets performs competitively with ChatGPT and outperforms covLLM trained primarily using the Alpaca dataset. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 371,967 |
2311.06928 | Attention for Causal Relationship Discovery from Biological Neural
Dynamics | This paper explores the potential of the transformer models for learning Granger causality in networks with complex nonlinear dynamics at every node, as in neurobiological and biophysical networks. Our study primarily focuses on a proof-of-concept investigation based on simulated neural dynamics, for which the ground-truth causality is known through the underlying connectivity matrix. For transformer models trained to forecast neuronal population dynamics, we show that the cross attention module effectively captures the causal relationship among neurons, with an accuracy equal or superior to that for the most popular Granger causality analysis method. While we acknowledge that real-world neurobiology data will bring further challenges, including dynamic connectivity and unobserved variability, this research offers an encouraging preliminary glimpse into the utility of the transformer model for causal representation learning in neuroscience. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 407,133 |
1811.08778 | Reconstruction of jointly sparse vectors via manifold optimization | In this paper, we consider the challenge of reconstructing jointly sparse vectors from linear measurements. Firstly, we show that by utilizing the rank of the output data matrix we can reduce the problem to a full column rank case. This result reveals a reduction in the computational complexity of the original problem and enables a simple implementation of joint sparse recovery algorithms for full-rank setting. Secondly, we propose a new method for joint sparse recovery in the form of a non-convex optimization problem on a non-compact Stiefel manifold. In our numerical experiments our method outperforms the commonly used $\ell_{2,1}$ minimization in the sense that much fewer measurements are required for accurate sparse reconstructions. We postulate this approach possesses the desirable rank aware property, that is, being able to take advantage of the rank of the unknown matrix to improve the recovery. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 114,125 |
2408.09348 | Hyperstroke: A Novel High-quality Stroke Representation for Assistive
Artistic Drawing | Assistive drawing aims to facilitate the creative process by providing intelligent guidance to artists. Existing solutions often fail to effectively model intricate stroke details or adequately address the temporal aspects of drawing. We introduce hyperstroke, a novel stroke representation designed to capture precise fine stroke details, including RGB appearance and alpha-channel opacity. Using a Vector Quantization approach, hyperstroke learns compact tokenized representations of strokes from real-life videos of artistic drawing. With hyperstroke, we propose to model assistive drawing via a transformer-based architecture, to enable intuitive and user-friendly drawing applications, which we examine in an exploratory evaluation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,392 |
2202.12707 | Benchmarking Generative Latent Variable Models for Speech | Stochastic latent variable models (LVMs) achieve state-of-the-art performance on natural image generation but are still inferior to deterministic models on speech. In this paper, we develop a speech benchmark of popular temporal LVMs and compare them against state-of-the-art deterministic models. We report the likelihood, which is a much used metric in the image domain, but rarely, or incomparably, reported for speech models. To assess the quality of the learned representations, we also compare their usefulness for phoneme recognition. Finally, we adapt the Clockwork VAE, a state-of-the-art temporal LVM for video generation, to the speech domain. Despite being autoregressive only in latent space, we find that the Clockwork VAE can outperform previous LVMs and reduce the gap to deterministic models by using a hierarchy of latent variables. | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 282,338 |
2403.16516 | Visually Guided Generative Text-Layout Pre-training for Document
Intelligence | Prior study shows that pre-training techniques can boost the performance of visual document understanding (VDU), which typically requires models to gain abilities to perceive and reason both document texts and layouts (e.g., locations of texts and table-cells). To this end, we propose visually guided generative text-layout pre-training, named ViTLP. Given a document image, the model optimizes hierarchical language and layout modeling objectives to generate the interleaved text and layout sequence. In addition, to address the limitation of processing long documents by Transformers, we introduce a straightforward yet effective multi-segment generative pre-training scheme, facilitating ViTLP to process word-intensive documents of any length. ViTLP can function as a native OCR model to localize and recognize texts of document images. Besides, ViTLP can be effectively applied to various downstream VDU tasks. Extensive experiments show that ViTLP achieves competitive performance over existing baselines on benchmark VDU tasks, including information extraction, document classification, and document question answering. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 441,083 |
2101.06749 | A Layer-Wise Information Reinforcement Approach to Improve Learning in
Deep Belief Networks | With the advent of deep learning, the number of works proposing new methods or improving existing ones has grown exponentially in recent years. In this scenario, "very deep" models were emerging, since they were expected to extract more intrinsic and abstract features while supporting better performance. However, such models suffer from the gradient vanishing problem, i.e., backpropagation values become too close to zero in their shallower layers, ultimately causing learning to stagnate. Such an issue was overcome in the context of convolutional neural networks by creating "shortcut connections" between layers, in a so-called deep residual learning framework. Nonetheless, a very popular deep learning technique called Deep Belief Network still suffers from gradient vanishing when dealing with discriminative tasks. Therefore, this paper proposes the Residual Deep Belief Network, which considers layer-by-layer information reinforcement to improve feature extraction and knowledge retention, supporting better discriminative performance. Experiments conducted over three public datasets demonstrate its robustness concerning the task of binary image classification. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 215,819 |
2112.04373 | Stochastic Bounded Confidence Opinion Dynamics: How Far Apart Do
Opinions Drift? | In this era of fast and large-scale opinion formation, a mathematical understanding of opinion evolution, a.k.a. opinion dynamics, is especially important. Linear graph-based dynamics and bounded confidence dynamics are the two most popular models for opinion dynamics in social networks. Recently, stochastic bounded confidence opinion dynamics were proposed as a general framework that incorporates both these dynamics as special cases and also captures the inherent stochasticity and noise (errors) in real-life social exchanges. Although these dynamics are quite general and realistic, their analysis is particularly challenging compared to other opinion dynamics models. This is because these dynamics are nonlinear and stochastic, and belong to the class of Markov processes that have asymptotically zero drift and unbounded jumps. The asymptotic behavior of these dynamics was characterized in prior works. However, they do not shed light on their finite-time behavior, which is often of interest in practice. We take a stride in this direction by analyzing the finite time behavior of a two-agent system, which is fundamental to the understanding of multi-agent dynamics. In particular, we show that the opinion difference between the two agents is well concentrated around zero under the conditions that lead to asymptotic stability of the dynamics. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 270,506 |
2011.11789 | Object-centered image stitching | Image stitching is typically decomposed into three phases: registration, which aligns the source images with a common target image; seam finding, which determines for each target pixel the source image it should come from; and blending, which smooths transitions over the seams. As described in [1], the seam finding phase attempts to place seams between pixels where the transition between source images is not noticeable. Here, we observe that the most problematic failures of this approach occur when objects are cropped, omitted, or duplicated. We therefore take an object-centered approach to the problem, leveraging recent advances in object detection [2,3,4]. We penalize candidate solutions with this class of error by modifying the energy function used in the seam finding stage. This produces substantially more realistic stitching results on challenging imagery. In addition, these methods can be used to determine when there is non-recoverable occlusion in the input data, and also suggest a simple evaluation metric that can be used to evaluate the output of stitching algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 207,942 |
2011.12542 | Wasserstein k-means with sparse simplex projection | This paper proposes a faster Wasserstein $k$-means algorithm for histogram data by reducing Wasserstein distance computations and exploiting sparse simplex projection. We shrink data samples, centroids, and the ground cost matrix, which leads to considerable reduction of the computations used to solve optimal transport problems without loss of clustering quality. Furthermore, we dynamically reduce the computational complexity by removing lower-valued data samples and harnessing sparse simplex projection while keeping the degradation of clustering quality low. We designate this proposed algorithm as sparse simplex projection based Wasserstein $k$-means, or SSPW $k$-means. Numerical evaluations conducted in comparison with the standard Wasserstein $k$-means algorithm demonstrate the effectiveness of the proposed SSPW $k$-means for real-world datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 208,198 |
1909.11651 | Matching Embeddings for Domain Adaptation | In this work we address the problem of transferring knowledge obtained from a vast annotated source domain to a sparsely labeled target domain. We propose Adversarial Variational Domain Adaptation (AVDA), a semi-supervised domain adaptation method based on deep variational embedded representations. We use approximate inference and domain adversarial methods to map samples from source and target domains into an aligned class-dependent embedding defined as a Gaussian Mixture Model. AVDA works as a classifier and considers a generative model that helps this classification. We used the digits dataset for experimentation. Our results show that in a semi-supervised few-shot scenario our model outperforms previous methods in most of the adaptation tasks, even when using fewer labeled samples per class in the target domain. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 146,876 |
1504.03777 | Near-Optimal Hybrid Processing for Massive MIMO Systems via Matrix
Decomposition | For the practical implementation of massive multiple-input multiple-output (MIMO) systems, the hybrid processing (precoding/combining) structure is promising for reducing the high cost incurred by the large number of RF chains in the traditional processing structure. The hybrid processing is performed through low-dimensional digital baseband processing combined with analog RF processing enabled by phase shifters. We propose to design hybrid RF and baseband precoders/combiners for multi-stream transmission in point-to-point massive MIMO systems, by directly decomposing the pre-designed unconstrained digital precoder/combiner of a large dimension. The constant amplitude constraint of analog RF processing renders the matrix decomposition problem non-convex. Based on an alternate optimization technique, the non-convex matrix decomposition problem can be decoupled into a series of convex sub-problems and effectively solved by restricting the phase increment of each entry in the RF precoder/combiner within a small vicinity of its preceding iterate. A singular value decomposition based technique is proposed to secure an initial point sufficiently close to the global solution of the original non-convex problem. Through simulation, the convergence of the alternate optimization for such a matrix decomposition based hybrid processing (MD-HP) scheme is examined, and the performance of the MD-HP scheme is demonstrated to be near-optimal. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 42,068 |
2409.01635 | PMLBmini: A Tabular Classification Benchmark Suite for Data-Scarce
Applications | In practice, we are often faced with small-sized tabular data. However, current tabular benchmarks are not geared towards data-scarce applications, making it very difficult to derive meaningful conclusions from empirical comparisons. We introduce PMLBmini, a tabular benchmark suite of 44 binary classification datasets with sample sizes $\leq$ 500. We use our suite to thoroughly evaluate current automated machine learning (AutoML) frameworks, off-the-shelf tabular deep neural networks, as well as classical linear models in the low-data regime. Our analysis reveals that state-of-the-art AutoML and deep learning approaches often fail to appreciably outperform even a simple logistic regression baseline, but we also identify scenarios where AutoML and deep learning methods are indeed reasonable to apply. Our benchmark suite, available on https://github.com/RicardoKnauer/TabMini , allows researchers and practitioners to analyze their own methods and challenge their data efficiency. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 485,420 |
2310.03125 | Shielding the Unseen: Privacy Protection through Poisoning NeRF with
Spatial Deformation | In this paper, we introduce an innovative method of safeguarding user privacy against the generative capabilities of Neural Radiance Fields (NeRF) models. Our novel poisoning attack method induces changes to observed views that are imperceptible to the human eye, yet potent enough to disrupt NeRF's ability to accurately reconstruct a 3D scene. To achieve this, we devise a bi-level optimization algorithm incorporating a Projected Gradient Descent (PGD)-based spatial deformation. We extensively test our approach on two common NeRF benchmark datasets consisting of 29 real-world scenes with high-quality images. Our results compellingly demonstrate that our privacy-preserving method significantly impairs NeRF's performance across these benchmark datasets. Additionally, we show that our method is adaptable and versatile, functioning across various perturbation strengths and NeRF architectures. This work offers valuable insights into NeRF's vulnerabilities and emphasizes the need to account for such potential privacy risks when developing robust 3D scene reconstruction algorithms. Our study contributes to the larger conversation surrounding responsible AI and generative machine learning, aiming to protect user privacy and respect creative ownership in the digital age. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 397,143 |
2307.07057 | Leveraging Pretrained ASR Encoders for Effective and Efficient
End-to-End Speech Intent Classification and Slot Filling | We study speech intent classification and slot filling (SICSF) by proposing to use an encoder pretrained on speech recognition (ASR) to initialize an end-to-end (E2E) Conformer-Transformer model, which achieves the new state-of-the-art results on the SLURP dataset, with 90.14% intent accuracy and 82.27% SLURP-F1. We compare our model with encoders pretrained on self-supervised learning (SSL), and show that ASR pretraining is much more effective than SSL for SICSF. To explore parameter efficiency, we freeze the encoder and add Adapter modules, and show that parameter efficiency is only achievable with an ASR-pretrained encoder, while the SSL encoder needs full finetuning to achieve comparable results. In addition, we provide an in-depth comparison on end-to-end models versus cascading models (ASR+NLU), and show that E2E models are better than cascaded models unless an oracle ASR model is provided. Last but not least, our model is the first E2E model that achieves the same performance as cascading models with oracle ASR. Code, checkpoints and configs are available. | false | false | true | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 379,266 |
2105.11418 | Cost-Accuracy Aware Adaptive Labeling for Active Learning | Conventional active learning algorithms assume a single labeler that produces noiseless labels at a given, fixed cost, and aim to achieve the best generalization performance for a given classifier under a budget constraint. However, in many real settings, different labelers have different labeling costs and can yield different labeling accuracies. Moreover, a given labeler may exhibit different labeling accuracies for different instances. This setting can be referred to as active learning with diverse labelers of varying costs and accuracies, and it arises in many important real settings. It is therefore beneficial to understand how to effectively trade off labeling accuracy for different instances, labeling costs, and the informativeness of training instances, so as to achieve the best generalization performance at the lowest labeling cost. In this paper, we propose a new algorithm for selecting instances and labelers (with their corresponding costs and labeling accuracies) that employs a generalization bound for learning with label noise to select informative instances and labelers, so as to achieve higher generalization accuracy at a lower cost. Our proposed algorithm demonstrates state-of-the-art performance on five UCI datasets and a real crowdsourcing dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 236,696 |
1710.01504 | Discourse Structure in Machine Translation Evaluation | In this article, we explore the potential of using sentence-level discourse structure for machine translation evaluation. We first design discourse-aware similarity measures, which use all-subtree kernels to compare discourse parse trees in accordance with the Rhetorical Structure Theory (RST). Then, we show that a simple linear combination with these measures can help improve various existing machine translation evaluation metrics regarding correlation with human judgments both at the segment- and at the system-level. This suggests that discourse information is complementary to the information used by many of the existing evaluation metrics, and thus it could be taken into account when developing richer evaluation metrics, such as the WMT-14 winning combined metric DiscoTKparty. We also provide a detailed analysis of the relevance of various discourse elements and relations from the RST parse trees for machine translation evaluation. In particular we show that: (i) all aspects of the RST tree are relevant, (ii) nuclearity is more useful than relation type, and (iii) the similarity of the translation RST tree to the reference tree is positively correlated with translation quality. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 82,030 |
2010.13073 | Fast and Accurate Light Field Saliency Detection through Deep Encoding | Light field saliency detection -- important due to its utility in many vision tasks -- still lacks speed and can improve in accuracy. Because existing approaches formulate saliency detection in light fields as a segmentation task or a memorizing task, they consume unnecessarily large amounts of computational resources for training and have longer execution times for testing. We solve this by aggressively reducing the large light field images to a much smaller three-channel feature map appropriate for saliency detection using an RGB image saliency detector with attention mechanisms. We achieve this by introducing a novel convolutional neural network based feature extraction and encoding module. Our saliency detector takes $0.4$ s to process a light field of size $9\times9\times512\times375$ on a CPU and is significantly faster than state-of-the-art light field saliency detectors, with better or comparable accuracy. Furthermore, the model size of our architecture is significantly smaller than that of state-of-the-art light field saliency detectors. Our work shows that extracting features from light fields through aggressive size reduction and the attention mechanism results in a faster and more accurate light field saliency detector, enabling near real-time light field processing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 202,994 |
2312.08800 | Evaluating Large Language Models for Health-related Queries with
Presuppositions | As corporations rush to integrate large language models (LLMs) to their search offerings, it is critical that they provide factually accurate information that is robust to any presuppositions that a user may express. In this work, we introduce UPHILL, a dataset consisting of health-related queries with varying degrees of presuppositions. Using UPHILL, we evaluate the factual accuracy and consistency of InstructGPT, ChatGPT, and BingChat models. We find that while model responses rarely disagree with true health claims (posed as questions), they often fail to challenge false claims: responses from InstructGPT agree with 32% of the false claims, ChatGPT 26% and BingChat 23%. As we increase the extent of presupposition in input queries, the responses from InstructGPT and ChatGPT agree with the claim considerably more often, regardless of its veracity. Responses from BingChat, which rely on retrieved webpages, are not as susceptible. Given the moderate factual accuracy, and the inability of models to consistently correct false assumptions, our work calls for a careful assessment of current LLMs for use in high-stakes scenarios. | true | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 415,453 |
1403.5142 | Interactive Debugging of ASP Programs | Broad application of answer set programming (ASP) for declarative problem solving requires the development of tools supporting the coding process. Program debugging is one of the crucial activities within this process. Recently suggested ASP debugging approaches allow efficient computation of possible explanations of a fault. However, even for a small program a debugger might return a large number of possible explanations, and the correct one must be selected manually. In this paper we present an interactive query-based ASP debugging method which extends previous approaches and finds a preferred explanation by means of observations. The system queries a programmer whether a set of ground atoms must be true in all (cautiously) or some (bravely) answer sets of the program. Since some queries can be more informative than others, we discuss query selection strategies which, given the user's preferences for an explanation, can find the best query -- that is, the query whose answer reduces the overall number of queries required to identify a preferred explanation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,701 |
2205.01823 | Symmetry and Uncertainty-Aware Object SLAM for 6DoF Object Pose
Estimation | We propose a keypoint-based object-level SLAM framework that can provide globally consistent 6DoF pose estimates for symmetric and asymmetric objects alike. To the best of our knowledge, our system is among the first to utilize the camera pose information from SLAM to provide prior knowledge for tracking keypoints on symmetric objects -- ensuring that new measurements are consistent with the current 3D scene. Moreover, our semantic keypoint network is trained to predict the Gaussian covariance for the keypoints that captures the true error of the prediction, and thus is not only useful as a weight for the residuals in the system's optimization problems, but also as a means to detect harmful statistical outliers without choosing a manual threshold. Experiments show that our method provides competitive performance to the state of the art in 6DoF object pose estimation, and at a real-time speed. Our code, pre-trained models, and keypoint labels are available https://github.com/rpng/suo_slam. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 294,730 |
2412.13134 | Practicable Black-box Evasion Attacks on Link Prediction in Dynamic
Graphs -- A Graph Sequential Embedding Method | Link prediction in dynamic graphs (LPDG) has been widely applied to real-world applications such as website recommendation, traffic flow prediction, organizational studies, etc. These models are usually kept local and secure, with only a restricted interactive interface available to the public. Thus, the black-box evasion attack on the LPDG model, where model interactions and data perturbations are restricted, is an essential and meaningful problem in practice. In this paper, we propose the first practicable black-box evasion attack method that achieves effective attacks against the target LPDG model within a limited number of interactions and perturbations. To perform effective attacks under limited perturbations, we develop a graph sequential embedding model to find the desired state embedding of the dynamic graph sequences under a deep reinforcement learning framework. To overcome the scarcity of interactions, we design a multi-environment training pipeline and train our agent on multiple instances by sharing an aggregate interaction buffer. Finally, we evaluate our attack against three advanced LPDG models on three real-world graph datasets of different scales and compare its performance with related methods under the interaction and perturbation constraints. Experimental results show that our attack is both effective and practicable. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 518,172 |
1407.2538 | Learning Deep Structured Models | Many problems in real-world applications involve predicting several random variables which are statistically related. Markov random fields (MRFs) are a great mathematical tool to encode such relationships. The goal of this paper is to combine MRFs with deep learning algorithms to estimate complex representations while taking into account the dependencies between the output random variables. Towards this goal, we propose a training algorithm that is able to learn structured models jointly with deep features that form the MRF potentials. Our approach is efficient as it blends learning and inference and makes use of GPU acceleration. We demonstrate the effectiveness of our algorithm in the tasks of predicting words from noisy images, as well as multi-class classification of Flickr photographs. We show that joint learning of the deep features and the MRF parameters results in significant performance gains. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 34,536 |
1904.06972 | Efficient Feature Selection of Power Quality Events using Two
Dimensional (2D) Particle Swarms | A novel two-dimensional (2D) learning framework has been proposed to address the feature selection problem in Power Quality (PQ) events. Unlike the existing feature selection approaches, the proposed 2D learning explicitly incorporates the information about the subset cardinality (i.e., the number of features) as an additional learning dimension to effectively guide the search process. The efficacy of this approach has been demonstrated considering fourteen distinct classes of PQ events which conform to the IEEE Standard 1159. The search performance of the 2D learning approach has been compared to the other six well-known feature selection wrappers by considering two induction algorithms: Naive Bayes (NB) and k-Nearest Neighbors (k-NN). Further, the robustness of the selected/reduced feature subsets has been investigated considering seven different levels of noise. The results of this investigation convincingly demonstrate that the proposed 2D learning can identify significantly better and robust feature subsets for PQ events. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 127,681 |
1605.01855 | Resource allocation using metaheuristic search | This research is focused on solving problems in the area of software project management using metaheuristic search algorithms and as such is research in the field of search based software engineering. The main aim of this research is to evaluate the performance of different metaheuristic search techniques in resource allocation and scheduling problems that would be typical of software development projects. This paper reports a set of experiments which evaluate the performance of three algorithms, namely simulated annealing, tabu search and genetic algorithms. The experimental results indicate that all of the metaheuristics search techniques can be used to solve problems in resource allocation and scheduling within a software project. Finally, a comparative analysis suggests that overall the genetic algorithm had performed better than simulated annealing and tabu search. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 55,542 |
2311.07600 | Polarimetric PatchMatch Multi-View Stereo | PatchMatch Multi-View Stereo (PatchMatch MVS) is one of the popular MVS approaches, owing to its balanced accuracy and efficiency. In this paper, we propose Polarimetric PatchMatch Multi-View Stereo (PolarPMS), the first method to exploit polarization cues in PatchMatch MVS. The key to PatchMatch MVS is to generate depth and normal hypotheses, which form local 3D planes and slanted stereo matching windows, and to efficiently search for the best hypothesis based on the consistency among multi-view images. In addition to standard photometric consistency, our PolarPMS evaluates polarimetric consistency to assess the validity of a depth and normal hypothesis, motivated by the physical property that polarimetric information is related to the object's surface normal. Experimental results demonstrate that our PolarPMS can improve the accuracy and completeness of reconstructed 3D models, especially for texture-less surfaces, compared with state-of-the-art PatchMatch MVS methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 407,401 |
2307.13101 | Contrastive Example-Based Control | While many real-world problems might benefit from reinforcement learning, these problems rarely fit into the MDP mold: interacting with the environment is often expensive and specifying reward functions is challenging. Motivated by these challenges, prior work has developed data-driven approaches that learn entirely from samples from the transition dynamics and examples of high-return states. These methods typically learn a reward function from high-return states, use that reward function to label the transitions, and then apply an offline RL algorithm to these transitions. While these methods can achieve good results on many tasks, they can be complex, often requiring regularization and temporal difference updates. In this paper, we propose a method for offline, example-based control that learns an implicit model of multi-step transitions, rather than a reward function. We show that this implicit model can represent the Q-values for the example-based control problem. Across a range of state-based and image-based offline control tasks, our method outperforms baselines that use learned reward functions; additional experiments demonstrate improved robustness and scaling with dataset size. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 381,472 |
1511.04145 | A Continuous-time Mutually-Exciting Point Process Framework for
Prioritizing Events in Social Media | The overwhelming amount and rate of information update in online social media is making it increasingly difficult for users to allocate their attention to their topics of interest, so there is a strong need for prioritizing news feeds. The attractiveness of a post to a user depends on many complex contextual and temporal features of the post. For instance, the contents of the post, the responsiveness of a third user, and the age of the post may all have an impact. So far, these static and dynamic features have not been incorporated in a unified framework to tackle the post prioritization problem. In this paper, we propose a novel approach for prioritizing posts based on a feature-modulated multi-dimensional point process. Our model is able to simultaneously capture textual and sentiment features, and temporal features such as self-excitation, mutual excitation, and the bursty nature of social interaction. As an evaluation, we also curated a real-world conversational benchmark dataset crawled from Facebook. In our experiments, we demonstrate that our algorithm achieves state-of-the-art performance in terms of analyzing, predicting, and prioritizing events. In terms of the interpretability of our method, we observe that features indicating individual user profiles and linguistic characteristics of the events work best for prediction and prioritization of new events. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 48,851 |
2301.12057 | Object Preserving Siamese Network for Single Object Tracking on Point
Clouds | The object is clearly the key factor in the 3D single object tracking (SOT) task. However, previous Siamese-based trackers overlook the negative effects brought by randomly dropped object points during backbone sampling, which hinder trackers from predicting accurate bounding boxes (BBoxes). Exploring an approach that maximizes the preservation of object points and their object-aware features is of particular significance. Motivated by this, we propose an Object Preserving Siamese Network (OPSNet), which can significantly maintain object integrity and boost tracking performance. Firstly, the object highlighting module enhances the object-aware features and extracts discriminative features from the template and search area. Then, the object-preserved sampling selects object candidates to obtain object-preserved search area seeds and drops the background points that contribute less to tracking. Finally, the object localization network precisely locates 3D BBoxes based on the object-preserved search area seeds. Extensive experiments demonstrate that our method outperforms the state of the art (9.4% and 2.5% success gains on KITTI and the Waymo Open Dataset, respectively). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 342,377 |
1902.10339 | How Large a Vocabulary Does Text Classification Need? A Variational
Approach to Vocabulary Selection | With the rapid development of deep learning, deep neural networks have been widely adopted in many real-life natural language applications. Under deep neural networks, a pre-defined vocabulary is required to vectorize text inputs. The canonical approach to selecting the pre-defined vocabulary is based on word frequency, where a threshold is chosen to cut off the long tail of the distribution. However, we observed that such a simple approach could easily lead to under-sized or over-sized vocabularies. Therefore, we are interested in understanding how the end-task classification accuracy relates to the vocabulary size and what the minimum required vocabulary size is to achieve a specific performance. In this paper, we provide a more sophisticated variational vocabulary dropout (VVD) method based on variational dropout to perform vocabulary selection, which can intelligently select the subset of the vocabulary needed to achieve the required performance. To evaluate different algorithms on the newly proposed vocabulary selection problem, we propose two new metrics: Area Under Accuracy-Vocab Curve and Vocab Size under X\% Accuracy Drop. Through extensive experiments on various NLP classification tasks, our variational framework is shown to significantly outperform the frequency-based and other selection baselines on these metrics. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 122,653 |
2401.11471 | LR-CNN: Lightweight Row-centric Convolutional Neural Network Training
for Memory Reduction | In the last decade, Convolutional Neural Networks with multi-layer architectures have advanced rapidly. However, training such complex networks is very space-consuming, since a lot of intermediate data must be preserved across layers, especially when processing high-dimension inputs with a big batch size. That poses great challenges to the limited memory capacity of current accelerators (e.g., GPUs). Existing efforts mitigate this bottleneck with external auxiliary solutions that incur additional hardware costs, and internal modifications that risk an accuracy penalty. In contrast, our analysis reveals that computations intra- and inter-layer exhibit weak spatial-temporal dependency and even complete independence. That inspires us to break the traditional layer-by-layer (column) dataflow rule: operations are instead re-organized into rows throughout all convolution layers. This lightweight design allows a majority of intermediate data to be removed without any loss of accuracy. We particularly study the weak dependency between two consecutive rows. For the resulting skewed memory consumption, we give two solutions with different favored scenarios. Evaluations on two representative networks confirm the effectiveness. We also validate that our middle dataflow optimization can be smoothly adopted by existing works for better memory reduction. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 423,018 |
2312.00252 | PyNeRF: Pyramidal Neural Radiance Fields | Neural Radiance Fields (NeRFs) can be dramatically accelerated by spatial grid representations. However, they do not explicitly reason about scale and so introduce aliasing artifacts when reconstructing scenes captured at different camera distances. Mip-NeRF and its extensions propose scale-aware renderers that project volumetric frustums rather than point samples but such approaches rely on positional encodings that are not readily compatible with grid methods. We propose a simple modification to grid-based models by training model heads at different spatial grid resolutions. At render time, we simply use coarser grids to render samples that cover larger volumes. Our method can be easily applied to existing accelerated NeRF methods and significantly improves rendering quality (reducing error rates by 20-90% across synthetic and unbounded real-world scenes) while incurring minimal performance overhead (as each model head is quick to evaluate). Compared to Mip-NeRF, we reduce error rates by 20% while training over 60x faster. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 411,974 |
2502.14112 | To Stand on the Shoulders of Giants: Should We Protect Initial
Discoveries in Multi-Agent Exploration? | Exploring new ideas is a fundamental aspect of research and development (R\&D), which often occurs in competitive environments. Most ideas are subsequent, i.e. one idea today leads to more ideas tomorrow. According to one approach, the best way to encourage exploration is by granting protection on discoveries to the first innovator. Correspondingly, only the one who made the first discovery can use the new knowledge and benefit from subsequent discoveries, which in turn should increase the initial motivation to explore. An alternative approach to promote exploration favors the \emph{sharing of knowledge} from discoveries among researchers allowing explorers to use each others' discoveries to develop further knowledge, as in the open-source community. With no protection, all explorers have access to all existing discoveries and new directions are explored faster. We present a game theoretic analysis of an abstract research-and-application game which clarifies the expected advantages and disadvantages of the two approaches under full information. We then compare the theoretical predictions with the observed behavior of actual players in the lab who operate under partial information conditions in both worlds. Our main experimental finding is that the no protection approach leads to \emph{more} investment efforts overall, in contrast to theoretical prediction and common economic wisdom, but in line with a familiar cognitive bias known as `underweighting of rare events'. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 535,662 |
2312.12937 | Robust Loss Functions for Training Decision Trees with Noisy Labels | We consider training decision trees using noisily labeled data, focusing on loss functions that can lead to robust learning algorithms. Our contributions are threefold. First, we offer novel theoretical insights on the robustness of many existing loss functions in the context of decision tree learning. We show that some of the losses belong to a class of what we call conservative losses, and the conservative losses lead to an early stopping behavior during training and noise-tolerant predictions during testing. Second, we introduce a framework for constructing robust loss functions, called distribution losses. These losses apply percentile-based penalties based on an assumed margin distribution, and they naturally allow adapting to different noise rates via a robustness parameter. In particular, we introduce a new loss called the negative exponential loss, which leads to an efficient greedy impurity-reduction learning algorithm. Lastly, our experiments on multiple datasets and noise settings validate our theoretical insight and the effectiveness of our adaptive negative exponential loss. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 417,161 |
2502.09900 | Thompson Sampling for Repeated Newsvendor | In this paper, we investigate the performance of Thompson Sampling (TS) for online learning with censored feedback, focusing primarily on the classic repeated newsvendor model--a foundational framework in inventory management--and demonstrating how our techniques can be naturally extended to a broader class of problems. We model demand using a Weibull distribution and initialize TS with a Gamma prior to dynamically adjust order quantities. Our analysis establishes optimal (up to logarithmic factors) frequentist regret bounds for TS without imposing restrictive prior assumptions. More importantly, it yields novel and highly interpretable insights on how TS addresses the exploration-exploitation trade-off in the repeated newsvendor setting. Specifically, our results show that when past order quantities are sufficiently large to overcome censoring, TS accurately estimates the unknown demand parameters, leading to near-optimal ordering decisions. Conversely, when past orders are relatively small, TS automatically increases future order quantities to gather additional demand information. Extensive numerical simulations further demonstrate that TS outperforms more conservative and widely-used approaches such as online convex optimization, upper confidence bounds, and myopic Bayesian dynamic programming. This study also lays the foundation for exploring general online learning problems with censored feedback. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 533,652 |
2408.05888 | Integrative Approaches in Cybersecurity and AI | In recent years, the convergence of cybersecurity, artificial intelligence (AI), and data management has emerged as a critical area of research, driven by the increasing complexity and interdependence of modern technological ecosystems. This paper provides a comprehensive review and analysis of integrative approaches that harness AI techniques to enhance cybersecurity frameworks and optimize data management practices. By exploring the synergies between these domains, we identify key trends, challenges, and future directions that hold the potential to revolutionize the way organizations protect, analyze, and leverage their data. Our findings highlight the necessity of cross-disciplinary strategies that incorporate AI-driven automation, real-time threat detection, and advanced data analytics to build more resilient and adaptive security architectures. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 479,985 |
2401.15773 | Evaluation of k-means time series clustering based on z-normalization and NP-Free | Despite the widespread use of k-means time series clustering in various domains, there exists a gap in the literature regarding its comprehensive evaluation with different time series normalization approaches. This paper seeks to fill this gap by conducting a thorough performance evaluation of k-means time series clustering on real-world open-source time series datasets. The evaluation focuses on two distinct normalization techniques: z-normalization and NP-Free. The former is one of the most commonly used normalization approaches for time series. The latter is a real-time time series representation approach, which can also serve as a time series normalization approach. The primary objective of this paper is to assess the impact of these two normalization techniques on k-means time series clustering in terms of its clustering quality. The experiments employ the silhouette score, a well-established metric for evaluating the quality of clusters in a dataset. By systematically investigating the performance of k-means time series clustering with these two normalization techniques, this paper addresses the current gap in k-means time series clustering evaluation and contributes valuable insights to the development of time series clustering. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 424,580
1812.07324 | Predicting user intent from search queries using both CNNs and RNNs | Predicting user behaviour on a website is a difficult task, which requires the integration of multiple sources of information, such as geo-location, user profile or web surfing history. In this paper we tackle the problem of predicting the user intent, based on the queries that were used to access a certain webpage. We make no additional assumptions, such as domain detection, device used or location, and only use the word information embedded in the given query. In order to build competitive classifiers, we label a small fraction of the EDI query intent prediction dataset \cite{edi-challenge-dataset}, which is used as ground truth. Then, using various rule-based approaches, we automatically label the rest of the dataset, train the classifiers and evaluate the quality of the automatic labeling on the ground truth dataset. We used both recurrent and convolutional networks as the models, while representing the words in the query with multiple embedding methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 116,792 |
2309.05784 | Grey-box Bayesian Optimization for Sensor Placement in Assisted Living Environments | Optimizing the configuration and placement of sensors is crucial for reliable fall detection, indoor localization, and activity recognition in assisted living spaces. We propose a novel, sample-efficient approach to find a high-quality sensor placement in an arbitrary indoor space based on grey-box Bayesian optimization and simulation-based evaluation. Our key technical contribution lies in capturing domain-specific knowledge about the spatial distribution of activities and incorporating it into the iterative selection of query points in Bayesian optimization. Considering two simulated indoor environments and a real-world dataset containing human activities and sensor triggers, we show that our proposed method performs better compared to state-of-the-art black-box optimization techniques in identifying high-quality sensor placements, leading to accurate activity recognition in terms of F1-score, while also requiring a significantly lower (51.3% on average) number of expensive function queries. | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 391,184
2308.12845 | Implicit Obstacle Map-driven Indoor Navigation Model for Robust Obstacle Avoidance | Robust obstacle avoidance is one of the critical steps for successful goal-driven indoor navigation tasks. Due to obstacles missing from the visual image and possible missed detections, visual image-based obstacle avoidance techniques still suffer from unsatisfactory robustness. To mitigate this, in this paper, we propose a novel implicit obstacle map-driven indoor navigation framework for robust obstacle avoidance, where an implicit obstacle map is learned based on historical trial-and-error experience rather than the visual image. In order to further improve navigation efficiency, a non-local target memory aggregation module is designed to leverage a non-local network to model the intrinsic relationship between the target semantic and the target orientation clues during the navigation process, so as to mine the most target-correlated object clues for the navigation decision. Extensive experimental results on the AI2-Thor and RoboTHOR benchmarks verify the excellent obstacle avoidance and navigation efficiency of our proposed method. The core source code is available at https://github.com/xwaiyy123/object-navigation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 387,691
1205.3231 | Survey on Distributed Data Mining in P2P Networks | The exponential increase in the availability of digital data and the necessity to process it in business and scientific fields has forced upon us the need to analyze and mine useful knowledge from it. Traditionally, data mining has used a data warehousing model of gathering all data into a central site and then running an algorithm upon that data. Such a centralized approach is fundamentally inappropriate for many reasons, such as the huge amount of data, the infeasibility of centralizing data stored at multiple sites, bandwidth limitations, and privacy concerns. To solve these problems, Distributed Data Mining (DDM) has emerged as a hot research area. Distributed data mining pays careful attention to using the distributed resources of data, computing, communication, and human factors in a near-optimal fashion. DDM is gaining attention in peer-to-peer (P2P) systems, which are emerging as a solution of choice for applications such as file sharing, collaborative movie and song scoring, electronic commerce, and surveillance using sensor networks. The main intention of this paper is to provide an overview of DDM and P2P data mining. The paper discusses the need for DDM, a taxonomy of DDM architectures, various DDM approaches, DDM-related work in P2P systems, and issues and challenges in P2P data mining. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 16,008
1502.07977 | R\'enyi generalizations of quantum information measures | Quantum information measures such as the entropy and the mutual information find applications in physics, e.g., as correlation measures. Generalizing such measures based on the R\'enyi entropies is expected to enhance their scope in applications. We prescribe R\'enyi generalizations for any quantum information measure which consists of a linear combination of von Neumann entropies with coefficients chosen from the set {-1,0,1}. As examples, we describe R\'enyi generalizations of the conditional quantum mutual information, some quantum multipartite information measures, and the topological entanglement entropy. Among these, we discuss the various properties of the R\'enyi conditional quantum mutual information and sketch some potential applications. We conjecture that the proposed R\'enyi conditional quantum mutual informations are monotone increasing in the R\'enyi parameter, and we have proofs of this conjecture for some special cases. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 40,631 |
1005.0072 | HyberLoc: Providing Physical Layer Location Privacy in Hybrid Sensor Networks | In many hybrid wireless sensor networks' applications, sensor nodes are deployed in hostile environments where trusted and un-trusted nodes co-exist. In anchor-based hybrid networks, it becomes important to allow trusted nodes to gain full access to the location information transmitted in beacon frames while, at the same time, prevent un-trusted nodes from using this information. The main challenge is that un-trusted nodes can measure the physical signal transmitted from anchor nodes, even if these nodes encrypt their transmission. Using the measured signal strength, un-trusted nodes can still tri-laterate the location of anchor nodes. In this paper, we propose HyberLoc, an algorithm that provides anchor physical layer location privacy in anchor-based hybrid sensor networks. The idea is for anchor nodes to dynamically change their transmission power following a certain probability distribution, degrading the localization accuracy at un-trusted nodes while maintaining high localization accuracy at trusted nodes. Given an average power constraint, our analysis shows that the discretized exponential distribution is the distribution that maximizes location uncertainty at the untrusted nodes. Detailed evaluation through analysis, simulation, and implementation shows that HyberLoc gives trusted nodes up to 3.5 times better localization accuracy as compared to untrusted nodes. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 6,364