id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.18007 | Conditional Score Guidance for Text-Driven Image-to-Image Translation | We present a novel algorithm for text-driven image-to-image translation based on a pretrained text-to-image diffusion model. Our method aims to generate a target image by selectively editing the regions of interest in a source image, defined by a modifying text, while preserving the remaining parts. In contrast to existing techniques that solely rely on a target prompt, we introduce a new score function that additionally considers both the source image and the source text prompt, tailored to address specific translation tasks. To this end, we derive the conditional score function in a principled manner, decomposing it into the standard score and a guiding term for target image generation. For the gradient computation of the guiding term, we assume a Gaussian distribution of the posterior distribution and estimate its mean and variance to adjust the gradient without additional training. In addition, to improve the quality of the conditional score guidance, we incorporate a simple yet effective mixup technique, which combines two cross-attention maps derived from the source and target latents. This strategy is effective for promoting a desirable fusion of the invariant parts in the source image and the edited regions aligned with the target prompt, leading to high-fidelity target image generation. Through comprehensive experiments, we demonstrate that our approach achieves outstanding image-to-image translation performance on various tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 368,823 |
2207.04399 | Horizontal and Vertical Attention in Transformers | Transformers are built upon multi-head scaled dot-product attention and positional encoding, which aim to learn the feature representations and token dependencies. In this work, we focus on enhancing the distinctive representation by learning to augment the feature maps with the self-attention mechanism in Transformers. Specifically, we propose the horizontal attention to re-weight the multi-head output of the scaled dot-product attention before dimensionality reduction, and propose the vertical attention to adaptively re-calibrate channel-wise feature responses by explicitly modelling inter-dependencies among different channels. We demonstrate that the Transformer models equipped with the two attentions have a high generalization capability across different supervised learning tasks, with very minor additional computational overhead. The proposed horizontal and vertical attentions are highly modular, which can be inserted into various Transformer models to further improve the performance. Our code is available in the supplementary material. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 307,189 |
1906.00216 | Robust Learning Under Label Noise With Iterative Noise-Filtering | We consider the problem of training a model under the presence of label noise. Current approaches identify samples with potentially incorrect labels and reduce their influence on the learning process by either assigning lower weights to them or completely removing them from the training set. In the first case the model however still learns from noisy labels; in the latter approach, good training data can be lost. In this paper, we propose an iterative semi-supervised mechanism for robust learning which excludes noisy labels but is still able to learn from the corresponding samples. To this end, we add an unsupervised loss term that also serves as a regularizer against the remaining label noise. We evaluate our approach on common classification tasks with different noise ratios. Our robust models outperform the state-of-the-art methods by a large margin. Especially for very large noise ratios, we achieve up to 20% absolute improvement compared to the previous best model. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 133,309 |
2410.07616 | The Plug-in Approach for Average-Reward and Discounted MDPs: Optimal Sample Complexity Analysis | We study the sample complexity of the plug-in approach for learning $\varepsilon$-optimal policies in average-reward Markov decision processes (MDPs) with a generative model. The plug-in approach constructs a model estimate, then computes an average-reward optimal policy in the estimated model. Despite representing arguably the simplest algorithm for this problem, the plug-in approach has never been theoretically analyzed. Unlike the more well-studied discounted MDP reduction method, the plug-in approach requires no prior problem information or parameter tuning. Our results fill this gap and address the limitations of prior approaches, as we show that the plug-in approach is optimal in several well-studied settings without using prior knowledge. Specifically, it achieves the optimal diameter- and mixing-based sample complexities of $\widetilde{O}\left(SA \frac{D}{\varepsilon^2}\right)$ and $\widetilde{O}\left(SA \frac{\tau_{\mathrm{unif}}}{\varepsilon^2}\right)$, respectively, without knowledge of the diameter $D$ or uniform mixing time $\tau_{\mathrm{unif}}$. We also obtain span-based bounds for the plug-in approach, and complement them with algorithm-specific lower bounds suggesting that they are unimprovable. Our results require novel techniques for analyzing long-horizon problems which may be broadly useful and which also improve results for the discounted plug-in approach, removing effective-horizon-related sample size restrictions and obtaining the first optimal complexity bounds for the full range of sample sizes without reward perturbation. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 496,715 |
2107.06146 | Ontology-Based Process Modelling -- Will we live to see it? | In theory, ontology-based process modelling (OBPM) bears great potential to extend business process management. Many works have studied OBPM and are clear on the potential amenities, such as eliminating ambiguities or enabling advanced reasoning over company processes. However, despite this approval in academia, a widespread industry adoption is still nowhere to be seen. This can be mainly attributed to the fact that it still requires high amounts of manual labour to initially create ontologies and annotations for process models. As long as these problems are not addressed, implementing OBPM seems unfeasible in practice. In this work, we therefore identify requirements needed for a successful implementation of OBPM and assess the current state of research w.r.t. these requirements. Our results indicate that the research progress on means to facilitate OBPM is still alarmingly low and there needs to be urgent work on extending existing approaches. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 246,000 |
2409.05565 | On the Convergence of Sigmoid and tanh Fuzzy General Grey Cognitive Maps | Fuzzy General Grey Cognitive Map (FGGCM) and Fuzzy Grey Cognitive Map (FGCM) are extensions of the Fuzzy Cognitive Map (FCM) in terms of uncertainty. FGGCM allows for the processing of general grey numbers with multiple intervals, enabling FCM to better address uncertain situations. Although the convergence of FCM and FGCM has been discussed in much of the literature, the convergence of FGGCM has not been thoroughly explored. This paper aims to fill this research gap. First, metrics for the general grey number space and its vector space are given and proved using the Minkowski inequality. By utilizing the fact that Cauchy sequences are convergent sequences, the completeness of these two spaces is demonstrated. On this premise, utilizing the Banach fixed point theorem and the Browder-Gohde-Kirk fixed point theorem, combined with Lagrange's mean value theorem and Cauchy's inequality, we deduce the sufficient conditions for FGGCM to converge to a unique fixed point when using tanh and sigmoid functions as activation functions. The sufficient conditions for the kernels and greyness of FGGCM to converge to a unique fixed point are also provided separately. Finally, based on Web Experience and Civil Engineering FCMs, we design corresponding FGGCMs with sigmoid and tanh as activation functions by modifying the weights to general grey numbers. By comparing with the convergence theorems of FCM and FGCM, the effectiveness of the theorems proposed in this paper is verified. It is also demonstrated that the convergence theorems of FCM are special cases of the theorems proposed in this paper. The study of the convergence of FGGCM is of great significance for guiding the learning algorithm of FGGCM, which is needed for designing FGGCMs with specific fixed points, and lays a solid theoretical foundation for the application of FGGCM in fields such as control, prediction, and decision support systems. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 486,821 |
2101.10357 | Regret-Optimal Filtering for Prediction and Estimation | The filtering problem of causally estimating a desired signal from a related observation signal is investigated through the lens of regret optimization. Classical filter designs, such as $\mathcal H_2$ (Kalman) and $\mathcal H_\infty$, minimize the average and worst-case estimation errors, respectively. As a result $\mathcal H_2$ filters are sensitive to inaccuracies in the underlying statistical model, and $\mathcal H_\infty$ filters are overly conservative since they safeguard against the worst-case scenario. We propose instead to minimize the \emph{regret} in order to design filters that perform well in different noise regimes by comparing their performance with that of a clairvoyant filter. More explicitly, we minimize the largest deviation of the squared estimation error of a causal filter from that of a non-causal filter that has access to future observations. In this sense, the regret-optimal filter will have the best competitive performance with respect to the non-causal benchmark filter no matter what the true signal and the observation process are. For the important case of signals that can be described with a time-invariant state-space, we provide an explicit construction for the regret optimal filter in the estimation (causal) and the prediction (strictly-causal) regimes. These solutions are obtained by reducing the regret filtering problem to a Nehari problem, i.e., approximating a non-causal operator by a causal one in spectral norm. The regret-optimal filters bear some resemblance to Kalman and $H_\infty$ filters: they are expressed as state-space models, inherit the finite dimension of the original state-space, and their solutions require solving algebraic Riccati equations. Numerical simulations demonstrate that regret minimization inherently interpolates between the performances of the $H_2$ and $H_\infty$ filters and is thus a viable approach for filter design. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 216,900 |
2010.10334 | DLWIoT: Deep Learning-based Watermarking for Authorized IoT Onboarding | The onboarding of IoT devices by authorized users constitutes both a challenge and a necessity in a world where the number of IoT devices and the tampering attacks against them continuously increase. Commonly used onboarding techniques today include the use of QR codes, pin codes, or serial numbers. These techniques typically do not protect against unauthorized device access -- a QR code is physically printed on the device, while a pin code may be included in the device packaging. As a result, any entity that has physical access to a device can onboard it onto their network and, potentially, tamper with it (e.g., install malware on the device). To address this problem, in this paper, we present a framework, called Deep Learning-based Watermarking for authorized IoT onboarding (DLWIoT), featuring a robust and fully automated image watermarking scheme based on deep neural networks. DLWIoT embeds user credentials into carrier images (e.g., QR codes printed on IoT devices), thus enabling IoT onboarding only by authorized users. Our experimental results demonstrate the feasibility of DLWIoT, indicating that authorized users can onboard IoT devices with DLWIoT within 2.5-3 seconds. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 201,855 |
2010.12188 | Generating Long Financial Report using Conditional Variational Autoencoders with Knowledge Distillation | Automatically generating a financial report from a piece of news is quite a challenging task. Apparently, the difficulty of this task lies in the lack of sufficient background knowledge to effectively generate long financial reports. To address this issue, this paper proposes a conditional variational autoencoder (CVAE) based approach which distills external knowledge from a corpus of news-report data. Particularly, we choose Bi-GRU as the encoder and decoder components of CVAE, and learn the latent variable distribution from the input news. A higher-level latent variable distribution is learnt from a corpus set of news-report data, respectively extracted for each input news, to provide background knowledge to the previously learnt latent variable distribution. Then, a teacher-student network is employed to distill knowledge to refine the output of the decoder component. To evaluate the model performance of the proposed approach, extensive experiments are performed on a public dataset, and two widely adopted evaluation criteria, i.e., BLEU and ROUGE, are chosen in the experiments. The promising experimental results demonstrate that the proposed approach is superior to the other compared methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 202,599 |
2407.01095 | Trajectory Tracking for UAVs: An Interpolating Control Approach | Building on our previous work, this paper investigates the effectiveness of interpolating control (IC) for real-time trajectory tracking. Unlike prior studies that focused on trajectory tracking itself or UAV stabilization control in simulation, we evaluate the performance of a modified extended IC (eIC) controller compared to Model Predictive Control (MPC) through both simulated and laboratory experiments with a remotely controlled UAV. The evaluation focuses on the computational efficiency and control quality of real-time UAV trajectory tracking compared to previous IC applications. The results demonstrate that the eIC controller achieves competitive performance compared to MPC while significantly reducing computational complexity, making it a promising alternative for resource-constrained platforms. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 469,151 |
2111.01058 | Learning to Assimilate in Chaotic Dynamical Systems | The accuracy of simulation-based forecasting in chaotic systems is heavily dependent on high-quality estimates of the system state at the time the forecast is initialized. Data assimilation methods are used to infer these initial conditions by systematically combining noisy, incomplete observations and numerical models of system dynamics to produce effective estimation schemes. We introduce amortized assimilation, a framework for learning to assimilate in dynamical systems from sequences of noisy observations with no need for ground truth data. We motivate the framework by extending powerful results from self-supervised denoising to the dynamical systems setting through the use of differentiable simulation. Experimental results across several benchmark systems highlight the improved effectiveness of our approach over widely-used data assimilation methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 264,446 |
1809.01678 | Measures of Cluster Informativeness for Medical Evidence Aggregation and Dissemination | The largest collection of medical evidence in the world is PubMed. However, the significant barrier in accessing and extracting information is information organization. A factor that contributes towards this barrier is managing medical controlled vocabularies that allow us to systematically and consistently organize, index, and search biomedical literature. Additionally, from users' perspective, to ultimately improve access, visualization is likely to play a powerful role. There is a strong link between information organization and information visualization, as many powerful visualizations depend on clustering methods. To improve visualization, therefore, one has to develop concrete and scalable measures for vocabularies used in indexing and their impact on document clustering. The focus of this study is on the development and evaluation of clustering methods. The paper concludes with demonstration of downstream network visualizations and their impact on discovering potentially valuable and latent genetic and molecular associations. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 106,864 |
2410.00896 | Outage-Constrained Sum Secrecy Rate Maximization for STAR-RIS with Energy-Harvesting Eavesdroppers | This article proposes a novel strategy for enhancing secure wireless communication through the use of a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS) in a multiple-input single-output system. In the presence of energy-harvesting eavesdroppers, the study aims to maximize the secrecy rate while adhering to strict energy harvesting constraints. By dynamically manipulating the wireless environment with the STAR-RIS, the research examines the balance between harvested energy and secrecy rate under two key protocols: energy splitting and mode selection. The study addresses both imperfect and perfect channel state information (CSI) and formulates a complex non-convex optimization problem, which is solved using a penalty concave convex procedure combined with an alternating optimization algorithm. The method optimizes beamforming and STAR-RIS transmission and reflection coefficients to achieve an optimal balance between secure communication and energy harvesting constraints. Numerical simulations show that the proposed approach is effective, even with imperfect CSI, and outperforms conventional RIS methods in terms of robust security and energy performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 493,528 |
2107.02279 | Design Smells in Deep Learning Programs: An Empirical Study | Nowadays, we are witnessing an increasing adoption of Deep Learning (DL) based software systems in many industries. Designing a DL program requires constructing a deep neural network (DNN) and then training it on a dataset. This process requires that developers make multiple architectural (e.g., type, size, number, and order of layers) and configuration (e.g., optimizer, regularization methods, and activation functions) choices that affect the quality of the DL models, and consequently software quality. An under-specified or poorly-designed DL model may train successfully but is likely to perform poorly when deployed in production. Design smells in DL programs are poor design and/or configuration decisions taken during the development of DL components that are likely to have a negative impact on the performance (i.e., prediction accuracy) and thus the quality of DL based software systems. In this paper, we present a catalogue of 8 design smells for a popular DL architecture, namely deep Feedforward Neural Networks, which is widely employed in industrial applications. The design smells were identified through a review of the existing literature on DL design and a manual inspection of 659 DL programs with performance issues and design inefficiencies. The smells are specified by describing their context, consequences, and recommended refactorings. To provide empirical evidence on the relevance and perceived impact of the proposed design smells, we conducted a survey with 81 DL developers. In general, the developers perceived the proposed design smells as reflective of design or implementation problems, with agreement levels varying between 47% and 68%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 244,749 |
1604.05813 | Sherlock: Sparse Hierarchical Embeddings for Visually-aware One-class Collaborative Filtering | Building successful recommender systems requires uncovering the underlying dimensions that describe the properties of items as well as users' preferences toward them. In domains like clothing recommendation, explaining users' preferences requires modeling the visual appearance of the items in question. This makes recommendation especially challenging, due to both the complexity and subtlety of people's 'visual preferences,' as well as the scale and dimensionality of the data and features involved. Ultimately, a successful model should be capable of capturing considerable variance across different categories and styles, while still modeling the commonalities explained by `global' structures in order to combat the sparsity (e.g. cold-start), variability, and scale of real-world datasets. Here, we address these challenges by building such structures to model the visual dimensions across different product categories. With a novel hierarchical embedding architecture, our method accounts for both high-level (colorfulness, darkness, etc.) and subtle (e.g. casualness) visual characteristics simultaneously. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 54,864 |
1808.08447 | Deep Emotion: A Computational Model of Emotion Using Deep Neural Networks | Emotions are very important for human intelligence. For example, emotions are closely related to the appraisal of the internal bodily state and external stimuli. This helps us to respond quickly to the environment. Another important perspective in human intelligence is the role of emotions in decision-making. Moreover, the social aspect of emotions is also very important. Therefore, if the mechanism of emotions were elucidated, we could advance toward the essential understanding of our natural intelligence. In this study, a model of emotions is proposed to elucidate the mechanism of emotions through the computational model. Furthermore, from the viewpoint of partner robots, the model of emotions may help us to build robots that can have empathy for humans. To understand and sympathize with people's feelings, the robots need to have their own emotions. This may allow robots to be accepted in human society. The proposed model is implemented using deep neural networks consisting of three modules, which interact with each other. Simulation results reveal that the proposed model exhibits reasonable behavior as the basic mechanism of emotion. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 105,952 |
2110.05841 | Relative Molecule Self-Attention Transformer | Self-supervised learning holds promise to revolutionize molecule property prediction - a central task to drug discovery and many more industries - by enabling data efficient learning from scarce experimental data. Despite significant progress, non-pretrained methods can still be competitive in certain settings. We reason that architecture might be a key bottleneck. In particular, enriching the backbone architecture with domain-specific inductive biases has been key for the success of self-supervised learning in other domains. In this spirit, we methodologically explore the design space of the self-attention mechanism tailored to molecular data. We identify a novel variant of self-attention adapted to processing molecules, inspired by the relative self-attention layer, which involves fusing embedded graph and distance relationships between atoms. Our main contribution is Relative Molecule Attention Transformer (R-MAT): a novel Transformer-based model based on the developed self-attention layer that achieves state-of-the-art or very competitive results across a wide range of molecule property prediction tasks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 260,421 |
1911.01721 | Adversarial dictionary learning for a robust analysis and modelling of spontaneous neuronal activity | The field of neuroscience is experiencing rapid growth in the complexity and quantity of the recorded neural activity, allowing us unprecedented access to its dynamics in different brain areas. The objective of this work is to discover, directly from the experimental data, rich and comprehensible models for brain function that will be concurrently robust to noise. Considering this task from the perspective of dimensionality reduction, we develop an innovative, robust-to-noise dictionary learning framework based on adversarial training methods for the identification of patterns of synchronous firing activity as well as within a time lag. We employ real-world binary datasets describing the spontaneous neuronal activity of laboratory mice over time, and we aim at their efficient low-dimensional representation. The results on the classification accuracy for the discrimination between the clean and the adversarial-noisy activation patterns obtained by an SVM classifier highlight the efficacy of the proposed scheme compared to other methods, and the visualization of the dictionary's distribution demonstrates the multifarious information that we obtain from it. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 152,188 |
2408.13289 | Optimal Dispatch Strategy for a Multi-microgrid Cooperative Alliance Using a Two-Stage Pricing Mechanism | To coordinate resources among multi-level stakeholders and enhance the integration of electric vehicles (EVs) into multi-microgrids, this study proposes an optimal dispatch strategy within a multi-microgrid cooperative alliance using a nuanced two-stage pricing mechanism. Initially, the strategy assesses electric energy interactions between microgrids and distribution networks to establish a foundation for collaborative scheduling. The two-stage pricing mechanism initiates with a leader-follower game, wherein the microgrid operator acts as the leader and users as followers. Subsequently, it adjusts EV tariffs based on the game's equilibrium, taking into account factors such as battery degradation and travel needs to optimize EVs' electricity consumption. Furthermore, a bi-level optimization model refines power interactions and pricing strategies across the network, significantly enhancing demand response capabilities and economic outcomes. Simulation results demonstrate that this strategy not only increases renewable energy consumption but also reduces energy costs, thereby improving the overall efficiency and sustainability of the system. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 483,090 |
2403.15363 | Cascading Blackout Severity Prediction with Statistically-Augmented Graph Neural Networks | Higher variability in grid conditions, resulting from growing renewable penetration and increased incidence of extreme weather events, has increased the difficulty of screening for scenarios that may lead to catastrophic cascading failures. Traditional power-flow-based tools for assessing cascading blackout risk are too slow to properly explore the space of possible failures and load/generation patterns. We add to the growing literature of faster graph-neural-network (GNN)-based techniques, developing two novel techniques for the estimation of blackout magnitude from initial grid conditions. First, we propose several methods for employing an initial classification step to filter out safe "non blackout" scenarios prior to magnitude estimation. Second, using insights from the statistical properties of cascading blackouts, we propose a method for facilitating non-local message passing in our GNN models. We validate these two approaches on a large simulated dataset, and show the potential of both to increase blackout size estimation performance. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 440,510 |
2409.00724 | BUET Multi-disease Heart Sound Dataset: A Comprehensive Auscultation Dataset for Developing Computer-Aided Diagnostic Systems | Cardiac auscultation, an integral tool in diagnosing cardiovascular diseases (CVDs), often relies on the subjective interpretation of clinicians, presenting a limitation in consistency and accuracy. Addressing this, we introduce the BUET Multi-disease Heart Sound (BMD-HS) dataset - a comprehensive and meticulously curated collection of heart sound recordings. This dataset, encompassing 864 recordings across five distinct classes of common heart sounds, represents a broad spectrum of valvular heart diseases, with a focus on diagnostically challenging cases. The standout feature of the BMD-HS dataset is its innovative multi-label annotation system, which captures a diverse range of diseases and unique disease states. This system significantly enhances the dataset's utility for developing advanced machine learning models in automated heart sound classification and diagnosis. By bridging the gap between traditional auscultation practices and contemporary data-driven diagnostic methods, the BMD-HS dataset is poised to revolutionize CVD diagnosis and management, providing an invaluable resource for the advancement of cardiac health research. The dataset is publicly available at this link: https://github.com/mHealthBuet/BMD-HS-Dataset. | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 485,046 |
2008.02457 | Graph Convolutional Networks for Hyperspectral Image Classification | To read the final version, please go to IEEE TGRS on IEEE Xplore. Convolutional neural networks (CNNs) have been attracting increasing attention in hyperspectral (HS) image classification, owing to their ability to capture spatial-spectral feature representations. Nevertheless, their ability in modeling relations between samples remains limited. Beyond the limitations of grid sampling, graph convolutional networks (GCNs) have been recently proposed and successfully applied in irregular (or non-grid) data representation and analysis. In this paper, we thoroughly investigate CNNs and GCNs (qualitatively and quantitatively) in terms of HS image classification. Due to the construction of the adjacency matrix on all the data, traditional GCNs usually suffer from a huge computational cost, particularly in large-scale remote sensing (RS) problems. To this end, we develop a new mini-batch GCN (called miniGCN hereinafter) which allows large-scale GCNs to be trained in a mini-batch fashion. More significantly, our miniGCN is capable of inferring out-of-sample data without re-training networks and improving classification performance. Furthermore, as CNNs and GCNs can extract different types of HS features, an intuitive solution to break the performance bottleneck of a single model is to fuse them. Since miniGCNs can perform batch-wise network training (enabling the combination of CNNs and GCNs), we explore three fusion strategies: additive fusion, element-wise multiplicative fusion, and concatenation fusion to measure the obtained performance gain. Extensive experiments, conducted on three HS datasets, demonstrate the advantages of miniGCNs over GCNs and the superiority of the tested fusion strategies with regard to the single CNN or GCN models. The codes of this work will be available at https://github.com/danfenghong/IEEE_TGRS_GCN for the sake of reproducibility. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 190,623 |
2402.02781 | Dual Knowledge Distillation for Efficient Sound Event Detection | Sound event detection (SED) is essential for recognizing specific sounds and their temporal locations within acoustic signals. This becomes challenging particularly for on-device applications, where computational resources are limited. To address this issue, we introduce a novel framework referred to as dual knowledge distillation for developing efficient SED systems in this work. Our proposed dual knowledge distillation commences with temporal-averaging knowledge distillation (TAKD), utilizing a mean student model derived from the temporal averaging of the student model's parameters. This allows the student model to indirectly learn from a pre-trained teacher model, ensuring a stable knowledge distillation. Subsequently, we introduce embedding-enhanced feature distillation (EEFD), which involves incorporating an embedding distillation layer within the student model to bolster contextual learning. On DCASE 2023 Task 4A public evaluation dataset, our proposed SED system with dual knowledge distillation having merely one-third of the baseline model's parameters, demonstrates superior performance in terms of PSDS1 and PSDS2. This highlights the importance of proposed dual knowledge distillation for compact SED systems, which can be ideal for edge devices. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 426,725 |
1805.09621 | Backpropagation with N-D Vector-Valued Neurons Using Arbitrary Bilinear Products | Vector-valued neural learning has emerged as a promising direction in deep learning recently. Traditionally, training data for neural networks (NNs) are formulated as a vector of scalars; however, its performance may not be optimal since associations among adjacent scalars are not modeled. In this paper, we propose a new vector neural architecture called the Arbitrary BIlinear Product Neural Network (ABIPNN), which processes information as vectors in each neuron, and the feedforward projections are defined using arbitrary bilinear products. Such bilinear products can include circular convolution, seven-dimensional vector product, skew circular convolution, reversed-time circular convolution, or other new products not seen in previous work. As a proof-of-concept, we apply our proposed network to multispectral image denoising and singing voice separation. Experimental results show that ABIPNN gains substantial improvements when compared to conventional NNs, suggesting that associations are learned during training. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 98,460
2310.11597 | The Efficacy of Transformer-based Adversarial Attacks in Security Domains | Today, the security of many domains relies on the use of Machine Learning to detect threats, identify vulnerabilities, and safeguard systems from attacks. Recently, transformer architectures have improved the state-of-the-art performance on a wide range of tasks such as malware detection and network intrusion detection. But, before abandoning current approaches in favor of transformers, it is crucial to understand their properties and implications on cybersecurity applications. In this paper, we evaluate the robustness of transformers to adversarial samples for system defenders (i.e., resiliency to adversarial perturbations generated on different types of architectures) and their adversarial strength for system attackers (i.e., transferability of adversarial samples generated by transformers to other target models). To that effect, we first fine-tune a set of pre-trained transformer, Convolutional Neural Network (CNN), and hybrid (an ensemble of transformer and CNN) models to solve different downstream image-based tasks. Then, we use an attack algorithm to craft 19,367 adversarial examples on each model for each task. The transferability of these adversarial examples is measured by evaluating each set on other models to determine which models offer more adversarial strength, and consequently, more robustness against these attacks. We find that the adversarial examples crafted on transformers offer the highest transferability rate (i.e., 25.7% higher than the average) onto other models. Similarly, adversarial examples crafted on other models have the lowest rate of transferability (i.e., 56.7% lower than the average) onto transformers. Our work emphasizes the importance of studying transformer architectures for attacking and defending models in security domains, and suggests using them as the primary architecture in transfer attack settings. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 400,695
2011.06673 | Symbolically Solving Partial Differential Equations using Deep Learning | We describe a neural-based method for generating exact or approximate solutions to differential equations in the form of mathematical expressions. Unlike other neural methods, our system returns symbolic expressions that can be interpreted directly. Our method uses a neural architecture for learning mathematical expressions to optimize a customizable objective, and is scalable, compact, and easily adaptable for a variety of tasks and configurations. The system has been shown to effectively find exact or approximate symbolic solutions to various differential equations with applications in natural sciences. In this work, we highlight how our method applies to partial differential equations over multiple variables and more complex boundary and initial value conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 206,301 |
1808.03240 | User-Guided Deep Anime Line Art Colorization with Conditional Adversarial Networks | Scribble-color-based line art colorization is a challenging computer vision problem since neither greyscale values nor semantic information is presented in line arts, and the lack of authentic illustration-line art training pairs also increases the difficulty of model generalization. Recently, several Generative Adversarial Nets (GANs) based methods have achieved great success. They can generate colorized illustrations conditioned on given line art and color hints. However, these methods fail to capture the authentic illustration distributions and are hence perceptually unsatisfying in the sense that they often lack accurate shading. To address these challenges, we propose a novel deep conditional adversarial architecture for scribble based anime line art colorization. Specifically, we integrate the conditional framework with WGAN-GP criteria as well as the perceptual loss to enable us to robustly train a deep network that makes the synthesized images more natural and real. We also introduce a local features network that is independent of synthetic data. With GANs conditioned on features from such network, we notably increase the generalization capability over "in the wild" line arts. Furthermore, we collect two datasets that provide high-quality colorful illustrations and authentic line arts for training and benchmarking. With the proposed model trained on our illustration dataset, we demonstrate that images synthesized by the presented approach are considerably more realistic and precise than alternative approaches. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,898
2406.07272 | Integrated Near Field Sensing and Communications Using Unitary Approximate Message Passing Based Matrix Factorization | Due to the utilization of large antenna arrays at base stations (BSs) and the operations of wireless communications in high frequency bands, mobile terminals often find themselves in the near-field of the array aperture. In this work, we address the signal processing challenges of integrated near-field localization and communication in uplink transmission of an integrated sensing and communication (ISAC) system, where the BS performs joint near-field localization and signal detection (JNFLSD). We show that JNFLSD can be formulated as a matrix factorization (MF) problem with proper structures imposed on the factor matrices. Then, leveraging the variational inference (VI) and unitary approximate message passing (UAMP), we develop a low complexity Bayesian approach to MF, called UAMP-MF, to handle a generic MF problem. We then apply the UAMP-MF algorithm to solve the JNFLSD problem, where the factor matrix structures are fully exploited. Extensive simulation results are provided to demonstrate the superior performance of the proposed method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 462,976
0804.2808 | Robust Precoder for Multiuser MISO Downlink with SINR Constraints | In this paper, we consider linear precoding with SINR constraints for the downlink of a multiuser MISO (multiple-input single-output) communication system in the presence of imperfect channel state information (CSI). The base station is equipped with multiple transmit antennas and each user terminal is equipped with a single receive antenna. We propose a robust design of linear precoder which transmits minimum power to provide the required SINR at the user terminals when the true channel state lies in a region of a given size around the channel state available at the transmitter. We show that this design problem can be formulated as a Second Order Cone Program (SOCP) which can be solved efficiently. We compare the performance of the proposed design with some of the robust designs reported in the literature. Simulation results show that the proposed robust design provides better performance with reduced complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,592 |
2405.12444 | Reach and hold flexibility characterization and trade-off analysis for aggregations of thermostatically controlled loads | Thermostatically controlled loads (TCLs) have the potential to be flexible and responsive loads to be used in demand response (DR) schemes. With increasing renewable penetration, DR is playing an increasingly important role in enhancing power grid reliability. The aggregate demand of a population of TCLs can be modulated by changing their temperature setpoint. When and/or what proportion of the population sees the setpoint change determines the change in aggregate demand. However, since the TCL population is finite, not all changes in aggregate demand can be maintained for arbitrarily long periods of time. In this paper, the dynamic behavior of a TCL fleet is modeled and used to characterize the set of possible changes in aggregate demand that can be reached and the corresponding time for which the demand change can be held, for a given change in setpoint. This set is referred to, in this paper, as the reach and hold set of a TCL fleet. Furthermore, the effect of the setpoint change and ambient temperature on the reach and hold are analyzed. The characterized set is then validated through simulation using both the population TCL models and individual TCL micro-models. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 455,523
1809.06364 | Generalizing Across Multi-Objective Reward Functions in Deep Reinforcement Learning | Many reinforcement-learning researchers treat the reward function as a part of the environment, meaning that the agent can only know the reward of a state if it encounters that state in a trial run. However, we argue that this is an unnecessary limitation and instead, the reward function should be provided to the learning algorithm. The advantage is that the algorithm can then use the reward function to check the reward for states that the agent hasn't even encountered yet. In addition, the algorithm can simultaneously learn policies for multiple reward functions. For each state, the algorithm would calculate the reward using each of the reward functions and add the rewards to its experience replay dataset. The Hindsight Experience Replay algorithm developed by Andrychowicz et al. (2017) does just this, and learns to generalize across a distribution of sparse, goal-based rewards. We extend this algorithm to linearly-weighted, multi-objective rewards and learn a single policy that can generalize across all linear combinations of the multi-objective reward. Whereas other multi-objective algorithms teach the Q-function to generalize across the reward weights, our algorithm enables the policy to generalize, and can thus be used with continuous actions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 108,040
2310.18986 | Controllable Group Choreography using Contrastive Diffusion | Music-driven group choreography poses a considerable challenge but holds significant potential for a wide range of industrial applications. The ability to generate synchronized and visually appealing group dance motions that are aligned with music opens up opportunities in many fields such as entertainment, advertising, and virtual performances. However, most of the recent works are not able to generate high-fidelity long-term motions, or fail to enable controllable experience. In this work, we aim to address the demand for high-quality and customizable group dance generation by effectively governing the consistency and diversity of group choreographies. In particular, we utilize a diffusion-based generative approach to enable the synthesis of flexible number of dancers and long-term group dances, while ensuring coherence to the input music. Ultimately, we introduce a Group Contrastive Diffusion (GCD) strategy to enhance the connection between dancers and their group, presenting the ability to control the consistency or diversity level of the synthesized group animation via the classifier-guidance sampling technique. Through intensive experiments and evaluation, we demonstrate the effectiveness of our approach in producing visually captivating and consistent group dance motions. The experimental results show the capability of our method to achieve the desired levels of consistency and diversity, while maintaining the overall quality of the generated group choreography. The source code can be found at https://aioz-ai.github.io/GCD | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 403,798 |
2008.01572 | Model Reduction of Shallow CNN Model for Reliable Deployment of Information Extraction from Medical Reports | Shallow Convolutional Neural Network (CNN) is a time-tested tool for the information extraction from cancer pathology reports. Shallow CNN performs competitively on this task to other deep learning models including BERT, which holds the state-of-the-art for many NLP tasks. The main insight behind this eccentric phenomenon is that the information extraction from cancer pathology reports requires only a small number of domain-specific text segments to perform the task, thus making the most of the texts and contexts excessive for the task. Shallow CNN model is well-suited to identify these key short text segments from the labeled training set; however, the identified text segments remain obscure to humans. In this study, we fill this gap by developing a model reduction tool to make a reliable connection between CNN filters and relevant text segments by discarding the spurious connections. We reduce the complexity of shallow CNN representation by approximating it with a linear transformation of n-gram presence representation with a non-negativity and sparsity prior on the transformation weights to obtain an interpretable model. Our approach bridges the gap between the conventionally perceived trade-off boundary between accuracy on the one side and explainability on the other by model reduction. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 190,378
2405.20058 | Enhancing Plant Disease Detection: A Novel CNN-Based Approach with Tensor Subspace Learning and HOWSVD-MD | Machine learning has revolutionized the field of agricultural science, particularly in the early detection and management of plant diseases, which are crucial for maintaining crop health and productivity. Leveraging advanced algorithms and imaging technologies, researchers are now able to identify and classify plant diseases with unprecedented accuracy and speed. Effective management of tomato diseases is crucial for enhancing agricultural productivity. The development and application of tomato disease classification methods are central to this objective. This paper introduces a cutting-edge technique for the detection and classification of tomato leaf diseases, utilizing insights from the latest pre-trained Convolutional Neural Network (CNN) models. We propose a sophisticated approach within the domain of tensor subspace learning, known as Higher-Order Whitened Singular Value Decomposition (HOWSVD), designed to boost the discriminatory power of the system. Our approach to Tensor Subspace Learning is methodically executed in two phases, beginning with HOWSVD and culminating in Multilinear Discriminant Analysis (MDA). The efficacy of this innovative method was rigorously tested through comprehensive experiments on two distinct datasets, namely PlantVillage and the Taiwan dataset. The findings reveal that HOWSVD-MDA outperforms existing methods, underscoring its capability to markedly enhance the precision and dependability of diagnosing tomato leaf diseases. For instance, up to 98.36% and 89.39% accuracy scores have been achieved under PlantVillage and the Taiwan datasets, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,178
2310.03278 | Mitigating Pilot Contamination and Enabling IoT Scalability in Massive MIMO Systems | Massive MIMO is expected to play an important role in the development of 5G networks. This paper addresses the issue of pilot contamination and scalability in massive MIMO systems. The current practice of reusing orthogonal pilot sequences in adjacent cells leads to difficulty in differentiating incoming inter- and intra-cell pilot sequences. One possible solution is to increase the number of orthogonal pilot sequences, which results in dedicating more space of coherence block to pilot transmission than data transmission. This, in turn, also hinders the scalability of massive MIMO systems, particularly in accommodating a large number of IoT devices within a cell. To overcome these challenges, this paper devises an innovative pilot allocation scheme based on the data transfer patterns of IoT devices. The scheme assigns orthogonal pilot sequences to clusters of devices instead of individual devices, allowing multiple devices to utilize the same pilot for periodically transmitting data. Moreover, we formulate the pilot assignment problem as a graph coloring problem and use the max k-cut graph partitioning approach to overcome the pilot contamination in a multicell massive MIMO system. The proposed scheme significantly improves the spectral efficiency and enables the scalability of massive MIMO systems; for instance, by using ten orthogonal pilot sequences, we are able to accommodate 200 devices with only a 12.5% omission rate. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 397,212
1901.00707 | Feature reinforcement with word embedding and parsing information in neural TTS | In this paper, we propose a feature reinforcement method under the sequence-to-sequence neural text-to-speech (TTS) synthesis framework. The proposed method utilizes the multiple input encoder to take three levels of text information, i.e., phoneme sequence, pre-trained word embedding, and grammatical structure of sentences from parser as the input feature for the neural TTS system. The added word and sentence level information can be viewed as the feature based pre-training strategy, which clearly enhances the model generalization ability. The proposed method not only improves the system robustness significantly but also improves the synthesized speech to near recording quality in our experiments for out-of-domain text. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 117,834
1907.05550 | Faster Neural Network Training with Data Echoing | In the twilight of Moore's law, GPUs and other specialized hardware accelerators have dramatically sped up neural network training. However, earlier stages of the training pipeline, such as disk I/O and data preprocessing, do not run on accelerators. As accelerators continue to improve, these earlier stages will increasingly become the bottleneck. In this paper, we introduce "data echoing," which reduces the total computation used by earlier pipeline stages and speeds up training whenever computation upstream from accelerators dominates the training time. Data echoing reuses (or "echoes") intermediate outputs from earlier pipeline stages in order to reclaim idle capacity. We investigate the behavior of different data echoing algorithms on various workloads, for various amounts of echoing, and for various batch sizes. We find that in all settings, at least one data echoing algorithm can match the baseline's predictive performance using less upstream computation. We measured a factor of 3.25 decrease in wall-clock time for ResNet-50 on ImageNet when reading training data over a network. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 138,390 |
1105.5939 | Airborne TDMA for High Throughput and Fast Weather Conditions Notification | As air traffic grows significantly, aircraft accidents increase. Many aviation accidents could be prevented if the precise aircraft positions and weather conditions on the aircraft's route were known. Existing studies propose determining the precise aircraft positions via a VHF channel with an air-to-air radio relay system that is based on mobile ad-hoc networks. However, due to the long propagation delay, the existing TDMA MAC schemes underutilize the networks. The existing TDMA MAC sends data and receives ACK in one time slot, which requires two guard times in one time slot. Since aeronautical communications spans a significant distance, the guard time occupies a significantly large portion of the slot. To solve this problem, we propose a piggybacking ACK mechanism. Our proposed MAC has one guard time in one time slot, which enables the transmission of more data. Using this additional data, we can send weather conditions that pertain to the aircraft's current position. Our analysis shows that this proposed MAC performs better than the existing MAC, since it offers better throughput and network utilization. In addition, our weather condition notification model achieves a much lower transmission delay than a HF (high frequency) voice communication. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,576
2006.13135 | Estimation of Causal Effects in the Presence of Unobserved Confounding in the Alzheimer's Continuum | Studying the relationship between neuroanatomy and cognitive decline due to Alzheimer's has been a major research focus in the last decade. However, to infer cause-effect relationships rather than simple associations from observational data, we need to (i) express the causal relationships leading to cognitive decline in a graphical model, and (ii) ensure the causal effect of interest is identifiable from the collected data. We derive a causal graph from the current clinical knowledge on cause and effect in the Alzheimer's disease continuum, and show that identifiability of the causal effect requires all confounders to be known and measured. However, in complex neuroimaging studies, we neither know all potential confounders nor do we have data on them. To alleviate this requirement, we leverage the dependencies among multiple causes by deriving a substitute confounder via a probabilistic latent factor model. In our theoretical analysis, we prove that using the substitute confounder enables identifiability of the causal effect of neuroanatomy on cognition. We quantitatively evaluate the effectiveness of our approach on semi-synthetic data, where we know the true causal effects, and illustrate its use on real data on the Alzheimer's disease continuum, where it reveals important causes that otherwise would have been missed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 183,804
1805.05999 | Agent Based Rumor Spreading in a scale-free network | In the last years, the study of rumor spreading on social networks produced a lot of interest among the scientific community, especially due to the role of social networks in the last political events. The goal of this work is to reproduce real-like diffusions of information and misinformation in a scale-free network using a multi-agent-based model. The data concerning the virtual spreading are easily obtainable; in particular, the diffusion of information during the announcement of the discovery of the Higgs Boson on Twitter was recorded and investigated in detail. We made some assumptions on the micro behavior of our agents and registered the effects in a statistical analysis replicating the real data diffusion. Then, we studied a hypothetical response to a misinformation diffusion by adding debunking agents and trying to model a critic response from the agents using real data from a hoax regarding the Occupy Wall Street movement. After tuning our model to reproduce these results, we measured some network properties and proved the emergence of substantially separated structures like echo chambers, independently from the network size scale, i.e. with one hundred, one thousand and ten thousand agents. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 97,514
0809.1043 | On Unique Decodability | In this paper we propose a revisitation of the topic of unique decodability and of some fundamental theorems of lossless coding. It is widely believed that, for any discrete source X, every "uniquely decodable" block code satisfies E[l(X_1 X_2 ... X_n)]>= H(X_1,X_2,...,X_n), where X_1, X_2,...,X_n are the first n symbols of the source, E[l(X_1 X_2 ... X_n)] is the expected length of the code for those symbols and H(X_1,X_2,...,X_n) is their joint entropy. We show that, for certain sources with memory, the above inequality only holds when a limiting definition of "uniquely decodable code" is considered. In particular, the above inequality is usually assumed to hold for any "practical code" due to a debatable application of McMillan's theorem to sources with memory. We thus propose a clarification of the topic, also providing an extended version of McMillan's theorem to be used for Markovian sources. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 2,298 |
2408.00253 | Saving Money for Analytical Workloads in the Cloud | As users migrate their analytical workloads to cloud databases, it is becoming just as important to reduce monetary costs as it is to optimize query runtime. In the cloud, a query is billed based on either its compute time or the amount of data it processes. We observe that analytical queries are either compute- or IO-bound and each query type executes cheaper in a different pricing model. We exploit this opportunity and propose methods to build cheaper execution plans across pricing models that complete within user-defined runtime constraints. We implement these methods and produce execution plans spanning multiple pricing models that reduce the monetary cost for workloads by as much as 56%. We reduce individual query costs by as much as 90%. The prices chosen by cloud vendors for cloud services also impact savings opportunities. To study this effect, we simulate our proposed methods with different cloud prices and observe that multi-cloud savings are robust to changes in cloud vendor prices. These results indicate the massive opportunity to save money by executing workloads across multiple pricing models. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 477,759 |
1710.01691 | Context Embedding Networks | Low dimensional embeddings that capture the main variations of interest in collections of data are important for many applications. One way to construct these embeddings is to acquire estimates of similarity from the crowd. However, similarity is a multi-dimensional concept that varies from individual to individual. Existing models for learning embeddings from the crowd typically make simplifying assumptions such as all individuals estimate similarity using the same criteria, the list of criteria is known in advance, or that the crowd workers are not influenced by the data that they see. To overcome these limitations we introduce Context Embedding Networks (CENs). In addition to learning interpretable embeddings from images, CENs also model worker biases for different attributes along with the visual context i.e. the visual attributes highlighted by a set of images. Experiments on two noisy crowd annotated datasets show that modeling both worker bias and visual context results in more interpretable embeddings compared to existing approaches. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 82,043 |
1903.11202 | Kernel based regression with robust loss function via iteratively reweighted least squares | Least squares kernel based methods have been widely used in regression problems due to the simple implementation and good generalization performance. Among them, least squares support vector regression (LS-SVR) and extreme learning machine (ELM) are popular techniques. However, the noise sensitivity is a major bottleneck. To address this issue, a generalized loss function, called $\ell_s$-loss, is proposed in this paper. With the support of novel loss function, two kernel based regressors are constructed by replacing the $\ell_2$-loss in LS-SVR and ELM with the proposed $\ell_s$-loss for better noise robustness. Important properties of $\ell_s$-loss, including robustness, asymmetry and asymptotic approximation behaviors, are verified theoretically. Moreover, iteratively reweighted least squares (IRLS) is utilized to optimize and interpret the proposed methods from a weighted viewpoint. The convergence of the proposals is proved, and detailed analyses of robustness are given. Experiments on both artificial and benchmark datasets confirm the validity of the proposed methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 125,447
2405.14312 | Improving Gloss-free Sign Language Translation by Reducing
Representation Density | Gloss-free sign language translation (SLT) aims to develop well-performing SLT systems with no requirement for the costly gloss annotations, but currently still lags behind gloss-based approaches significantly. In this paper, we identify a representation density problem that could be a bottleneck in restricting the performance of gloss-free SLT. Specifically, the representation density problem describes that the visual representations of semantically distinct sign gestures tend to be closely packed together in feature space, which makes gloss-free methods struggle with distinguishing different sign gestures and suffer from a sharp performance drop. To address the representation density problem, we introduce a simple but effective contrastive learning strategy, namely SignCL, which encourages gloss-free models to learn more discriminative feature representation in a self-supervised manner. Our experiments demonstrate that the proposed SignCL can significantly reduce the representation density and improve performance across various translation frameworks. Specifically, SignCL achieves a significant improvement in BLEU score for the Sign Language Transformer and GFSLT-VLP on the CSL-Daily dataset by 39% and 46%, respectively, without any increase of model parameters. Compared to Sign2GPT, a state-of-the-art method based on large-scale pre-trained vision and language models, SignCL achieves better performance with only 35% of its parameters. Implementation and Checkpoints are available at https://github.com/JinhuiYE/SignCL. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | true | 456,365 |
2208.04322 | Learning-Based Client Selection for Federated Learning Services Over
Wireless Networks with Constrained Monetary Budgets | We investigate a data quality-aware dynamic client selection problem for multiple federated learning (FL) services in a wireless network, where each client offers dynamic datasets for the simultaneous training of multiple FL services, and each FL service demander has to pay for the clients under constrained monetary budgets. The problem is formalized as a non-cooperative Markov game over the training rounds. A multi-agent hybrid deep reinforcement learning-based algorithm is proposed to optimize the joint client selection and payment actions, while avoiding action conflicts. Simulation results indicate that our proposed algorithm can significantly improve training performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,075 |
2502.12893 | H-CoT: Hijacking the Chain-of-Thought Safety Reasoning Mechanism to
Jailbreak Large Reasoning Models, Including OpenAI o1/o3, DeepSeek-R1, and
Gemini 2.0 Flash Thinking | Large Reasoning Models (LRMs) have recently extended their powerful reasoning capabilities to safety checks-using chain-of-thought reasoning to decide whether a request should be answered. While this new approach offers a promising route for balancing model utility and safety, its robustness remains underexplored. To address this gap, we introduce Malicious-Educator, a benchmark that disguises extremely dangerous or malicious requests beneath seemingly legitimate educational prompts. Our experiments reveal severe security flaws in popular commercial-grade LRMs, including OpenAI o1/o3, DeepSeek-R1, and Gemini 2.0 Flash Thinking. For instance, although OpenAI's o1 model initially maintains a high refusal rate of about 98%, subsequent model updates significantly compromise its safety; and attackers can easily extract criminal strategies from DeepSeek-R1 and Gemini 2.0 Flash Thinking without any additional tricks. To further highlight these vulnerabilities, we propose Hijacking Chain-of-Thought (H-CoT), a universal and transferable attack method that leverages the model's own displayed intermediate reasoning to jailbreak its safety reasoning mechanism. Under H-CoT, refusal rates sharply decline-dropping from 98% to below 2%-and, in some instances, even transform initially cautious tones into ones that are willing to provide harmful content. We hope these findings underscore the urgent need for more robust safety mechanisms to preserve the benefits of advanced reasoning capabilities without compromising ethical standards. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 535,093 |
1810.05369 | Regularization Matters: Generalization and Optimization of Neural Nets
v.s. their Induced Kernel | Recent works have shown that on sufficiently over-parametrized neural nets, gradient descent with relatively large initialization optimizes a prediction function in the RKHS of the Neural Tangent Kernel (NTK). This analysis leads to global convergence results but does not work when there is a standard $\ell_2$ regularizer, which is useful to have in practice. We show that sample efficiency can indeed depend on the presence of the regularizer: we construct a simple distribution in d dimensions which the optimal regularized neural net learns with $O(d)$ samples but the NTK requires $\Omega(d^2)$ samples to learn. To prove this, we establish two analysis tools: i) for multi-layer feedforward ReLU nets, we show that the global minimizer of a weakly-regularized cross-entropy loss is the max normalized margin solution among all neural nets, which generalizes well; ii) we develop a new technique for proving lower bounds for kernel methods, which relies on showing that the kernel cannot focus on informative features. Motivated by our generalization results, we study whether the regularized global optimum is attainable. We prove that for infinite-width two-layer nets, noisy gradient descent optimizes the regularized neural net loss to a global minimum in polynomial iterations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 110,214 |
2206.01826 | The Gamma Generalized Normal Distribution: A Descriptor of SAR Imagery | We propose a new four-parameter distribution for modeling synthetic aperture radar (SAR) imagery named the gamma generalized normal (GGN) by combining the gamma and generalized normal distributions. A mathematical characterization of the new distribution is provided by identifying the limit behavior and by calculating the density and moment expansions. The GGN model performance is evaluated on both synthetic and actual data and, for that, maximum likelihood estimation and random number generation are discussed. The proposed distribution is compared with the beta generalized normal distribution (BGN), which has already shown to appropriately represent SAR imagery. The performance of these two distributions are measured by means of statistics which provide evidence that the GGN can outperform the BGN distribution in some contexts. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 300,621 |
1611.06455 | Time Series Classification from Scratch with Deep Neural Networks: A
Strong Baseline | We propose a simple but strong baseline for time series classification from scratch with deep neural networks. Our proposed baseline models are pure end-to-end without any heavy preprocessing on the raw data or feature crafting. The proposed Fully Convolutional Network (FCN) achieves performance competitive with other state-of-the-art approaches, and our exploration of very deep neural networks with the ResNet structure is also competitive. The global average pooling in our convolutional model enables the exploitation of the Class Activation Map (CAM) to find out the contributing region in the raw data for the specific labels. Our models provide a simple choice for real-world applications and a good starting point for future research. An overall analysis is provided to discuss the generalization capability of our models, learned features, network structures and the classification semantics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 64,192
2403.06107 | Textureless Object Recognition: An Edge-based Approach | Textureless object recognition has become a significant task in Computer Vision with the advent of Robotics and its applications in the manufacturing sector. It has been challenging to obtain good accuracy in real time because of its lack of discriminative features and reflectance properties which makes the techniques for textured object recognition insufficient for textureless objects. A lot of work has been done in the last 20 years, especially in the recent 5 years after the T-Less and other textureless datasets were introduced. In this project, by applying image processing techniques we created a robust augmented dataset from an initial imbalanced smaller dataset. We extracted edge features, feature combinations and RGB images enhanced with feature/feature combinations to create 15 datasets, each with a size of ~340,000. We then trained four classifiers on these 15 datasets to arrive at a conclusion as to which dataset performs the best overall and whether edge features are important for textureless objects. Based on our experiments and analysis, RGB images enhanced with a combination of 3 edge features performed the best compared to all others. Model performance on the dataset with HED edges was comparatively better than with other edge detectors like Canny or Prewitt. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 436,306
1212.3998 | Online Learning for Ground Trajectory Prediction | This paper presents a model based on a hybrid system to numerically simulate the climbing phase of an aircraft. This model is then used within a trajectory prediction tool. Finally, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization algorithm is used to tune five selected parameters, and thus improve the accuracy of the model. Incorporated within a trajectory prediction tool, this model can be used to derive the order of magnitude of the prediction error over time, and thus the domain of validity of the trajectory prediction. A first validation experiment of the proposed model is based on the errors over time for a one-time trajectory prediction at the takeoff of the flight with respect to the default values of the theoretical BADA model. This experiment, assuming complete information, also shows the limit of the model. A second experiment presents an on-line trajectory prediction, in which the prediction is continuously updated based on the current aircraft position. This approach raises several issues, for which improvements of the basic model are proposed, and the resulting trajectory prediction tool shows statistically significantly more accurate results than those of the default model. | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | 20,449
2404.04302 | CBR-RAG: Case-Based Reasoning for Retrieval Augmented Generation in LLMs
for Legal Question Answering | Retrieval-Augmented Generation (RAG) enhances Large Language Model (LLM) output by providing prior knowledge as context to input. This is beneficial for knowledge-intensive and expert-reliant tasks, including legal question-answering, which require evidence to validate generated text outputs. We highlight that Case-Based Reasoning (CBR) presents key opportunities to structure retrieval as part of the RAG process in an LLM. We introduce CBR-RAG, where the CBR cycle's initial retrieval stage, its indexing vocabulary, and similarity knowledge containers are used to enhance LLM queries with contextually relevant cases. This integration augments the original LLM query, providing a richer prompt. We present an evaluation of CBR-RAG, and examine different representations (i.e. general and domain-specific embeddings) and methods of comparison (i.e. inter, intra and hybrid similarity) on the task of legal question-answering. Our results indicate that the context provided by CBR's case reuse enforces similarity between relevant components of the questions and the evidence base, leading to significant improvements in the quality of generated answers. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 444,594
2005.10572 | Computationally efficient stochastic MPC: a probabilistic scaling
approach | In recent years, the increasing interest in Stochastic model predictive control (SMPC) schemes has highlighted the limitation arising from their inherent computational demand, which has restricted their applicability to slow-dynamics and high-performing systems. To reduce the computational burden, in this paper we extend the probabilistic scaling approach to obtain low-complexity inner approximations of chance-constrained sets. This approach provides probabilistic guarantees at a lower computational cost than other schemes for which the sample complexity depends on the design space dimension. To design candidate simple approximating sets, which approximate the shape of the probabilistic set, we introduce two possibilities: i) fixed-complexity polytopes, and ii) $\ell_p$-norm based sets. Once the candidate approximating set is obtained, it is scaled around its center so as to enforce the expected probabilistic guarantees. The resulting scaled set is then exploited to enforce constraints in the classical SMPC framework. The computational gain obtained with the proposed approach with respect to the scenario approach is demonstrated via simulations, where the objective is the control of a fixed-wing UAV performing a monitoring mission over a sloped vineyard. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 178,221
2305.16353 | Betray Oneself: A Novel Audio DeepFake Detection Model via
Mono-to-Stereo Conversion | Audio Deepfake Detection (ADD) aims to detect the fake audio generated by text-to-speech (TTS), voice conversion (VC) and replay, etc., which is an emerging topic. Traditionally we take the mono signal as input and focus on robust feature extraction and effective classifier design. However, the dual-channel stereo information in the audio signal also includes important cues for deepfake detection, which has not been studied in prior work. In this paper, we propose a novel ADD model, termed M2S-ADD, that attempts to discover audio authenticity cues during the mono-to-stereo conversion process. We first project the mono signal to stereo using a pretrained stereo synthesizer, then employ a dual-branch neural architecture to process the left and right channel signals, respectively. In this way, we effectively reveal the artifacts in the fake audio, thus improving ADD performance. The experiments on the ASVspoof2019 database show that M2S-ADD outperforms all baselines that input mono. We release the source code at \url{https://github.com/AI-S2-Lab/M2S-ADD}. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 368,045
2210.10993 | A Magnetic Framelet-Based Convolutional Neural Network for Directed
Graphs | Spectral Graph Convolutional Networks (spectral GCNNs), a powerful tool for analyzing and processing graph data, typically apply frequency filtering via Fourier transform to obtain representations with selective information. Although research shows that spectral GCNNs can be enhanced by framelet-based filtering, the vast majority of such research only considers undirected graphs. In this paper, we introduce Framelet-MagNet, a magnetic framelet-based spectral GCNN for directed graphs (digraphs). The model applies the framelet transform to digraph signals to form a more sophisticated representation for filtering. Digraph framelets are constructed with the complex-valued magnetic Laplacian, simultaneously leading to signal processing in both real and complex domains. We empirically validate the predictive power of Framelet-MagNet over a range of state-of-the-art models in node classification, link prediction, and denoising. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 325,136
1712.04748 | Ergodic Capacity Analysis of Wireless Powered AF Relaying Systems over
$\alpha$-$\mu$ Fading Channels | In this paper, we consider a two-hop amplify-and-forward (AF) relaying system, where the relay node is energy-constrained and harvests energy from the source node. In the literature, there are three main energy-harvesting (EH) protocols, namely, time-switching relaying (TSR), power-splitting (PS) relaying (PSR) and ideal relaying receiver (IRR). Unlike the existing studies, in this paper, we consider $\alpha$-$\mu$ fading channels. In this respect, we derive accurate unified analytical expressions for the ergodic capacity for the aforementioned protocols over independent but not identically distributed (i.n.i.d) $\alpha$-$\mu$ fading channels. Three special cases of the $\alpha$-$\mu$ model, namely, Rayleigh, Nakagami-m and Weibull fading channels, were investigated. Our analysis is verified through numerical and simulation results. It is shown that finding the optimal value of the PS factor for the PSR protocol and the EH time fraction for the TSR protocol is a crucial step in achieving the best network performance. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 86,651
1811.00323 | Taylor-based Optimized Recursive Extended Exponential Smoothed Neural
Networks Forecasting Method | A newly introduced method called Taylor-based Optimized Recursive Extended Exponential Smoothed Neural Networks Forecasting method is applied and extended in this study to forecast numerical values. Unlike traditional forecasting techniques which forecast only future values, our proposed method provides a new extension to correct the predicted values, which is done by forecasting the estimated error. Experimental results demonstrated that the proposed method achieves high accuracy on both training and testing data and outperforms the state-of-the-art RNN models on Mackey-Glass, NARMA, Lorenz and Henon map datasets. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 112,073
1003.4836 | Automating Fine Concurrency Control in Object-Oriented Databases | Several propositions were done to provide adapted concurrency control to object-oriented databases. However, most of these proposals miss the fact that considering solely read and write access modes on instances may lead to less parallelism than in relational databases! This paper copes with that issue, and the advantages are numerous: (1) commutativity of methods is determined a priori and automatically by the compiler, without measurable overhead, (2) run-time checking of commutativity is as efficient as for compatibility, (3) inverse operations need not be specified for recovery, (4) this scheme does not preclude more sophisticated approaches, and, last but not least, (5) relational and object-oriented concurrency control schemes with read and write access modes are subsumed under this proposition. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 6,002
2205.13525 | On the Inconsistency of Kernel Ridgeless Regression in Fixed Dimensions | ``Benign overfitting'', the ability of certain algorithms to interpolate noisy training data and yet perform well out-of-sample, has been a topic of considerable recent interest. We show, using a fixed design setup, that an important class of predictors, kernel machines with translation-invariant kernels, does not exhibit benign overfitting in fixed dimensions. In particular, the estimated predictor does not converge to the ground truth with increasing sample size, for any non-zero regression function and any (even adaptive) bandwidth selection. To prove these results, we give exact expressions for the generalization error, and its decomposition in terms of an approximation error and an estimation error that elicits a trade-off based on the selection of the kernel bandwidth. Our results apply to commonly used translation-invariant kernels such as Gaussian, Laplace, and Cauchy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 298,966 |
1603.08514 | Deriving Application Level Relationships by Analysing the Run Time
Behaviour of Simple and Complex SQL Queries | This paper describes a unique approach to perform application behavioral analysis for identifying how tables might be related to each other. The analysis techniques are based on the properties of primary and foreign keys and also the data present in their respective columns. We have also implemented the idea using Java and presented experimental results in the Demo Section. This paper introduces a unique approach to predict the possible application level relationships in databases with the help of the application relationship analysis of simple and complex SQL queries. Complex SQL queries are those which contain multiple constraints at different levels of the database. In the process of deriving relations, we first parse the SQL statements and then analyse the parsed information to extract related information. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 53,790
2303.06868 | Deep Learning-based Eye-Tracking Analysis for Diagnosis of Alzheimer's
Disease Using 3D Comprehensive Visual Stimuli | Alzheimer's Disease (AD) causes a continuous decline in memory, thinking, and judgment. Traditional diagnoses are usually based on clinical experience, which is limited by some realistic factors. In this paper, we focus on exploiting deep learning techniques to diagnose AD based on eye-tracking behaviors. Visual attention, as typical eye-tracking behavior, is of great clinical value to detect cognitive abnormalities in AD patients. To better analyze the differences in visual attention between AD patients and normals, we first conduct a 3D comprehensive visual task on a non-invasive eye-tracking system to collect visual attention heatmaps. We then propose a multi-layered comparison convolution neural network (MC-CNN) to distinguish the visual attention differences between AD patients and normals. In MC-CNN, the multi-layered representations of heatmaps are obtained by hierarchical convolution to better encode eye-movement behaviors, which are further integrated into a distance vector to benefit the comprehensive visual task. Extensive experimental results on the collected dataset demonstrate that MC-CNN achieves consistent validity in classifying AD patients and normals with eye-tracking data. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 351,028 |
2404.07557 | Towards Secure and Reliable Heterogeneous Real-time Telemetry
Communication in Autonomous UAV Swarms | In the era of cutting-edge autonomous systems, Unmanned Aerial Vehicles (UAVs) are becoming an essential part of the solutions for numerous complex challenges. This paper evaluates UAV peer-to-peer telemetry communication, highlighting its security vulnerabilities and explores a transition to a heterogeneous multi-hop mesh all-to-all communication architecture to increase inter-swarm connectivity and reliability. Additionally, we suggest a symmetric key agreement and data encryption mechanism implementation for inter-swarm communication, to ensure data integrity and confidentiality without compromising performance. | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | 445,886
2408.11172 | SubgoalXL: Subgoal-based Expert Learning for Theorem Proving | Formal theorem proving, a field at the intersection of mathematics and computer science, has seen renewed interest with advancements in large language models (LLMs). This paper introduces SubgoalXL, a novel approach that synergizes subgoal-based proofs with expert learning to enhance LLMs' capabilities in formal theorem proving within the Isabelle environment. SubgoalXL addresses two critical challenges: the scarcity of specialized mathematics and theorem-proving data, and the need for improved multi-step reasoning abilities in LLMs. By optimizing data efficiency and employing subgoal-level supervision, SubgoalXL extracts richer information from limited human-generated proofs. The framework integrates subgoal-oriented proof strategies with an expert learning system, iteratively refining formal statement, proof, and subgoal generators. Leveraging the Isabelle environment's advantages in subgoal-based proofs, SubgoalXL achieves a new state-of-the-art performance of 56.1\% in Isabelle on the standard miniF2F dataset, marking an absolute improvement of 4.9\%. Notably, SubgoalXL successfully solves 41 AMC12, 9 AIME, and 3 IMO problems from miniF2F. These results underscore the effectiveness of maximizing limited data utility and employing targeted guidance for complex reasoning in formal theorem proving, contributing to the ongoing advancement of AI reasoning capabilities. The implementation is available at \url{https://github.com/zhaoxlpku/SubgoalXL}. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | true | 482,163 |
2206.00924 | FACM: Intermediate Layer Still Retain Effective Features against
Adversarial Examples | In strong adversarial attacks against deep neural networks (DNN), the generated adversarial example will mislead the DNN-implemented classifier by destroying the output features of the last layer. To enhance the robustness of the classifier, in our paper, a \textbf{F}eature \textbf{A}nalysis and \textbf{C}onditional \textbf{M}atching prediction distribution (FACM) model is proposed to utilize the features of intermediate layers to correct the classification. Specifically, we first prove that the intermediate layers of the classifier can still retain effective features for the original category, which is defined as the correction property in our paper. According to this, we propose the FACM model consisting of the \textbf{F}eature \textbf{A}nalysis (FA) correction module, the \textbf{C}onditional \textbf{M}atching \textbf{P}rediction \textbf{D}istribution (CMPD) correction module and the decision module. The FA correction module consists of fully connected layers that take the output of the intermediate layers as input to correct the classification of the classifier. The CMPD correction module is a conditional auto-encoder, which can not only use the output of intermediate layers as the condition to accelerate convergence but also mitigate the negative effect of adversarial example training with the Kullback-Leibler loss to match the prediction distribution. Through the empirically verified diversity property, the correction modules can be implemented synergistically to reduce the adversarial subspace. Hence, the decision module is proposed to integrate the correction modules to enhance the DNN classifier's robustness. Notably, our model can be obtained by fine-tuning and can be combined with other model-specific defenses. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 300,305
2210.05953 | Classification by estimating the cumulative distribution function for
small data | In this paper, we study the classification problem by estimating the conditional probability function of the given data. Different from the traditional expected risk estimation theory on empirical data, we calculate the probability via a Fredholm equation, which leads to estimating the distribution of the data. Based on the Fredholm equation, a new expected risk estimation theory by estimating the cumulative distribution function is presented. The main characteristic of the new expected risk estimation is to measure the risk on the distribution of the input space. The corresponding empirical risk estimation is also presented, and an $\varepsilon$-insensitive $L_{1}$ cumulative support vector machine ($\varepsilon$-$L_{1}VSVM$) is proposed by introducing an insensitive loss. It is worth mentioning that the classification models and the classification evaluation indicators based on the new mechanism are different from the traditional ones. Experimental results show the effectiveness of the proposed $\varepsilon$-$L_{1}VSVM$ and the corresponding cumulative distribution function indicator on validity and interpretability of small data classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,075
1806.01526 | Leolani: a reference machine with a theory of mind for social
communication | Our state of mind is based on experiences and what other people tell us. This may result in conflicting information, uncertainty, and alternative facts. We present a robot that models relativity of knowledge and perception within social interaction following principles of the theory of mind. We utilized vision and speech capabilities on a Pepper robot to build an interaction model that stores the interpretations of perceptions and conversations in combination with provenance on its sources. The robot learns directly from what people tell it, possibly in relation to its perception. We demonstrate how the robot's communication is driven by hunger to acquire more knowledge from and on people and objects, to resolve uncertainties and conflicts, and to share awareness of the perceived environment. Likewise, the robot can make reference to the world and its knowledge about the world and the encounters with people that yielded this knowledge. | true | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 99,575
2010.07754 | EnCoD: Distinguishing Compressed and Encrypted File Fragments | Reliable identification of encrypted file fragments is a requirement for several security applications, including ransomware detection, digital forensics, and traffic analysis. A popular approach consists of estimating high entropy as a proxy for randomness. However, many modern content types (e.g. office documents, media files, etc.) are highly compressed for storage and transmission efficiency. Compression algorithms also output high-entropy data, thus reducing the accuracy of entropy-based encryption detectors. Over the years, a variety of approaches have been proposed to distinguish encrypted file fragments from high-entropy compressed fragments. However, these approaches are typically only evaluated over a few, select data types and fragment sizes, which makes a fair assessment of their practical applicability impossible. This paper aims to close this gap by comparing existing statistical tests on a large, standardized dataset. Our results show that current approaches cannot reliably tell apart encryption and compression, even for large fragment sizes. To address this issue, we design EnCoD, a learning-based classifier which can reliably distinguish compressed and encrypted data, starting with fragments as small as 512 bytes. We evaluate EnCoD against current approaches over a large dataset of different data types, showing that it outperforms current state-of-the-art for most considered fragment sizes and data types. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 200,928 |
1911.00313 | Deep Bidirectional Transformers for Relation Extraction without
Supervision | We present a novel framework to deal with relation extraction tasks in cases where there is complete lack of supervision, either in the form of gold annotations, or relations from a knowledge base. Our approach leverages syntactic parsing and pre-trained word embeddings to extract few but precise relations, which are then used to annotate a larger corpus, in a manner identical to distant supervision. The resulting data set is employed to fine-tune a pre-trained BERT model in order to perform relation extraction. Empirical evaluation on four data sets from the biomedical domain shows that our method significantly outperforms two simple baselines for unsupervised relation extraction and, even if not using any supervision at all, achieves slightly worse results than the state-of-the-art in three out of four data sets. Importantly, we show that it is possible to successfully fine-tune a large pre-trained language model with noisy data, as opposed to previous works that rely on gold data for fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,787
1807.10511 | Global and local evaluation of link prediction tasks with neural
embeddings | We focus our attention on the link prediction problem for knowledge graphs, which is treated herein as a binary classification task on neural embeddings of the entities. By comparing, combining and extending different methodologies for link prediction on graph-based data coming from different domains, we formalize a unified methodology for the quality evaluation benchmark of neural embeddings for knowledge graphs. This benchmark is then used to empirically investigate the potential of training neural embeddings globally for the entire graph, as opposed to the usual way of training embeddings locally for a specific relation. This new way of testing the quality of the embeddings evaluates the performance of binary classifiers for scalable link prediction with limited data. Our evaluation pipeline is made open source, and with this we aim to draw more attention of the community towards an important issue of transparency and reproducibility of the neural embeddings evaluations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 103,950 |
1911.05490 | The Potential Gains of Macrodiversity in mmWave Cellular Networks with
Correlated Blocking | At millimeter wave (mmWave) frequencies, signals are prone to blocking by objects in the environment, which causes paths to go from line-of-sight (LOS) to non-LOS (NLOS). We consider macrodiversity as a strategy to improve the performance of mmWave cellular systems, where the user attempts to connect with two or more base stations. An accurate analysis of macrodiversity must account for the possibility of correlated blocking, which occurs when a single blockage simultaneously blocks the paths to two base stations. In this paper, we analyze the macrodiversity gain in the presence of correlated random blocking and interference. To do so, we develop a framework to determine distributions for the LOS probability, SNR, and SINR by taking into account correlated blocking. We consider a cellular uplink with both diversity combining and selection combining schemes. We also study the impact of blockage size and blockage density. We show that blocking can be both a blessing and a curse. On the one hand, the signal from the source transmitter could be blocked, and on the other hand, interfering signals tend to also be blocked, which leads to a completely different effect on macrodiversity gains. We also show that the assumption of independent blocking can lead to an incorrect evaluation of macrodiversity gain, as the correlation tends to decrease macrodiversity gain. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 153,280
2104.01704 | Safe Control Synthesis via Input Constrained Control Barrier Functions | This paper introduces the notion of an Input Constrained Control Barrier Function (ICCBF), as a method to synthesize safety-critical controllers for non-linear control affine systems with input constraints. The method identifies a subset of the safe set of states, and constructs a controller to render the subset forward invariant. The feedback controller is represented as the solution to a quadratic program, which can be solved efficiently for real-time implementation. Furthermore, we show that ICCBFs are a generalization of Higher Order Control Barrier Functions, and thus are applicable to systems of non-uniform relative degree. Simulation results are presented for the adaptive cruise control problem, and a spacecraft rendezvous problem. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 228,440 |
2309.10269 | Using an Uncrewed Surface Vehicle to Create a Volumetric Model of
Non-Navigable Rivers and Other Shallow Bodies of Water | Non-navigable rivers and retention ponds play important roles in buffering communities from flooding, yet emergency planners often have no data as to the volume of water that they can carry before flooding the surrounding area. This paper describes a practical approach for using an uncrewed marine surface vehicle (USV) to collect and merge bathymetric maps with digital surface maps of the banks of shallow bodies of water into a unified volumetric model. The below-waterline mesh is developed by applying the Poisson surface reconstruction algorithm to the sparse sonar depth readings of the underwater surface. Dense above-waterline meshes of the banks are created using commercial structure from motion (SfM) packages. Merging is challenging for many reasons, the most significant being gaps in sensor coverage, i.e., the USV cannot collect sonar depth data or visually see sandy beaches leading to a bank, and thus the two meshes may not intersect. The approach is demonstrated on a Hydronalix EMILY USV with a Humminbird single beam echosounder and Teledyne FLIR camera at Lake ESTI at the Texas A&M Engineering Extension Service Disaster City complex. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 392,933
1512.04817 | Principled Evaluation of Differentially Private Algorithms using DPBench | Differential privacy has become the dominant standard in the research community for strong privacy protection. There has been a flood of research into query answering algorithms that meet this standard. Algorithms are becoming increasingly complex, and in particular, the performance of many emerging algorithms is data dependent, meaning the distribution of the noise added to query answers may change depending on the input data. Theoretical analysis typically only considers the worst case, making empirical study of average case performance increasingly important. In this paper we propose a set of evaluation principles which we argue are essential for sound evaluation. Based on these principles we propose DPBench, a novel evaluation framework for standardized evaluation of privacy algorithms. We then apply our benchmark to evaluate algorithms for answering 1- and 2-dimensional range queries. The result is a thorough empirical study of 15 published algorithms on a total of 27 datasets that offers new insights into algorithm behavior, in particular the influence of dataset scale and shape, and a more complete characterization of the state of the art. Our methodology is able to resolve inconsistencies in prior empirical studies and place algorithm performance in context through comparison to simple baselines. Finally, we pose open research questions which we hope will guide future algorithm design. | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | 50,163
2001.02712 | Latent Factor Analysis of Gaussian Distributions under Graphical
Constraints | We explore the algebraic structure of the solution space of the convex optimization problem Constrained Minimum Trace Factor Analysis (CMTFA), when the population covariance matrix $\Sigma_x$ has an additional latent graphical constraint, namely, a latent star topology. In particular, we have shown that CMTFA can have either a rank $1$ or a rank $n-1$ solution and nothing in between. The special case of a rank $1$ solution corresponds to the case where just one latent variable captures all the dependencies among the observables, giving rise to a star topology. We found explicit conditions for both rank $1$ and rank $n-1$ solutions for the CMTFA solution of $\Sigma_x$. As a basic attempt towards building a more general Gaussian tree, we have found a necessary and a sufficient condition for multiple clusters, each having a rank $1$ CMTFA solution, to satisfy a minimum probability to combine together to build a Gaussian tree. To support our analytical findings, we have presented some numerical results demonstrating the usefulness of the contributions of our work. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 159,790
1406.3170 | Compact Indexes for Flexible Top-k Retrieval | We engineer a self-index based retrieval system capable of rank-safe evaluation of top-k queries. The framework generalizes the GREEDY approach of Culpepper et al. (ESA 2010) to handle multi-term queries, including over phrases. We propose two techniques which significantly reduce the ranking time for a wide range of popular Information Retrieval (IR) relevance measures, such as TFxIDF and BM25. First, we reorder elements in the document array according to document weight. Second, we introduce the repetition array, which generalizes Sadakane's (JDA 2007) document frequency structure to document subsets. Combining document and repetition array, we achieve attractive functionality-space trade-offs. We provide an extensive evaluation of our system on terabyte-sized IR collections. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 33,820 |
2308.05665 | Exploring Deep Learning Approaches to Predict Person and Vehicle Trips:
An Analysis of NHTS Data | Modern transportation planning relies heavily on accurate predictions of person and vehicle trips. However, traditional planning models often fail to account for the intricacies and dynamics of travel behavior, leading to less-than-optimal accuracy in these predictions. This study explores the potential of deep learning techniques to transform the way we approach trip predictions, and ultimately, transportation planning. Utilizing a comprehensive dataset from the National Household Travel Survey (NHTS), we developed and trained a deep learning model for predicting person and vehicle trips. The proposed model leverages the vast amount of information in the NHTS data, capturing complex, non-linear relationships that were previously overlooked by traditional models. As a result, our deep learning model achieved an impressive accuracy of 98% for person trip prediction and 96% for vehicle trip estimation. This represents a significant improvement over the performances of traditional transportation planning models, thereby demonstrating the power of deep learning in this domain. The implications of this study extend beyond just more accurate predictions. By enhancing the accuracy and reliability of trip prediction models, planners can formulate more effective, data-driven transportation policies, infrastructure, and services. As such, our research underscores the need for the transportation planning field to embrace advanced techniques like deep learning. The detailed methodology, along with a thorough discussion of the results and their implications, are presented in the subsequent sections of this paper. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 384,871 |
2302.01188 | Best Possible Q-Learning | Fully decentralized learning, where the global information, i.e., the actions of other agents, is inaccessible, is a fundamental challenge in cooperative multi-agent reinforcement learning. However, the convergence and optimality of most decentralized algorithms are not theoretically guaranteed, since the transition probabilities are non-stationary as all agents are updating policies simultaneously. To tackle this challenge, we propose best possible operator, a novel decentralized operator, and prove that the policies of agents will converge to the optimal joint policy if each agent independently updates its individual state-action value by the operator. Further, to make the update more efficient and practical, we simplify the operator and prove that the convergence and optimality still hold with the simplified one. By instantiating the simplified operator, the derived fully decentralized algorithm, best possible Q-learning (BQL), does not suffer from non-stationarity. Empirically, we show that BQL achieves remarkable improvement over baselines in a variety of cooperative multi-agent tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 343,513 |
1710.02174 | A study of Thompson Sampling with Parameter h | The Thompson Sampling algorithm is a well-known Bayesian algorithm for solving the stochastic multi-armed bandit problem. At each time step the algorithm chooses each arm with probability proportional to it being the current best arm. We modify the strategy by introducing a parameter h which alters the importance of the probability of an arm being the current best arm. We show that the optimality of Thompson sampling is robust to this perturbation within a range of parameter values for two-arm bandits. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 82,123
2305.04808 | CAT: A Contextualized Conceptualization and Instantiation Framework for
Commonsense Reasoning | Commonsense reasoning, aiming at endowing machines with a human-like ability to make situational presumptions, is extremely challenging to generalize. Someone who barely knows about "meditation," while being knowledgeable about "singing," can still infer that "meditation makes people relaxed" from the existing knowledge that "singing makes people relaxed" by first conceptualizing "singing" as a "relaxing event" and then instantiating that event to "meditation." This process, known as conceptual induction and deduction, is fundamental to commonsense reasoning while lacking both labeled data and methodologies to enhance commonsense modeling. To fill such a research gap, we propose CAT (Contextualized ConceptuAlization and InsTantiation), a semi-supervised learning framework that integrates event conceptualization and instantiation to conceptualize commonsense knowledge bases at scale. Extensive experiments show that our framework achieves state-of-the-art performances on two conceptualization tasks, and the acquired abstract commonsense knowledge can significantly improve commonsense inference modeling. Our code, data, and fine-tuned models are publicly available at https://github.com/HKUST-KnowComp/CAT. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 362,909
2409.01763 | FC-KAN: Function Combinations in Kolmogorov-Arnold Networks | In this paper, we introduce FC-KAN, a Kolmogorov-Arnold Network (KAN) that leverages combinations of popular mathematical functions such as B-splines, wavelets, and radial basis functions on low-dimensional data through element-wise operations. We explore several methods for combining the outputs of these functions, including sum, element-wise product, the addition of sum and element-wise product, representations of quadratic and cubic functions, concatenation, linear transformation of the concatenated output, and others. In our experiments, we compare FC-KAN with a multi-layer perceptron network (MLP) and other existing KANs, such as BSRBF-KAN, EfficientKAN, FastKAN, and FasterKAN, on the MNIST and Fashion-MNIST datasets. Two variants of FC-KAN, which use a combination of outputs from B-splines and Difference of Gaussians (DoG) and from B-splines and linear transformations in the form of a quadratic function, overall outperformed other models on the average of 5 independent training runs. We expect that FC-KAN can leverage function combinations to design future KANs. Our repository is publicly available at: https://github.com/hoangthangta/FC_KAN. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 485,460
2407.09494 | Exploring Thermography Technology: A Comprehensive Facial Dataset for
Face Detection, Recognition, and Emotion | This dataset includes 6823 thermal images captured using a UNI-T UTi165A camera for face detection, recognition, and emotion analysis. It consists of 2485 facial recognition images depicting emotions (happy, sad, angry, natural, surprised), 2054 images for face recognition, and 2284 images for face detection. The dataset covers various conditions, color palettes, shooting angles, and zoom levels, with a temperature range of -10°C to 400°C and a resolution of 19,200 pixels. It serves as a valuable resource for advancing thermal imaging technology, aiding in algorithm development, and benchmarking for facial recognition across different palettes. Additionally, it contributes to facial motion recognition, fostering interdisciplinary collaboration in computer vision, psychology, and neuroscience. The dataset promotes transparency in thermal face detection and recognition research, with applications in security, healthcare, and human-computer interaction. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 472,590
2201.08448 | Kinit Classification in Ethiopian Chants, Azmaris and Modern Music: A
New Dataset and CNN Benchmark | In this paper, we create EMIR, the first-ever Music Information Retrieval dataset for Ethiopian music. EMIR is freely available for research purposes and contains 600 sample recordings of Orthodox Tewahedo chants, traditional Azmari songs and contemporary Ethiopian secular music. Each sample is classified by five expert judges into one of four well-known Ethiopian Kinits, Tizita, Bati, Ambassel and Anchihoye. Each Kinit uses its own pentatonic scale and also has its own stylistic characteristics. Thus, Kinit classification needs to combine scale identification with genre recognition. After describing the dataset, we present the Ethio Kinits Model (EKM), based on VGG, for classifying the EMIR clips. In Experiment 1, we investigated whether Filterbank, Mel-spectrogram, Chroma, or Mel-frequency Cepstral coefficient (MFCC) features work best for Kinit classification using EKM. MFCC was found to be superior and was therefore adopted for Experiment 2, where the performance of EKM models using MFCC was compared using three different audio sample lengths. 3s length gave the best results. In Experiment 3, EKM and four existing models were compared on the EMIR dataset: AlexNet, ResNet50, VGG16 and LSTM. EKM was found to have the best accuracy (95.00%) as well as the fastest training time. We hope this work will encourage others to explore Ethiopian music and to experiment with other models for Kinit classification. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 276,338 |
2211.08429 | An Automatic ICD Coding Network Using Partition-Based Label Attention | International Classification of Diseases (ICD) is a global medical classification system which provides unique codes for diagnoses and procedures appropriate to a patient's clinical record. However, manual coding by human coders is expensive and error-prone. Automatic ICD coding has the potential to solve this problem. With the advancement of deep learning technologies, many deep learning-based methods for automatic ICD coding are being developed. In particular, a label attention mechanism is effective for multi-label classification, i.e., the ICD coding. It effectively obtains the label-specific representations from the input clinical records. However, because the existing label attention mechanism finds key tokens in the entire text at once, the important information dispersed in each paragraph may be omitted from the attention map. To overcome this, we propose a novel neural network architecture composed of two parts of encoders and two kinds of label attention layers. The input text is segmentally encoded in the former encoder and integrated by the follower. Then, the conventional and partition-based label attention mechanisms extract important global and local feature representations. Our classifier effectively integrates them to enhance the ICD coding performance. We verified the proposed method using the MIMIC-III, a benchmark dataset of the ICD coding. Our results show that our network improves the ICD coding performance based on the partition-based mechanism. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 330,624 |
2309.07967 | iHAS: Instance-wise Hierarchical Architecture Search for Deep Learning
Recommendation Models | Current recommender systems employ large-sized embedding tables with uniform dimensions for all features, leading to overfitting, high computational cost, and suboptimal generalizing performance. Many techniques aim to solve this issue by feature selection or embedding dimension search. However, these techniques typically select a fixed subset of features or embedding dimensions for all instances and feed all instances into one recommender model without considering heterogeneity between items or users. This paper proposes a novel instance-wise Hierarchical Architecture Search framework, iHAS, which automates neural architecture search at the instance level. Specifically, iHAS incorporates three stages: searching, clustering, and retraining. The searching stage identifies optimal instance-wise embedding dimensions across different field features via carefully designed Bernoulli gates with stochastic selection and regularizers. After obtaining these dimensions, the clustering stage divides samples into distinct groups via a deterministic selection approach of Bernoulli gates. The retraining stage then constructs different recommender models, each one designed with optimal dimensions for the corresponding group. We conduct extensive experiments to evaluate the proposed iHAS on two public benchmark datasets from a real-world recommender system. The experimental results demonstrate the effectiveness of iHAS and its outstanding transferability to widely-used deep recommendation models. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 391,976 |
1912.05901 | Adaptive Bayesian Reticulum | Neural Networks and Decision Trees: two popular techniques for supervised learning that are seemingly disconnected in their formulation and optimization method, have recently been combined in a single construct. The connection pivots on assembling an artificial Neural Network with nodes that allow for a gate-like function to mimic a tree split, optimized using the standard approach of recursively applying the chain rule to update its parameters. Yet two main challenges have impeded wide use of this hybrid approach: (a) the inability of global gradient ascent techniques to optimize hierarchical parameters (as introduced by the gate function); and (b) the construction of the tree structure, which has relied on standard decision tree algorithms to learn the network topology or incrementally (and heuristically) searching the space at random. Here we propose a probabilistic construct that exploits the idea of a node's unexplained potential (the total error channeled through the node) in order to decide where to expand further, mimicking the standard tree construction in a Neural Network setting, alongside a modified gradient ascent that first locally optimizes an expanded node before a global optimization. The probabilistic approach allows us to evaluate each new split as a ratio of likelihoods that balances the statistical improvement in explaining the evidence against the additional model complexity --- thus providing a natural stopping condition. The result is a novel classification and regression technique that leverages the strength of both: a tree-structure that grows naturally and is simple to interpret with the plasticity of Neural Networks that allow for soft margins and slanted boundaries. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 157,217 |
2501.17550 | Action Recognition Using Temporal Shift Module and Ensemble Learning | This paper presents the first-rank solution for the Multi-Modal Action Recognition Challenge, part of the Multi-Modal Visual Pattern Recognition Workshop at the International Conference on Pattern Recognition (ICPR) 2024. The competition aimed to recognize human actions using a diverse dataset of 20 action classes, collected from multi-modal sources. The proposed approach is built upon the Temporal Shift Module (TSM), a technique aimed at efficiently capturing temporal dynamics in video data, incorporating multiple data input types. Our strategy included transfer learning to leverage pre-trained models, followed by meticulous fine-tuning on the challenge's specific dataset to optimize performance for the 20 action classes. We carefully selected a backbone network to balance computational efficiency and recognition accuracy and further refined the model using an ensemble technique that integrates outputs from different modalities. This ensemble approach proved crucial in boosting the overall performance. Our solution achieved a perfect top-1 accuracy on the test set, demonstrating the effectiveness of the proposed approach in recognizing human actions across 20 classes. Our code is available online at https://github.com/ffyyytt/TSM-MMVPR. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 528,367
1803.00218 | Interval-based Prediction Uncertainty Bound Computation in Learning with
Missing Values | The problem of machine learning with missing values is common in many areas. A simple approach is to first construct a dataset without missing values simply by discarding instances with missing entries or by imputing a fixed value for each missing entry, and then train a prediction model with the new dataset. A drawback of this naive approach is that the uncertainty in the missing entries is not properly incorporated in the prediction. In order to evaluate prediction uncertainty, the multiple imputation (MI) approach has been studied, but the performance of MI is sensitive to the choice of the probabilistic model of the true values in the missing entries, and the computational cost of MI is high because multiple models must be trained. In this paper, we propose an alternative approach called the Interval-based Prediction Uncertainty Bounding (IPUB) method. The IPUB method represents the uncertainties due to missing entries as intervals, and efficiently computes the lower and upper bounds of the prediction results when all possible training sets constructed by imputing arbitrary values in the intervals are considered. The IPUB method can be applied to a wide class of convex learning algorithms including penalized least-squares regression, support vector machine (SVM), and logistic regression. We demonstrate the advantages of the IPUB method by comparing it with an existing method in numerical experiment with benchmark datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 91,629 |
2310.07855 | CrIBo: Self-Supervised Learning via Cross-Image Object-Level
Bootstrapping | Leveraging nearest neighbor retrieval for self-supervised representation learning has proven beneficial with object-centric images. However, this approach faces limitations when applied to scene-centric datasets, where multiple objects within an image are only implicitly captured in the global representation. Such global bootstrapping can lead to undesirable entanglement of object representations. Furthermore, even object-centric datasets stand to benefit from a finer-grained bootstrapping approach. In response to these challenges, we introduce a novel Cross-Image Object-Level Bootstrapping method tailored to enhance dense visual representation learning. By employing object-level nearest neighbor bootstrapping throughout the training, CrIBo emerges as a notably strong and adequate candidate for in-context learning, leveraging nearest neighbor retrieval at test time. CrIBo shows state-of-the-art performance on the latter task while being highly competitive in more standard downstream segmentation tasks. Our code and pretrained models are publicly available at https://github.com/tileb1/CrIBo. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 399,153 |
2502.03023 | Parametric Scaling Law of Tuning Bias in Conformal Prediction | Conformal prediction is a popular framework of uncertainty quantification that constructs prediction sets with coverage guarantees. To uphold the exchangeability assumption, many conformal prediction methods necessitate an additional holdout set for parameter tuning. Yet, the impact of violating this principle on coverage remains underexplored, making it ambiguous in practical applications. In this work, we empirically find that the tuning bias - the coverage gap introduced by leveraging the same dataset for tuning and calibration, is negligible for simple parameter tuning in many conformal prediction methods. In particular, we observe the scaling law of the tuning bias: this bias increases with parameter space complexity and decreases with calibration set size. Formally, we establish a theoretical framework to quantify the tuning bias and provide rigorous proof for the scaling law of the tuning bias by deriving its upper bound. In the end, we discuss how to reduce the tuning bias, guided by the theories we developed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 530,566 |
2104.07985 | Probabilistic water demand forecasting using quantile regression
algorithms | Machine and statistical learning algorithms can be reliably automated and applied at scale. Therefore, they can constitute a considerable asset for designing practical forecasting systems, such as those related to urban water demand. Quantile regression algorithms are statistical and machine learning algorithms that can provide probabilistic forecasts in a straightforward way, and have not been applied so far for urban water demand forecasting. In this work, we aim to fill this gap by automating and extensively comparing several quantile-regression-based practical systems for probabilistic one-day ahead urban water demand forecasting. For designing the practical systems, we use five individual algorithms (i.e., the quantile regression, linear boosting, generalized random forest, gradient boosting machine and quantile regression neural network algorithms), their mean combiner and their median combiner. The comparison is conducted by exploiting a large urban water flow dataset, as well as several types of hydrometeorological time series (which are considered as exogenous predictor variables in the forecasting setting). The results mostly favour the practical systems designed using the linear boosting algorithm, probably due to the presence of trends in the urban water flow time series. The forecasts of the mean and median combiners are also found to be skilful in general terms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 230,621 |
2203.14507 | ANNA: Enhanced Language Representation for Question Answering | Pre-trained language models have brought significant improvements in performance in a variety of natural language processing tasks. Most existing models performing state-of-the-art results have shown their approaches in the separate perspectives of data processing, pre-training tasks, neural network modeling, or fine-tuning. In this paper, we demonstrate how the approaches affect performance individually, and that the language model performs the best results on a specific question answering task when those approaches are jointly considered in pre-training models. In particular, we propose an extended pre-training task, and a new neighbor-aware mechanism that attends neighboring tokens more to capture the richness of context for pre-training language modeling. Our best model achieves new state-of-the-art results of 95.7\% F1 and 90.6\% EM on SQuAD 1.1 and also outperforms existing pre-trained language models such as RoBERTa, ALBERT, ELECTRA, and XLNet on the SQuAD 2.0 benchmark. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 288,037 |
2308.00002 | An Overview Of Temporal Commonsense Reasoning and Acquisition | Temporal commonsense reasoning refers to the ability to understand the typical temporal context of phrases, actions, and events, and use it to reason over problems requiring such knowledge. This trait is essential in temporal natural language processing tasks, with possible applications such as timeline summarization, temporal question answering, and temporal natural language inference. Recent research on the performance of large language models suggests that, although they are adept at generating syntactically correct sentences and solving classification tasks, they often take shortcuts in their reasoning and fall prey to simple linguistic traps. This article provides an overview of research in the domain of temporal commonsense reasoning, particularly focusing on enhancing language model performance through a variety of augmentations and their evaluation across a growing number of datasets. However, these augmented models still struggle to approach human performance on reasoning tasks over temporal common sense properties, such as the typical occurrence times, orderings, or durations of events. We further emphasize the need for careful interpretation of research to guard against overpromising evaluation results in light of the shallow reasoning present in transformers. This can be achieved by appropriately preparing datasets and suitable evaluation metrics. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 382,782 |
1912.01051 | Estimating Numerical Distributions under Local Differential Privacy | When collecting information, local differential privacy (LDP) relieves the concern of privacy leakage from users' perspective, as a user's private information is randomized before being sent to the aggregator. We study the problem of recovering the distribution over a numerical domain while satisfying LDP. While one can discretize a numerical domain and then apply the protocols developed for categorical domains, we show that taking advantage of the numerical nature of the domain results in a better trade-off of privacy and utility. We introduce a new reporting mechanism, called the square wave (SW) mechanism, which exploits the numerical nature in reporting. We also develop an Expectation Maximization with Smoothing (EMS) algorithm, which is applied to aggregated histograms from the SW mechanism to estimate the original distributions. Extensive experiments demonstrate that our proposed approach, SW with EMS, consistently outperforms other methods in a variety of utility metrics. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | true | false | 155,959
2206.01474 | Offline Reinforcement Learning with Causal Structured World Models | Model-based methods have recently shown promise for offline reinforcement learning (RL), aiming to learn good policies from historical data without interacting with the environment. Previous model-based offline RL methods learn fully connected nets as world-models that map the states and actions to the next-step states. However, it is sensible that a world-model should adhere to the underlying causal effect such that it will support learning an effective policy generalizing well in unseen states. In this paper, we first provide theoretical results that causal world-models can outperform plain world-models for offline RL by incorporating the causal structure into the generalization error bound. We then propose a practical algorithm, oFfline mOdel-based reinforcement learning with CaUsal Structure (FOCUS), to illustrate the feasibility of learning and leveraging causal structure in offline RL. Experimental results on two benchmarks show that FOCUS reconstructs the underlying causal structure accurately and robustly. Consequently, it performs better than the plain model-based offline RL algorithms and other causal model-based RL algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 300,490
2210.02659 | Explainable Abuse Detection as Intent Classification and Slot Filling | To proactively offer social media users a safe online experience, there is a need for systems that can detect harmful posts and promptly alert platform moderators. In order to guarantee the enforcement of a consistent policy, moderators are provided with detailed guidelines. In contrast, most state-of-the-art models learn what abuse is from labelled examples and as a result base their predictions on spurious cues, such as the presence of group identifiers, which can be unreliable. In this work we introduce the concept of policy-aware abuse detection, abandoning the unrealistic expectation that systems can reliably learn which phenomena constitute abuse from inspecting the data alone. We propose a machine-friendly representation of the policy that moderators wish to enforce, by breaking it down into a collection of intents and slots. We collect and annotate a dataset of 3,535 English posts with such slots, and show how architectures for intent classification and slot filling can be used for abuse detection, while providing a rationale for model decisions. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 321,732 |
2208.06461 | Real-Time Accident Detection in Traffic Surveillance Using Deep Learning | Automatic detection of traffic accidents is an important emerging topic in traffic monitoring systems. Nowadays many urban intersections are equipped with surveillance cameras connected to traffic management systems. Therefore, computer vision techniques can be viable tools for automatic accident detection. This paper presents a new efficient framework for accident detection at intersections for traffic surveillance applications. The proposed framework consists of three hierarchical steps, including efficient and accurate object detection based on the state-of-the-art YOLOv4 method, object tracking based on Kalman filter coupled with the Hungarian algorithm for association, and accident detection by trajectory conflict analysis. A new cost function is applied for object association to accommodate for occlusion, overlapping objects, and shape changes in the object tracking step. The object trajectories are analyzed in terms of velocity, angle, and distance in order to detect different types of trajectory conflicts including vehicle-to-vehicle, vehicle-to-pedestrian, and vehicle-to-bicycle. Experimental results using real traffic video data show the feasibility of the proposed method in real-time applications of traffic surveillance. In particular, trajectory conflicts, including near-accidents and accidents occurring at urban intersections are detected with a low false alarm rate and a high detection rate. The robustness of the proposed framework is evaluated using video sequences collected from YouTube with diverse illumination conditions. The dataset is publicly available at: http://github.com/hadi-ghnd/AccidentDetection. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 312,724 |
2411.05003 | ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning | Recently, breakthroughs in video modeling have allowed for controllable camera trajectories in generated videos. However, these methods cannot be directly applied to user-provided videos that are not generated by a video model. In this paper, we present ReCapture, a method for generating new videos with novel camera trajectories from a single user-provided video. Our method allows us to re-generate the reference video, with all its existing scene motion, from vastly different angles and with cinematic camera motion. Notably, using our method we can also plausibly hallucinate parts of the scene that were not observable in the reference video. Our method works by (1) generating a noisy anchor video with a new camera trajectory using multiview diffusion models or depth-based point cloud rendering and then (2) regenerating the anchor video into a clean and temporally consistent reangled video using our proposed masked video fine-tuning technique. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | true | 506,504
1910.06160 | Mask-Guided Attention Network for Occluded Pedestrian Detection | Pedestrian detection relying on deep convolution neural networks has made significant progress. Though promising results have been achieved on standard pedestrians, the performance on heavily occluded pedestrians remains far from satisfactory. The main culprits are intra-class occlusions involving other pedestrians and inter-class occlusions caused by other objects, such as cars and bicycles. These result in a multitude of occlusion patterns. We propose an approach for occluded pedestrian detection with the following contributions. First, we introduce a novel mask-guided attention network that fits naturally into popular pedestrian detection pipelines. Our attention network emphasizes on visible pedestrian regions while suppressing the occluded ones by modulating full body features. Second, we empirically demonstrate that coarse-level segmentation annotations provide reasonable approximation to their dense pixel-wise counterparts. Experiments are performed on CityPersons and Caltech datasets. Our approach sets a new state-of-the-art on both datasets. Our approach obtains an absolute gain of 9.5% in log-average miss rate, compared to the best reported results on the heavily occluded (HO) pedestrian set of CityPersons test set. Further, on the HO pedestrian set of Caltech dataset, our method achieves an absolute gain of 5.0% in log-average miss rate, compared to the best reported results. Code and models are available at: https://github.com/Leotju/MGAN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 149,278 |
1707.01673 | Energy Efficient Resource Allocation for Hybrid Services with Future Channel Gains | In this paper, we propose a framework to maximize energy efficiency (EE) of a system supporting real-time (RT) and non-real-time services by exploiting future average channel gains of mobile users, which change in the timescale of seconds and are reported predictable within a minute-long time window. To demonstrate the potential of improving EE by jointly optimizing resource allocation for both services by harnessing both future average channel gains and current instantaneous channel gains, we optimize a two-timescale policy with perfect prediction, by taking orthogonal frequency division multiple access system serving RT and video-on-demand (VoD) users as an example. Considering that fine-grained prediction for every user is with high cost, we propose a heuristic policy that only needs to predict the median of average channel gains of VoD users. Simulation results show that the optimal policy outperforms relevant counterparts, indicating the necessity of the joint optimization for both services and for two timescales. Besides, the heuristic policy performs closely to the optimal policy with perfect prediction while becomes superior with large prediction errors. This suggests that the EE gain over non-predictive policies can be captured with coarse-grained prediction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 76,581