| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.00238 | Uncertainty categories in medical image segmentation: a study of source-related diversity | Measuring uncertainties in the output of a deep learning method is useful in several ways, such as assisting with interpretation of the outputs, helping build confidence with end users, and improving the training and performance of the networks. Several methods have been proposed to estimate uncertainties, including those from epistemic (relating to the model used) and aleatoric (relating to the data) sources, using test-time dropout and augmentation, respectively. Not only are these uncertainty sources different, but they are governed by parameter settings (e.g., dropout rate or type and level of augmentation) that establish even more distinct uncertainty categories. This work investigates how the uncertainties from these categories differ in magnitude and spatial pattern, to empirically address the question of whether they provide usefully distinct information that should be captured whenever uncertainties are used. We use the well-characterised BraTS challenge dataset to demonstrate that there are substantial differences in both the magnitude and spatial pattern of uncertainties from the different categories, and discuss the implications of these in various use cases. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 282,926 |
2303.08028 | EdgeServe: A Streaming System for Decentralized Model Serving | The relevant features for a machine learning task may arrive as one or more continuous streams of data. Serving machine learning models over streams of data creates a number of interesting systems challenges in managing data routing, time-synchronization, and rate control. This paper presents EdgeServe, a distributed streaming system that can serve predictions from machine learning models in real time. We evaluate EdgeServe on three streaming prediction tasks: (1) human activity recognition, (2) autonomous driving, and (3) network intrusion detection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | true | 351,477 |
2009.10794 | Investigating Machine Learning Methods for Language and Dialect Identification of Cuneiform Texts | Identification of the languages written using cuneiform symbols is a difficult task due to the lack of resources and the problem of tokenization. The Cuneiform Language Identification task in VarDial 2019 addresses the problem of identifying seven languages and dialects written in cuneiform: Sumerian and six dialects of Akkadian (Old Babylonian, Middle Babylonian Peripheral, Standard Babylonian, Neo-Babylonian, Late Babylonian, and Neo-Assyrian). This paper describes the approaches taken by the SharifCL team for this problem in VarDial 2019. The best result belongs to an ensemble of Support Vector Machines and a naive Bayes classifier, both working on character-level features, with a macro-averaged F1-score of 72.10%. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 196,983 |
2004.04907 | Socioeconomic correlations of urban patterns inferred from aerial images: interpreting activation maps of Convolutional Neural Networks | Urbanisation is a great challenge for modern societies, promising better access to economic opportunities while widening socioeconomic inequalities. Accurately tracking how this process unfolds has been challenging for traditional data collection methods, while remote sensing information offers an alternative for gathering a more complete view of these societal changes. By feeding a neural network with satellite images, one may recover the socioeconomic information associated with that area; however, these models fail to explain how the visual features contained in a sample trigger a given prediction. Here we close this gap by predicting socioeconomic status across France from aerial images and interpreting class activation mappings in terms of urban topology. We show that the model disregards the spatial correlations existing between urban class and socioeconomic status to derive its predictions. These results pave the way for building interpretable models, which may help to better track and understand urbanisation and its consequences. | false | false | false | false | false | false | true | false | false | false | false | true | false | true | false | false | false | false | 172,022 |
2408.05697 | Evaluating BM3D and NBNet: A Comprehensive Study of Image Denoising Across Multiple Datasets | This paper investigates image denoising, comparing traditional non-learning-based techniques, represented by Block-Matching 3D (BM3D), with modern learning-based methods, exemplified by NBNet. We assess these approaches across diverse datasets, including CURE-OR, CURE-TSR, SSID+, Set-12, and Chest-Xray, each presenting unique noise challenges. Our analysis employs seven Image Quality Assessment (IQA) metrics and examines the impact on object detection performance. We find that while BM3D excels in scenarios like blur challenges, NBNet is more effective in complex noise environments such as under-exposure and over-exposure. The study reveals the strengths and limitations of each method, providing insights into the effectiveness of different denoising strategies in varied real-world applications. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 479,894 |
2403.05102 | Enhancing Texture Generation with High-Fidelity Using Advanced Texture Priors | The recent advancements in 2D generation technology have sparked widespread discussion on using 2D priors for 3D shape and texture content generation. However, these methods often overlook subsequent user operations, such as the texture aliasing and blurring that occur when the user acquires the 3D model and simplifies its structure. Traditional graphics methods partially alleviate this issue, but recent texture synthesis technologies fail to ensure consistency with the original model's appearance and cannot achieve high-fidelity restoration. Moreover, background noise frequently arises in high-resolution texture synthesis, limiting the practical application of these generation technologies. In this work, we propose a high-resolution and high-fidelity texture restoration technique that uses the rough texture as the initial input to enhance the consistency between the synthetic texture and the initial texture, thereby overcoming the aliasing and blurring caused by the user's structure simplification operations. Additionally, we introduce a background noise smoothing technique based on a self-supervised scheme to address the noise problem in current high-resolution texture synthesis schemes. Our approach enables high-resolution texture synthesis, paving the way for high-definition and high-detail texture synthesis technology. Experiments demonstrate that our scheme outperforms currently known schemes in high-fidelity texture recovery under high-resolution conditions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 435,869 |
2409.09302 | Heterogeneous Roles against Assignment Based Policies in Two vs Two Target Defense Game | In this paper, we consider a target defense game in which the attacker team seeks to reach a high-value target while the defender team seeks to prevent that by capturing the attackers away from the target. To address the curse of dimensionality, a popular approach to solving such a team-vs-team game is to decompose it into a set of one-vs-one games. Such an approximation assumes independence between teammates assigned to different one-vs-one games, ignoring the possibility of a richer set of cooperative behaviors and ultimately leading to suboptimality. In this paper, we provide teammate-aware strategies for the attacker team and show that they can outperform the assignment-based strategy, provided the defenders still employ an assignment-based strategy. More specifically, the attacker strategy involves heterogeneous roles, where one attacker actively intercepts a defender to help its teammate reach the target. We provide sufficient conditions under which such a strategy benefits the attackers, and we validate the results using numerical simulations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 488,260 |
2103.05465 | PointDSC: Robust Point Cloud Registration using Deep Spatial Consistency | Removing outlier correspondences is one of the critical steps for successful feature-based point cloud registration. Despite the increasing popularity of introducing deep learning methods in this field, spatial consistency, which is essentially established by a Euclidean transformation between point clouds, has received almost no individual attention in existing learning frameworks. In this paper, we present PointDSC, a novel deep neural network that explicitly incorporates spatial consistency for pruning outlier correspondences. First, we propose a nonlocal feature aggregation module, weighted by both feature and spatial coherence, for feature embedding of the input correspondences. Second, we formulate a differentiable spectral matching module, supervised by pairwise spatial compatibility, to estimate the inlier confidence of each correspondence from the embedded features. With modest computation cost, our method outperforms the state-of-the-art hand-crafted and learning-based outlier rejection approaches on several real-world datasets by a significant margin. We also show its wide applicability by combining PointDSC with different 3D local descriptors. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 223,989 |
1904.09764 | Deep Anchored Convolutional Neural Networks | Convolutional Neural Networks (CNNs) have been proven to be extremely successful at solving computer vision tasks. State-of-the-art methods favor such deep network architectures for their accuracy, at the cost of a massive number of parameters and high weight redundancy. Previous works have studied how to prune such CNN weights. In this paper, we go to another extreme and analyze the performance of a network stacked with a single convolution kernel across layers, as well as other weight-sharing techniques. We name it the Deep Anchored Convolutional Neural Network (DACNN). Sharing the same kernel weights across layers reduces the model size tremendously; more precisely, the network is compressed in memory by a factor of L, where L is the desired depth of the network, disregarding the fully connected layer for prediction. The number of parameters in a DACNN barely increases as the network grows deeper, which allows us to build deep DACNNs without any concern about memory costs. We also introduce a partially shared weights network (DACNN-mix) as well as an easy-plug-in module, coined regulators, to boost the performance of our architecture. We validated our idea on 3 datasets: CIFAR-10, CIFAR-100 and SVHN. Our results show that we can save massive amounts of memory with our model while maintaining high accuracy. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 128,476 |
2502.11234 | MaskFlow: Discrete Flows For Flexible and Efficient Long Video Generation | Generating long, high-quality videos remains a challenge due to the complex interplay of spatial and temporal dynamics and hardware limitations. In this work, we introduce \textbf{MaskFlow}, a unified video generation framework that combines discrete representations with flow-matching to enable efficient generation of high-quality long videos. By leveraging a frame-level masking strategy during training, MaskFlow conditions on previously generated unmasked frames to generate videos with lengths ten times beyond that of the training sequences. MaskFlow does so very efficiently by enabling the use of fast Masked Generative Model (MGM)-style sampling and can be deployed in both fully autoregressive as well as full-sequence generation modes. We validate the quality of our method on the FaceForensics (FFS) and Deepmind Lab (DMLab) datasets and report Fr\'echet Video Distance (FVD) competitive with state-of-the-art approaches. We also provide a detailed analysis on the sampling efficiency of our method and demonstrate that MaskFlow can be applied to both timestep-dependent and timestep-independent models in a training-free manner. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 534,262 |
1212.4940 | Fourier Domain Beamforming for Medical Ultrasound | Sonography techniques use multiple transducer elements for tissue visualization. Signals detected at each element are sampled prior to digital beamforming. The required sampling rates are up to 4 times the Nyquist rate of the signal and result in a considerable amount of data that needs to be stored and processed. A previously developed technique, based on the finite rate of innovation model, compressed sensing (CS) and Xampling ideas, reduces the number of samples needed to reconstruct an image comprised of strong reflectors. A significant drawback of this method is its inability to treat speckle, which is of significant importance in medical imaging. Here we build on previous work and show explicitly how to perform beamforming in the Fourier domain. Beamforming in frequency exploits the low bandwidth of the beamformed signal and bypasses the oversampling dictated by digital implementation of beamforming in time. We show that this yields the same beamformed image as standard beamforming but from far fewer samples. Finally, we present an analysis-based CS technique that allows for further reduction in sampling rate, using only a portion of the beamformed signal's bandwidth, namely, sampling the signal at sub-Nyquist rates. We demonstrate our methods on in vivo cardiac ultrasound data and show that rate reductions of up to 1/25 relative to standard beamforming rates are possible. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 20,501 |
2203.02997 | Smoothing with the Best Rectangle Window is Optimal for All Tapered Rectangle Windows | We investigate the optimal selection of weight windows for the problem of weighted least squares. We show that weight windows should be symmetric around their centers, which are also their peaks. We consider the class of tapered rectangle window weights, which are nonincreasing away from the center. We show that the best rectangle window is optimal for such window definitions. We also extend our results to least absolutes and the more general case of arbitrary loss functions, where we find similar results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 283,932 |
1102.3181 | Spatially Coupled Quasi-Cyclic Quantum LDPC Codes | We face the following dilemma in designing low-density parity-check (LDPC) codes for quantum error correction. 1) The row weights of parity-check matrices should be large: the minimum distances are bounded above by the minimum row weights of the parity-check matrices of the constituent classical codes, and a small minimum distance tends to result in poor decoding performance in the error-floor region. 2) The row weights of parity-check matrices should not be large: the sum-product decoding performance in the waterfall region degrades as the row weight increases. Recently, Kudekar et al. showed that spatially-coupled (SC) LDPC codes exhibit capacity-achieving performance over classical channels. SC LDPC codes have both large row weights and capacity-achieving error-floor and waterfall performance. In this paper, we design SC LDPC-CSS (Calderbank, Shor and Steane) codes for quantum error correction over depolarizing channels. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 9,218 |
1908.06566 | Adversarial Defense by Suppressing High-frequency Components | Recent works show that deep neural networks trained on image classification datasets are biased towards textures. Such models are easily fooled by applying small high-frequency perturbations to clean images. In this paper, we learn robust image classification models by removing high-frequency components. Specifically, we develop a differentiable high-frequency suppression module based on the discrete Fourier transform (DFT). Combined with adversarial training, we won 5th place in the IJCAI-2019 Alibaba Adversarial AI Challenge. Our code is available online. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 142,049 |
2302.08091 | Do We Still Need Clinical Language Models? | Although recent advances in scaling large language models (LLMs) have resulted in improvements on many NLP tasks, it remains unclear whether these models trained primarily with general web text are the right tool in highly specialized, safety critical domains such as clinical text. Recent results have suggested that LLMs encode a surprising amount of medical knowledge. This raises an important question regarding the utility of smaller domain-specific language models. With the success of general-domain LLMs, is there still a need for specialized clinical models? To investigate this question, we conduct an extensive empirical analysis of 12 language models, ranging from 220M to 175B parameters, measuring their performance on 3 different clinical tasks that test their ability to parse and reason over electronic health records. As part of our experiments, we train T5-Base and T5-Large models from scratch on clinical notes from MIMIC III and IV to directly investigate the efficiency of clinical tokens. We show that relatively small specialized clinical models substantially outperform all in-context learning approaches, even when finetuned on limited annotated data. Further, we find that pretraining on clinical tokens allows for smaller, more parameter-efficient models that either match or outperform much larger language models trained on general text. We release the code and the models used under the PhysioNet Credentialed Health Data license and data use agreement. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 345,938 |
2012.00802 | Adversarial Robustness Across Representation Spaces | Adversarial robustness corresponds to the susceptibility of deep neural networks to imperceptible perturbations made at test time. In the context of image tasks, many algorithms have been proposed to make neural networks robust to adversarial perturbations made to the input pixels. These perturbations are typically measured in an $\ell_p$ norm. However, robustness often holds only for the specific attack used for training. In this work we extend the above setting to consider the problem of training of deep neural networks that can be made simultaneously robust to perturbations applied in multiple natural representation spaces. For the case of image data, examples include the standard pixel representation as well as the representation in the discrete cosine transform~(DCT) basis. We design a theoretically sound algorithm with formal guarantees for the above problem. Furthermore, our guarantees also hold when the goal is to require robustness with respect to multiple $\ell_p$ norm based attacks. We then derive an efficient practical implementation and demonstrate the effectiveness of our approach on standard datasets for image classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,239 |
1509.05281 | Network analysis of named entity co-occurrences in written texts | The use of methods borrowed from statistics and physics to analyze written texts has allowed the discovery of unprecedented patterns of human behavior and cognition by establishing links between model features and language structure. While current models have been useful for unveiling patterns via analysis of syntactic and semantic networks, only a few works have probed the relevance of investigating the structure arising from the relationships between relevant entities such as characters, locations and organizations. In this study, we represent entities appearing in the same context as a co-occurrence network, where links are established according to a null model based on random, shuffled texts. Computational simulations performed on novels revealed that the proposed model displays interesting topological features, such as the small-world property, characterized by high values of the clustering coefficient. The effectiveness of our model was verified in a practical pattern recognition task on real networks. When compared with traditional word adjacency networks, our model displayed improved results in identifying unknown references in texts. Because the proposed representation plays a complementary role in characterizing unstructured documents via topological analysis of named entities, we believe that it could be useful for improving the characterization of written texts (and related systems), especially if combined with traditional approaches based on statistical and deeper paradigms. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 47,027 |
1307.3696 | Where in the Internet is congestion? | Understanding the distribution of congestion in the Internet is a long-standing problem. Using data from the SamKnows US broadband access network measurement study, commissioned by the FCC, we explore patterns of congestion distribution in DSL and cable Internet service provider (ISP) networks. Using correlation-based analysis we estimate prevalence of congestion in the periphery versus the core of ISP networks. We show that there are significant differences in congestion levels and its distribution between DSL and cable ISP networks and identify bottleneck sections in each type of network. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 25,823 |
1102.0674 | Effective Mechanism for Social Recommendation of News | Recommendation systems are an important tool for news distribution on the Internet. In this work we modify a recently proposed social recommendation model in order to deal with the absence of explicit user ratings on news. The model consists of a network of users which continually adapts in order to achieve efficient news traffic. To optimize the network's topology we propose different stochastic algorithms that are scalable with respect to the network's size. Agent-based simulations reveal the features and the performance of these algorithms. To overcome the resultant drawbacks of each method we introduce two improved algorithms and show that they can optimize the network's topology almost as fast and effectively as other, non-scalable methods that make use of much more information. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 9,012 |
2111.07525 | Automatic Analysis of Linguistic Features in Journal Articles of Different Academic Impacts with Feature Engineering Techniques | English research articles (RAs) are an essential genre in academia, so attempts to employ NLP to assist the development of academic writing ability have received considerable attention in the last two decades. However, there has been no study employing feature engineering techniques to investigate the linguistic features of RAs of different academic impacts (i.e., papers with high/moderate citation counts published in journals with high/moderate impact factors). This study attempts to extract micro-level linguistic features in high- and moderate-impact journal RAs using feature engineering methods. We extracted 25 highly relevant features from the Corpus of English Journal Articles through feature selection methods. All papers in the corpus deal with COVID-19 medical empirical studies. The classification performance of the selected features was then validated in terms of consistency and accuracy through supervised machine learning methods. Results showed that 24 linguistic features, such as the overlap of content words between adjacent sentences and the use of third-person pronouns, auxiliary verbs, tense, and emotional words, provide consistent and accurate predictions for journal articles with different academic impacts. Lastly, the random forest model is shown to be the best model to fit the relationship between these 24 features and journal articles with high and moderate impacts. These findings can be used to inform academic writing courses and lay the foundation for developing automatic evaluation systems for L2 graduate students. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 266,404 |
2403.12194 | The POLAR Traverse Dataset: A Dataset of Stereo Camera Images Simulating Traverses across Lunar Polar Terrain under Extreme Lighting Conditions | We present the POLAR Traverse Dataset: a dataset of high-fidelity stereo pair images of lunar-like terrain under polar lighting conditions designed to simulate a straight-line traverse. Images from individual traverses with different camera heights and pitches were recorded at 1 m intervals by moving a suspended stereo bar across a test bed filled with regolith simulant and shaped to mimic lunar south polar terrain. Ground truth geometry and camera position information was also recorded. This dataset is intended for developing and testing software algorithms that rely on stereo or monocular camera images, such as visual odometry, for use in the lunar polar environment, as well as to provide insight into the expected lighting conditions in lunar polar regions. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 439,068 |
2203.01488 | PetsGAN: Rethinking Priors for Single Image Generation | Single image generation (SIG), described as generating diverse samples that have similar visual content with the given single image, is first introduced by SinGAN, which builds a pyramid of GANs to progressively learn the internal patch distribution of the single image. It also shows great potential in a wide range of image manipulation tasks. However, the SinGAN paradigm has limitations in terms of generation quality and training time. Firstly, due to the lack of high-level information, SinGAN cannot handle object images as well as it does scene and texture images. Secondly, the separate progressive training scheme is time-consuming and prone to artifact accumulation. To tackle these problems, in this paper, we dig into the SIG problem and improve SinGAN by fully utilizing internal and external priors. The main contributions of this paper include: 1) We introduce to SIG a regularized latent variable model. To the best of our knowledge, this is the first clear formulation and optimization goal of SIG, and all existing methods for SIG can be regarded as special cases of this model. 2) We design a novel prior-based end-to-end trained GAN (PetsGAN) to overcome the problems of SinGAN. Our method gets rid of the time-consuming progressive training scheme and can be trained end-to-end. 3) We conduct abundant qualitative and quantitative experiments to show the superiority of our method in terms of generated image quality, diversity, and training speed. Moreover, we apply our method to other image manipulation tasks (e.g., style transfer, harmonization), and the results further prove the effectiveness and efficiency of our method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 283,392 |
2202.00441 | Few-Bit Backward: Quantized Gradients of Activation Functions for Memory Footprint Reduction | Memory footprint is one of the main limiting factors for large neural network training. In backpropagation, one needs to store the input to each operation in the computational graph. Every modern neural network model has quite a few pointwise nonlinearities in its architecture, and each such operation induces additional memory costs which -- as we show -- can be significantly reduced by quantization of the gradients. We propose a systematic approach to compute optimal quantization of the retained gradients of the pointwise nonlinear functions with only a few bits per element. We show that such approximation can be achieved by computing an optimal piecewise-constant approximation of the derivative of the activation function, which can be done by dynamic programming. The drop-in replacements are implemented for all popular nonlinearities and can be used in any existing pipeline. We confirm the memory reduction and the same convergence on several open benchmarks. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 278,137 |
1808.02350 | YOLO3D: End-to-end real-time 3D Oriented Object Bounding Box Detection from LiDAR Point Cloud | Object detection and classification in 3D is a key task in Automated Driving (AD). LiDAR sensors are employed to provide the 3D point cloud reconstruction of the surrounding environment, while the task of 3D object bounding box detection in real time remains a strong algorithmic challenge. In this paper, we build on the success of the one-shot regression meta-architecture in the 2D perspective image space and extend it to generate oriented 3D object bounding boxes from LiDAR point clouds. Our main contribution is in extending the loss function of YOLO v2 to include the yaw angle, the 3D box center in Cartesian coordinates, and the height of the box as a direct regression problem. This formulation enables real-time performance, which is essential for automated driving. Our results show promising figures on the KITTI benchmark, achieving real-time performance (40 fps) on a Titan X GPU. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 104,755 |
2304.12858 | Unpaired Image Translation to Mitigate Domain Shift in Liquid Argon Time Projection Chamber Detector Responses | Deep learning algorithms often are trained and deployed on different datasets. Any systematic difference between the training and a test dataset may degrade the algorithm performance -- what is known as the domain shift problem. This issue is prevalent in many scientific domains where algorithms are trained on simulated data but applied to real-world datasets. Typically, the domain shift problem is solved through various domain adaptation methods. However, these methods are often tailored for a specific downstream task and may not easily generalize to different tasks. This work explores the feasibility of using an alternative way to solve the domain shift problem that is not specific to any downstream algorithm. The proposed approach relies on modern Unpaired Image-to-Image (UI2I) translation techniques, designed to find translations between different image domains in a fully unsupervised fashion. In this study, the approach is applied to a domain shift problem commonly encountered in Liquid Argon Time Projection Chamber (LArTPC) detector research when seeking a way to translate samples between two differently distributed detector datasets deterministically. This translation allows for mapping real-world data into the simulated data domain, where the downstream algorithms can be run with much less domain-shift-related degradation. Conversely, using the translation from the simulated data in a real-world domain can increase the realism of the simulated dataset and reduce the magnitude of any systematic uncertainties. We adapted several UI2I translation algorithms to work on scientific data and demonstrated the viability of these techniques for solving the domain shift problem with LArTPC detector data. To facilitate further development of domain adaptation techniques for scientific datasets, the "Simple Liquid-Argon Track Samples" dataset used in this study is also published. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 360,372 |
2310.16109 | Complex Image Generation SwinTransformer Network for Audio Denoising | Achieving high-performance audio denoising remains a challenging task in real-world applications. Existing time-frequency methods often ignore the quality of generated frequency-domain images. This paper converts the audio denoising problem into an image generation task. We first develop a complex image generation SwinTransformer network to capture more information from the complex Fourier domain. We then impose structure-similarity and detail loss functions to generate high-quality images, and develop an SDR loss to minimize the difference between denoised and clean audio. Extensive experiments on two benchmark datasets demonstrate that our proposed model outperforms state-of-the-art methods. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 402,576 |
2402.08898 | UniEnc-CASSNAT: An Encoder-only Non-autoregressive ASR for Speech SSL Models | Non-autoregressive automatic speech recognition (NASR) models have gained attention due to their parallelism and fast inference. The encoder-based NASR, e.g. connectionist temporal classification (CTC), can be initialized from the speech foundation models (SFM) but does not account for any dependencies among intermediate tokens. The encoder-decoder-based NASR, like CTC alignment-based single-step non-autoregressive transformer (CASS-NAT), can mitigate the dependency problem but is not able to efficiently integrate SFM. Inspired by the success of recent work of speech-text joint pre-training with a shared transformer encoder, we propose a new encoder-based NASR, UniEnc-CASSNAT, to combine the advantages of CTC and CASS-NAT. UniEnc-CASSNAT consists of only an encoder as the major module, which can be the SFM. The encoder plays the role of both the CASS-NAT encoder and decoder by two forward passes. The first pass of the encoder accepts the speech signal as input, while the concatenation of the speech signal and the token-level acoustic embedding is used as the input for the second pass. Examined on the Librispeech 100h, MyST, and Aishell1 datasets, the proposed UniEnc-CASSNAT achieves state-of-the-art NASR results and is better or comparable to CASS-NAT with only an encoder and hence, fewer model parameters. Our codes are publicly available. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,283 |
1602.00095 | Walsh Sampling with Incomplete Noisy Signals | With the advent of massive data outputs at a regular rate, signal processing technology plays an increasingly key role. Nowadays, signals are not merely restricted to physical sources; they have been extended to digital sources as well. Under the general assumption of discrete statistical signal sources, we propose a practical problem of sampling incomplete noisy signals about which we have no a priori knowledge and for which the sampling size is bounded. We approach this sampling problem via Shannon's channel coding theorem. Our main results demonstrate that it is the large Walsh coefficient(s) that characterize(s) discrete statistical signals, regardless of the signal source. By the connection to Shannon's theorem, we establish the necessary and sufficient condition for our generic sampling problem for the first time. Our generic sampling results find practical and powerful applications not only in statistical cryptanalysis, but also in software system performance optimization. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 51,524 |
2406.08575 | Using Quality Attribute Scenarios for ML Model Test Case Generation | Testing of machine learning (ML) models is a known challenge identified by researchers and practitioners alike. Unfortunately, current practice for ML model testing prioritizes testing for model performance, while often neglecting the requirements and constraints of the ML-enabled system that integrates the model. This limited view of testing leads to failures during integration, deployment, and operations, contributing to the difficulties of moving models from development to production. This paper presents an approach based on quality attribute (QA) scenarios to elicit and define system- and model-relevant test cases for ML models. The QA-based approach described in this paper has been integrated into MLTE, a process and tool to support ML model test and evaluation. Feedback from users of MLTE highlights its effectiveness in testing beyond model performance and identifying failures early in the development process. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 463,539 |
2204.05862 | Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback | We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 291,168 |
2405.01012 | Correcting Biased Centered Kernel Alignment Measures in Biological and Artificial Neural Networks | Centred Kernel Alignment (CKA) has recently emerged as a popular metric to compare activations from biological and artificial neural networks (ANNs) in order to quantify the alignment between internal representations derived from stimuli sets (e.g. images, text, video) that are presented to both systems. In this paper we highlight issues that the community should take into account if using CKA as an alignment metric with neural data. Neural data are in the low-data high-dimensionality domain, which is one of the cases where (biased) CKA results in high similarity scores even for pairs of random matrices. Using fMRI and MEG data from the THINGS project, we show that if biased CKA is applied to representations of different sizes in the low-data high-dimensionality domain, they are not directly comparable due to biased CKA's sensitivity to differing feature-sample ratios and not stimuli-driven responses. This situation can arise both when comparing a pre-selected area of interest (e.g. ROI) to multiple ANN layers, as well as when determining to which ANN layer multiple regions of interest (ROIs) / sensor groups of different dimensionality are most similar. We show that biased CKA can be artificially driven to its maximum value when using independent random data of different sample-feature ratios. We further show that shuffling sample-feature pairs of real neural data does not drastically alter biased CKA similarity in comparison to unshuffled data, indicating an undesirable lack of sensitivity to stimuli-driven neural responses. Positive alignment of true stimuli-driven responses is only achieved by using debiased CKA. Lastly, we report findings that suggest biased CKA is sensitive to the inherent structure of neural data, only differing from shuffled data when debiased CKA detects stimuli-driven alignment. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 451,182 |
2309.00544 | Modular, Multi-Robot Integration of Laboratories: An Autonomous Solid-State Workflow for Powder X-Ray Diffraction | Automation can transform productivity in research activities that use liquid handling, such as organic synthesis, but it has made less impact in materials laboratories, which require sample preparation steps and a range of solid-state characterization techniques. For example, powder X-ray diffraction (PXRD) is a key method in materials and pharmaceutical chemistry, but its end-to-end automation is challenging because it involves solid powder handling and sample processing. Here we present a fully autonomous solid-state workflow for PXRD experiments that can match or even surpass manual data quality. The workflow involves 12 steps performed by a team of three multipurpose robots, illustrating the power of flexible, modular automation to integrate complex, multitask laboratories. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 389,336 |
2407.08990 | Dynamic neural network with memristive CIM and CAM for 2D and 3D vision | The brain is dynamic, associative and efficient. It reconfigures by associating inputs with past experiences, with fused memory and processing. In contrast, AI models are static, unable to associate inputs with past experiences, and run on digital computers with physically separated memory and processing. We propose a hardware-software co-design: a semantic-memory-based dynamic neural network (DNN) using memristors. The network associates incoming data with past experience stored as semantic vectors. The network and the semantic memory are physically implemented on noise-robust ternary memristor-based Computing-In-Memory (CIM) and Content-Addressable Memory (CAM) circuits, respectively. We validate our co-design, using a 40nm memristor macro, on ResNet and PointNet++ for classifying images and 3D points from the MNIST and ModelNet datasets, achieving not only accuracy on par with software but also a 48.1% and 15.9% reduction in computational budget, respectively. Moreover, it delivers a 77.6% and 93.3% reduction in energy consumption. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | true | 472,398 |
2205.15879 | Simplex Neural Population Learning: Any-Mixture Bayes-Optimality in Symmetric Zero-sum Games | Learning to play optimally against any mixture over a diverse set of strategies is of important practical interests in competitive games. In this paper, we propose simplex-NeuPL that satisfies two desiderata simultaneously: i) learning a population of strategically diverse basis policies, represented by a single conditional network; ii) using the same network, learn best-responses to any mixture over the simplex of basis policies. We show that the resulting conditional policies incorporate prior information about their opponents effectively, enabling near optimal returns against arbitrary mixture policies in a game with tractable best-responses. We verify that such policies behave Bayes-optimally under uncertainty and offer insights in using this flexibility at test time. Finally, we offer evidence that learning best-responses to any mixture policies is an effective auxiliary task for strategic exploration, which, by itself, can lead to more performant populations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 299,904 |
2212.08686 | Evaluating Step-by-Step Reasoning through Symbolic Verification | Pre-trained language models (LMs) have shown remarkable reasoning performance using explanations or chain-of-thoughts (CoT) for in-context learning. On the other hand, these reasoning tasks are usually presumed to be more approachable for symbolic programming. To understand the reasoning mechanisms of LMs, we curate synthetic datasets containing equivalent (natural, symbolic) data pairs, where symbolic examples contain first-order logic rules and predicates from non-parametric knowledge bases (KBs), supporting automated verification of intermediate reasoning results. Then we revisit neuro-symbolic approaches and propose LMLP, which learns from demonstrations containing logic rules and corresponding examples to iteratively reason over KBs, recovering Prolog's backward chaining algorithm and supporting automated verification of LMs' outputs. Comprehensive experiments are included to systematically compare LMLP with CoT in deductive reasoning settings, showing that LMLP enjoys more than $25\%$ higher accuracy than CoT on length generalization benchmarks even with smaller model sizes. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 336,837 |
2105.10736 | How They Tweet? An Insightful Analysis of Twitter Handles of Saudi Arabia | The emergence of social network sites has attracted many users across the world to share their feelings, news, achievements, and personal thoughts over several platforms. The recent crisis due to the worldwide lockdown amid COVID-19 has shown how these online social platforms have grown stronger and turned out to be the major source of connection among people when there is social distancing everywhere. Therefore, we have surveyed Twitter users and their mannerisms with respect to languages, frequency of tweets, region of belonging, etc. The above observations have been considered especially with respect to Saudi Arabia. An insightful analysis of the tweets and Twitter handles of the kingdom is presented. The results show some interesting facts that are envisaged to lay a platform for further research in the fields of social, political, and data sciences related to the Middle East. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 236,486 |
1612.02174 | EMC Regulations and Spectral Constraints for Multicarrier Modulation in PLC | This paper considers Electromagnetic Compatibility (EMC) aspects in the context of Power Line Communication (PLC) systems. It offers a complete overview of both narrowband and broadband PLC EMC norms. We discuss how to interpret and translate such norms and measurement procedures into the typical constraints used by designers of communication systems. In particular, the constraints on the modulated signal spectrum are considered, and the ability of pulse-shaped OFDM (PS-OFDM), used in most PLC standards such as IEEE P1901 and P1901.2, to fulfill them is analyzed. In addition, aiming to improve spectrum management ability, a novel scheme named Pulse Shaped Cyclic Block Filtered Multitone modulation (PS-CB-FMT) is introduced and compared to PS-OFDM. It is shown that PS-CB-FMT offers a better ability to fulfill the norms, which translates into higher system capacity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,197 |
2008.01566 | On the Generalizability of Neural Program Models with respect to Semantic-Preserving Program Transformations | With the prevalence of publicly available source code repositories to train deep neural network models, neural program models can do well in source code analysis tasks, such as predicting method names in given programs, that cannot be easily done by traditional program analysis techniques. Although such neural program models have been tested on various existing datasets, the extent to which they generalize to unforeseen source code is largely unknown. Since it is very challenging to test neural program models on all unforeseen programs, in this paper, we propose to evaluate the generalizability of neural program models with respect to semantic-preserving transformations: a generalizable neural program model should perform equally well on programs that are of the same semantics but of different lexical appearances and syntactical structures. We compare the results of various neural program models for the method name prediction task on programs before and after automated semantic-preserving transformations. We use three Java datasets of different sizes and three state-of-the-art neural network models for code, namely code2vec, code2seq, and GGNN, to build nine such neural program models for evaluation. Our results show that even with small semantically preserving changes to the programs, these neural program models often fail to generalize their performance. Our results also suggest that neural program models based on data and control dependencies in programs generalize better than neural program models based only on abstract syntax trees. On the positive side, we observe that as the size of the training dataset grows and diversifies, the generalizability of correct predictions produced by the neural program models can be improved too. Our results on the generalizability of neural program models provide insights to measure their limitations and provide a stepping stone for their improvement. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 190,375 |
0801.4287 | Movie Recommendation Systems Using An Artificial Immune System | We apply Artificial Immune System (AIS) technology to Collaborative Filtering (CF) in building a movie recommendation system. Two different affinity-measure algorithms from AIS, Kendall tau and Weighted Kappa, are used to calculate the correlation coefficients for this movie recommendation system. Our tests suggest that Weighted Kappa is more suitable than Kendall tau for movie recommendation problems. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 1,223 |
1412.2817 | Diffusion Estimation Over Cooperative Multi-Agent Networks With Missing Data | In many fields, and especially in the medical and social sciences and in recommender systems, data are gathered through clinical studies or targeted surveys. Participants are generally reluctant to respond to all questions in a survey or they may lack information to respond adequately to some questions. The data collected from these studies tend to lead to linear regression models where the regression vectors are only known partially: some of their entries are either missing completely or replaced randomly by noisy values. In this work, assuming missing positions are replaced by noisy values, we examine how a connected network of agents, with each one of them subjected to a stream of data with incomplete regression information, can cooperate with each other through local interactions to estimate the underlying model parameters in the presence of missing data. We explain how to adjust the distributed diffusion through (de)regularization in order to eliminate the bias introduced by the incomplete model. We also propose a technique to recursively estimate the (de)regularization parameter and examine the performance of the resulting strategy. We illustrate the results by considering two applications: one dealing with a mental health survey and the other dealing with a household consumption survey. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 38,237 |
0905.0044 | ADMiRA: Atomic Decomposition for Minimum Rank Approximation | We address the inverse problem that arises in compressed sensing of a low-rank matrix. Our approach is to pose the inverse problem as an approximation problem with a specified target rank of the solution. A simple search over the target rank then provides the minimum rank solution satisfying a prescribed data approximation bound. We propose an atomic decomposition that provides an analogy between parsimonious representations of a sparse vector and a low-rank matrix. Efficient greedy algorithms to solve the inverse problem for the vector case are extended to the matrix case through this atomic decomposition. In particular, we propose an efficient and guaranteed algorithm named ADMiRA that extends CoSaMP, its analogue for the vector case. The performance guarantee is given in terms of the rank-restricted isometry property and bounds both the number of iterations and the error in the approximate solution for the general case where the solution is approximately low-rank and the measurements are noisy. With a sparse measurement operator such as the one arising in the matrix completion problem, the computation in ADMiRA is linear in the number of measurements. The numerical experiments for the matrix completion problem show that, although the measurement operator in this case does not satisfy the rank-restricted isometry property, ADMiRA is a competitive algorithm for matrix completion. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 3,625 |
2208.10834 | Real-Time Sonar Fusion for Layered Navigation Controller | Navigation in varied and dynamic indoor environments remains a complex task for autonomous mobile platforms. Especially when conditions worsen, typical sensor modalities may fail to operate optimally and subsequently provide inapt input for safe navigation control. In this study, we present an approach for the navigation of a dynamic indoor environment with a mobile platform with a single or several sonar sensors using a layered control system. These sensors can operate in conditions such as rain, fog, dust, or dirt. The different control layers, such as collision avoidance and corridor following behavior, are activated based on acoustic flow queues in the fusion of the sonar images. The novelty of this work is allowing these sensors to be freely positioned on the mobile platform and providing the framework for designing the optimal navigational outcome based on a zoning system around the mobile platform. Presented in this paper is the acoustic flow model used, as well as the design of the layered controller. Next to validation in simulation, an implementation is presented and validated in a real office environment using a real mobile platform with one, two, or three sonar sensors in real time with 2D navigation. Multiple sensor layouts were validated in both the simulation and real experiments to demonstrate that the modular approach for the controller and sensor fusion works optimally. The results of this work show stable and safe navigation of indoor environments with dynamic objects. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 314,221 |
2004.10397 | A Framework for Evaluating Gradient Leakage Attacks in Federated Learning | Federated learning (FL) is an emerging distributed machine learning framework for collaborative model training with a network of clients (edge devices). FL offers default client privacy by allowing clients to keep their sensitive data on local devices and to share only local training parameter updates with the federated server. However, recent studies have shown that even sharing local parameter updates from a client to the federated server may be susceptible to gradient leakage attacks and intrude on the client's privacy regarding its training data. In this paper, we present a principled framework for evaluating and comparing different forms of client privacy leakage attacks. We first provide formal and experimental analysis to show how adversaries can reconstruct the private local training data by simply analyzing the shared parameter update from local training (e.g., local gradient or weight update vector). We then analyze how different hyperparameter configurations in federated learning and different settings of the attack algorithm may impact both attack effectiveness and attack cost. Our framework also measures, evaluates, and analyzes the effectiveness of client privacy leakage attacks under different gradient compression ratios when using communication-efficient FL protocols. Our experiments also include some preliminary mitigation strategies to highlight the importance of providing a systematic attack evaluation framework for an in-depth understanding of the various forms of client privacy leakage threats in federated learning and for developing theoretical foundations for attack mitigation. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 173,625 |
2410.19300 | Golden Ratio-Based Sufficient Dimension Reduction | Many machine learning applications deal with high-dimensional data. To make computations feasible and learning more efficient, it is often desirable to reduce the dimensionality of the input variables by finding linear combinations of the predictors that retain as much of the original information as possible about the relationship between the response and the original predictors. We propose a neural-network-based sufficient dimension reduction method that not only identifies the structural dimension effectively, but also estimates the central space well. It takes advantage of the approximation capabilities of neural networks for functions in Barron classes and leads to reduced computation cost compared to other dimension reduction methods in the literature. Additionally, the framework can be extended to practical dimension reduction, making the methodology more applicable in real-world settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 502,253 |
1610.06995 | Modeling and Analysis of Uplink Non-Orthogonal Multiple Access (NOMA) in Large-Scale Cellular Networks Using Poisson Cluster Processes | Non-orthogonal multiple access (NOMA) serves multiple users by superposing their distinct message signals. The desired message signal is decoded at the receiver by applying successive interference cancellation (SIC). Using the theory of Poisson cluster processes (PCPs), we provide a framework to analyze multi-cell uplink NOMA systems. Specifically, we characterize the rate coverage probability of a NOMA user who is at rank $m$ (in terms of the distance from its serving BS) among all users in a cell and the mean rate coverage probability of all users in a cell. Since the signal-to-interference-plus-noise ratio (SINR) of the $m$-th user relies on efficient SIC, we consider three scenarios, i.e., perfect SIC (in which the signals of the $m-1$ interferers who are stronger than the $m$-th user are decoded successfully), imperfect SIC (in which the signals of the $m-1$ interferers who are stronger than the $m$-th user may or may not be decoded successfully), and imperfect worst-case SIC (in which the decoding of the signal of the $m$-th user is always unsuccessful whenever the decoding of any of its $m-1$ stronger users is unsuccessful). The derived expressions are customized to capture the performance of a user at rank $m$ in an equivalent orthogonal multiple access (OMA) system. Finally, numerical results are presented to validate the derived expressions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 62,726 |
1711.01351 | Uplink Performance Analysis of a Drone Cell in a Random Field of Ground Interferers | Aerial base stations are a promising technology to increase the capabilities of existing communication networks. However, existing analytical frameworks do not sufficiently characterize the impact of ground interferers on aerial base stations. To address this issue, we model the effect of interference coming from coexisting ground networks on the aerial link, which could be the uplink of an aerial cell served by a drone base station. By considering a Poisson field of ground interferers, we characterize the aggregate interference experienced by the drone. This result includes the effect of the drone antenna pattern, the height-dependent shadowing, and various types of environment. We show that the benefits that a drone obtains from a better line-of-sight (LoS) at high altitudes are counteracted by a high vulnerability to interference coming from the ground. However, by deriving the link coverage probability and transmission rate, we show that a drone base station is still a promising technology if the overall system is properly dimensioned according to the given density and transmission power of the interferers. In particular, our results illustrate how the benefits of such a network are maximized by defining the optimal drone altitude and signal-to-interference ratio (SIR) requirement. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 83,866 |
2006.09289 | Isometric Autoencoders | High-dimensional data is often assumed to be concentrated on or near a low-dimensional manifold. Autoencoders (AEs) are a popular technique to learn representations of such data by pushing it through a neural network with a low-dimension bottleneck while minimizing a reconstruction error. Using high-capacity AEs often leads to a large collection of minimizers, many of which represent a low-dimensional manifold that fits the data well but generalizes poorly. Two sources of bad generalization are: extrinsic, where the learned manifold possesses extraneous parts that are far from the data; and intrinsic, where the encoder and decoder introduce arbitrary distortion in the low-dimensional parameterization. An approach taken to alleviate these issues is to add a regularizer that favors a particular solution; common regularizers promote sparsity, small derivatives, or robustness to noise. In this paper, we advocate an isometry (i.e., local distance preserving) regularizer. Specifically, our regularizer encourages: (i) the decoder to be an isometry; and (ii) the encoder to be the decoder's pseudo-inverse, that is, the encoder extends the inverse of the decoder to the ambient space by orthogonal projection. In a nutshell, (i) and (ii) fix both intrinsic and extrinsic degrees of freedom and provide a non-linear generalization of principal component analysis (PCA). Experimenting with the isometry regularizer on dimensionality reduction tasks produces useful low-dimensional data representations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 182,506 |
2302.05294 | MoreauGrad: Sparse and Robust Interpretation of Neural Networks via Moreau Envelope | Explaining the predictions of deep neural nets has been a topic of great interest in the computer vision literature. While several gradient-based interpretation schemes have been proposed to reveal the influential variables in a neural net's prediction, standard gradient-based interpretation frameworks have been commonly observed to lack robustness to input perturbations and flexibility for incorporating prior knowledge of sparsity and group-sparsity structures. In this work, we propose MoreauGrad as an interpretation scheme based on the classifier neural net's Moreau envelope. We demonstrate that MoreauGrad results in a smooth and robust interpretation of a multi-layer neural network and can be efficiently computed through first-order optimization methods. Furthermore, we show that MoreauGrad can be naturally combined with $L_1$-norm regularization techniques to output sparse or group-sparse explanations, which are prior conditions applicable to a wide range of deep learning applications. We empirically evaluate the proposed MoreauGrad scheme on standard computer vision datasets, showing the qualitative and quantitative success of the MoreauGrad approach in comparison to standard gradient-based interpretation methods. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 344,994 |
1801.06172 | Contextual and Position-Aware Factorization Machines for Sentiment Classification | While existing machine learning models have achieved great success for sentiment classification, they typically do not explicitly capture sentiment-oriented word interaction, which can lead to poor results for fine-grained analysis at the snippet level (a phrase or sentence). Factorization Machine provides a possible approach to learning element-wise interaction for recommender systems, but they are not directly applicable to our task due to the inability to model contexts and word sequences. In this work, we develop two Position-aware Factorization Machines which consider word interaction, context and position information. Such information is jointly encoded in a set of sentiment-oriented word interaction (SWI) vectors. Compared to traditional word embeddings, SWI vectors explicitly capture sentiment-oriented word interaction and simplify the parameter learning. Experimental results show that while they have comparable performance with state-of-the-art methods for document-level classification, they benefit the snippet/sentence-level sentiment analysis. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 88,567
1710.04200 | Joint Image Filtering with Deep Convolutional Networks | Joint image filters leverage the guidance image as a prior and transfer the structural details from the guidance image to the target image for suppressing noise or enhancing spatial resolution. Existing methods either rely on various explicit filter constructions or hand-designed objective functions, thereby making it difficult to understand, improve, and accelerate these filters in a coherent framework. In this paper, we propose a learning-based approach for constructing joint filters based on Convolutional Neural Networks. In contrast to existing methods that consider only the guidance image, the proposed algorithm can selectively transfer salient structures that are consistent with both guidance and target images. We show that the model trained on a certain type of data, e.g., RGB and depth images, generalizes well to other modalities, e.g., flash/non-Flash and RGB/NIR images. We validate the effectiveness of the proposed joint filter through extensive experimental evaluations with state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 82,446 |
2108.09986 | Indoor Path Planning for an Unmanned Aerial Vehicle via Curriculum Learning | In this study, reinforcement learning was applied to learning two-dimensional path planning including obstacle avoidance by unmanned aerial vehicle (UAV) in an indoor environment. The task assigned to the UAV was to reach the goal position in the shortest amount of time without colliding with any obstacles. Reinforcement learning was performed in a virtual environment created using Gazebo, a virtual environment simulator, to reduce the learning time and cost. Curriculum learning, which consists of two stages, was performed for more efficient learning. As a result of learning with two reward models, the maximum goal rates achieved were 71.2% and 88.0%. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 251,762
2007.03006 | Announcing CzEng 2.0 Parallel Corpus with over 2 Gigawords | We present a new release of the Czech-English parallel corpus CzEng 2.0 consisting of over 2 billion words (2 "gigawords") in each language. The corpus contains document-level information and is filtered with several techniques to lower the amount of noise. In addition to the data in the previous version of CzEng, it contains new authentic and also high-quality synthetic parallel data. CzEng is freely available for research and educational purposes. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 185,915 |
2311.12502 | Framework for continuous transition to Agile Systems Engineering in the Automotive Industry | The increasing pressure within VUCA (volatility, uncertainty, complexity and ambiguity) driven environments causes traditional, plan-driven Systems Engineering approaches to no longer suffice. Agility is then changing from a "nice-to-have" to a "must-have" capability for successful system developing organisations. The current state of the art, however, does not provide clear answers on how to map this need in terms of processes, methods, tools and competencies (PMTC) and how to successfully manage the transition within established industries. In this paper, we propose an agile Systems Engineering (SE) Framework for the automotive industry to meet the new agility demand. In addition to the methodological background, we present results of a pilot project in the chassis development department of a German automotive manufacturer and demonstrate the effectiveness of the newly proposed framework. By adopting the described agile SE Framework, companies can foster innovation and collaboration based on a learning, continuous improvement and self-reinforcing base. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 409,362
2012.05858 | SPAA: Stealthy Projector-based Adversarial Attacks on Deep Image Classifiers | Light-based adversarial attacks use spatial augmented reality (SAR) techniques to fool image classifiers by altering the physical light condition with a controllable light source, e.g., a projector. Compared with physical attacks that place hand-crafted adversarial objects, projector-based ones obviate modifying the physical entities, and can be performed transiently and dynamically by altering the projection pattern. However, subtle light perturbations are insufficient to fool image classifiers, due to the complex environment and project-and-capture process. Thus, existing approaches focus on projecting clearly perceptible adversarial patterns, while the more interesting yet challenging goal, stealthy projector-based attack, remains open. In this paper, for the first time, we formulate this problem as an end-to-end differentiable process and propose a Stealthy Projector-based Adversarial Attack (SPAA) solution. In SPAA, we approximate the real Project-and-Capture process using a deep neural network named PCNet, then we include PCNet in the optimization of projector-based attacks such that the generated adversarial projection is physically plausible. Finally, to generate both robust and stealthy adversarial projections, we propose an algorithm that uses minimum perturbation and adversarial confidence thresholds to alternate between the adversarial loss and stealthiness loss optimization. Our experimental evaluations show that SPAA clearly outperforms other methods by achieving higher attack success rates and meanwhile being stealthier, for both targeted and untargeted attacks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 210,913
2501.09776 | Multi-Head Self-Attending Neural Tucker Factorization | Quality-of-service (QoS) data exhibit dynamic temporal patterns that are crucial for accurately predicting missing values. These patterns arise from the evolving interactions between users and services, making it essential to capture the temporal dynamics inherent in such data for improved prediction performance. As the size and complexity of QoS datasets increase, existing models struggle to provide accurate predictions, highlighting the need for more flexible and dynamic methods to better capture the underlying patterns in large-scale QoS data. To address this issue, we introduce a neural network-based tensor factorization approach tailored for learning spatiotemporal representations of high-dimensional and incomplete (HDI) tensors, namely the Multi-head Self-attending Neural Tucker Factorization (MSNTucF). The model is elaborately designed for modeling intricate nonlinear spatiotemporal feature interaction patterns hidden in real world data with a two-fold idea. It first employs a neural network structure to generalize the traditional framework of Tucker factorization and then proposes to leverage a multi-head self-attending module to enforce nonlinear latent interaction learning. In empirical studies on two dynamic QoS datasets from real applications, the proposed MSNTucF model demonstrates superior performance compared to state-of-the-art benchmark models in estimating missing observations. This highlights its ability to learn non-linear spatiotemporal representations of HDI tensors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 525,270 |
2211.14053 | Re^2TAL: Rewiring Pretrained Video Backbones for Reversible Temporal Action Localization | Temporal action localization (TAL) requires long-form reasoning to predict actions of various durations and complex content. Given limited GPU memory, training TAL end to end (i.e., from videos to predictions) on long videos is a significant challenge. Most methods can only train on pre-extracted features without optimizing them for the localization problem, consequently limiting localization performance. In this work, to extend the potential in TAL networks, we propose a novel end-to-end method Re2TAL, which rewires pretrained video backbones for reversible TAL. Re2TAL builds a backbone with reversible modules, where the input can be recovered from the output such that the bulky intermediate activations can be cleared from memory during training. Instead of designing one single type of reversible module, we propose a network rewiring mechanism, to transform any module with a residual connection to a reversible module without changing any parameters. This provides two benefits: (1) a large variety of reversible networks are easily obtained from existing and even future model designs, and (2) the reversible models require much less training effort as they reuse the pre-trained parameters of their original non-reversible versions. Re2TAL, only using the RGB modality, reaches 37.01% average mAP on ActivityNet-v1.3, a new state-of-the-art record, and mAP 64.9% at tIoU=0.5 on THUMOS-14, outperforming all other RGB-only methods. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 332,703
2008.06982 | A Self-supervised GAN for Unsupervised Few-shot Object Recognition | This paper addresses unsupervised few-shot object recognition, where all training images are unlabeled, and test images are divided into queries and a few labeled support images per object class of interest. The training and test images do not share object classes. We extend the vanilla GAN with two loss functions, both aimed at self-supervised learning. The first is a reconstruction loss that enforces the discriminator to reconstruct the probabilistically sampled latent code which has been used for generating the "fake" image. The second is a triplet loss that enforces the discriminator to output image encodings that are closer for more similar images. Evaluation, comparisons, and detailed ablation studies are done in the context of few-shot classification. Our approach significantly outperforms the state of the art on the Mini-Imagenet and Tiered-Imagenet datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 191,958 |
2102.11855 | Deep Unitary Convolutional Neural Networks | Deep neural networks can suffer from the exploding and vanishing activation problem, in which the networks fail to train properly because the neural signals either amplify or attenuate across the layers and become saturated. While other normalization methods aim to fix the stated problem, most of them have inference speed penalties in those applications that require running averages of the neural activations. Here we extend the unitary framework based on Lie algebra to neural networks of any dimensionalities, overcoming the major constraints of the prior arts that limit synaptic weights to be square matrices. Our proposed unitary convolutional neural networks deliver up to 32% faster inference speeds and up to 50% reduction in permanent hard disk space while maintaining competitive prediction accuracy. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 221,538 |
1810.05683 | Long-Duration Autonomy for Small Rotorcraft UAS including Recharging | Many unmanned aerial vehicle surveillance and monitoring applications require observations at precise locations over long periods of time, ideally days or weeks at a time (e.g. ecosystem monitoring), which has been impractical due to limited endurance and the requirement of humans in the loop for operation. To overcome these limitations, we propose a fully autonomous small rotorcraft UAS that is capable of performing repeated sorties for long-term observation missions without any human intervention. We address two key technologies that are critical for such a system: full platform autonomy including emergency response to enable mission execution independently from human operators, and the ability of vision-based precision landing on a recharging station for automated energy replenishment. Experimental results of up to 11 hours of fully autonomous operation in indoor and outdoor environments illustrate the capability of our system. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 110,282 |
2109.04145 | PIMNet: A Parallel, Iterative and Mimicking Network for Scene Text Recognition | Nowadays, scene text recognition has attracted more and more attention due to its various applications. Most state-of-the-art methods adopt an encoder-decoder framework with attention mechanism, which generates text autoregressively from left to right. Despite the convincing performance, the speed is limited because of the one-by-one decoding strategy. As opposed to autoregressive models, non-autoregressive models predict the results in parallel with a much shorter inference time, but the accuracy falls behind the autoregressive counterpart considerably. In this paper, we propose a Parallel, Iterative and Mimicking Network (PIMNet) to balance accuracy and efficiency. Specifically, PIMNet adopts a parallel attention mechanism to predict the text faster and an iterative generation mechanism to make the predictions more accurate. In each iteration, the context information is fully explored. To improve learning of the hidden layer, we exploit the mimicking learning in the training phase, where an additional autoregressive decoder is adopted and the parallel decoder mimics the autoregressive decoder with fitting outputs of the hidden layer. With the shared backbone between the two decoders, the proposed PIMNet can be trained end-to-end without pre-training. During inference, the branch of the autoregressive decoder is removed for a faster speed. Extensive experiments on public benchmarks demonstrate the effectiveness and efficiency of PIMNet. Our code will be available at https://github.com/Pay20Y/PIMNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 254,295
2408.03172 | Leveraging Parameter Efficient Training Methods for Low Resource Text Classification: A Case Study in Marathi | With the surge in digital content in low-resource languages, there is an escalating demand for advanced Natural Language Processing (NLP) techniques tailored to these languages. BERT (Bidirectional Encoder Representations from Transformers), serving as the foundational framework for numerous NLP architectures and language models, is increasingly employed for the development of low-resource NLP models. Parameter Efficient Fine-Tuning (PEFT) is a method for fine-tuning Large Language Models (LLMs) and reducing the training parameters to some extent to decrease the computational costs needed for training the model and achieve results comparable to a fully fine-tuned model. In this work, we present a study of PEFT methods for the Indic low-resource language Marathi. We conduct a comprehensive analysis of PEFT methods applied to various monolingual and multilingual Marathi BERT models. These approaches are evaluated on prominent text classification datasets like MahaSent, MahaHate, and MahaNews. The incorporation of PEFT techniques is demonstrated to significantly expedite the training speed of the models, addressing a critical aspect of model development and deployment. In this study, we explore Low-Rank Adaptation of Large Language Models (LoRA) and adapter methods for low-resource text classification. We show that these methods are competitive with full fine-tuning and can be used without loss in accuracy. This study contributes valuable insights into the effectiveness of Marathi BERT models, offering a foundation for the continued advancement of NLP capabilities in Marathi and similar Indic languages. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 478,921
2310.05072 | Performance Analysis of RIS-Aided Double Spatial Scattering Modulation for mmWave MIMO Systems | In this paper, we investigate a practical structure of reconfigurable intelligent surface (RIS)-based double spatial scattering modulation (DSSM) for millimeter-wave (mmWave) multiple-input multiple-output (MIMO) systems. A suboptimal detector is proposed, in which the beam direction is first demodulated according to the received beam strength, and then the remaining information is demodulated by adopting the maximum likelihood algorithm. Based on the proposed suboptimal detector, we derive the conditional pairwise error probability expression. Further, the exact numerical integral and closed-form expressions of unconditional pairwise error probability (UPEP) are derived via two different approaches. To provide more insights, we derive the upper bound and asymptotic expressions of UPEP. In addition, the diversity gain of the RIS-DSSM scheme was also given. Furthermore, the union upper bound of average bit error probability (ABEP) is obtained by combining the UPEP and the number of error bits. Simulation results are provided to validate the derived upper bound and asymptotic expressions of ABEP. We found an interesting phenomenon that the ABEP performance of the proposed system-based phase shift keying is better than that of the quadrature amplitude modulation. Additionally, the performance advantage of ABEP is more significant with the increase in the number of RIS elements. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 397,968
2005.07064 | Multi-agent Communication meets Natural Language: Synergies between Functional and Structural Language Learning | We present a method for combining multi-agent communication and traditional data-driven approaches to natural language learning, with an end goal of teaching agents to communicate with humans in natural language. Our starting point is a language model that has been trained on generic, not task-specific language data. We then place this model in a multi-agent self-play environment that generates task-specific rewards used to adapt or modulate the model, turning it into a task-conditional language model. We introduce a new way for combining the two types of learning based on the idea of reranking language model samples, and show that this method outperforms others in communicating with humans in a visual referential communication task. Finally, we present a taxonomy of different types of language drift that can occur alongside a set of measures to detect them. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 177,182
2306.07646 | Enhanced Multimodal Representation Learning with Cross-modal KD | This paper explores the tasks of leveraging auxiliary modalities which are only available at training to enhance multimodal representation learning through cross-modal Knowledge Distillation (KD). The widely adopted mutual information maximization-based objective leads to a short-cut solution of the weak teacher, i.e., achieving the maximum mutual information by simply making the teacher model as weak as the student model. To prevent such a weak solution, we introduce an additional objective term, i.e., the mutual information between the teacher and the auxiliary modality model. Besides, to narrow down the information gap between the student and teacher, we further propose to minimize the conditional entropy of the teacher given the student. Novel training schemes based on contrastive learning and adversarial learning are designed to optimize the mutual information and the conditional entropy, respectively. Experimental results on three popular multimodal benchmark datasets have shown that the proposed method outperforms a range of state-of-the-art approaches for video recognition, video retrieval and emotion classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 373,096 |
2305.17695 | k-NNN: Nearest Neighbors of Neighbors for Anomaly Detection | Anomaly detection aims at identifying images that deviate significantly from the norm. We focus on algorithms that embed the normal training examples in space and when given a test image, detect anomalies based on the features' distance to the k-nearest training neighbors. We propose a new operator that takes into account the varying structure & importance of the features in the embedding space. Interestingly, this is done by taking into account not only the nearest neighbors, but also the neighbors of these neighbors (k-NNN). We show that by simply replacing the nearest neighbor component in existing algorithms by our k-NNN operator, while leaving the rest of the algorithms untouched, each algorithm's own results are improved. This is the case both for common homogeneous datasets, such as flowers or nuts of a specific type, as well as for more diverse datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 368,708
2303.02901 | $\alpha$-divergence Improves the Entropy Production Estimation via Machine Learning | Recent years have seen a surge of interest in the algorithmic estimation of stochastic entropy production (EP) from trajectory data via machine learning. A crucial element of such algorithms is the identification of a loss function whose minimization guarantees the accurate EP estimation. In this study, we show that there exists a host of loss functions, namely those implementing a variational representation of the $\alpha$-divergence, which can be used for the EP estimation. By fixing $\alpha$ to a value between $-1$ and $0$, the $\alpha$-NEEP (Neural Estimator for Entropy Production) exhibits a much more robust performance against strong nonequilibrium driving or slow dynamics, which adversely affects the existing method based on the Kullback-Leibler divergence ($\alpha = 0$). In particular, the choice of $\alpha = -0.5$ tends to yield the optimal results. To corroborate our findings, we present an exactly solvable simplification of the EP estimation problem, whose loss function landscape and stochastic properties give deeper intuition into the robustness of the $\alpha$-NEEP. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 349,529
2410.01771 | Bayesian Binary Search | We present Bayesian Binary Search (BBS), a novel probabilistic variant of the classical binary search/bisection algorithm. BBS leverages machine learning/statistical techniques to estimate the probability density of the search space and modifies the bisection step to split based on probability density rather than the traditional midpoint, allowing for the learned distribution of the search space to guide the search algorithm. Search space density estimation can flexibly be performed using supervised probabilistic machine learning techniques (e.g., Gaussian process regression, Bayesian neural networks, quantile regression) or unsupervised learning algorithms (e.g., Gaussian mixture models, kernel density estimation (KDE), maximum likelihood estimation (MLE)). We demonstrate significant efficiency gains of using BBS on both simulated data across a variety of distributions and in a real-world binary search use case of probing channel balances in the Bitcoin Lightning Network, for which we have deployed the BBS algorithm in a production setting. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 493,936 |
2008.10748 | An empirical investigation of different classifiers, encoding and ensemble schemes for next event prediction using business process event logs | There is a growing need for empirical benchmarks that support researchers and practitioners in selecting the best machine learning technique for given prediction tasks. In this paper, we consider the next event prediction task in business process predictive monitoring and we extend our previously published benchmark by studying the impact on the performance of different encoding windows and of using ensemble schemes. The choice of whether to use ensembles and which scheme to use often depends on the type of data and classification task. While there is a general understanding that ensembles perform well in predictive monitoring of business processes, next event prediction is a task for which no other benchmarks involving ensembles are available. The proposed benchmark helps researchers to select a high performing individual classifier or ensemble scheme given the variability at the case level of the event log under consideration. Experimental results show that choosing an optimal number of events for feature encoding is challenging, resulting in the need to consider each event log individually when selecting an optimal value. Ensemble schemes improve the performance of low performing classifiers in this task, such as SVM, whereas high performing classifiers, such as tree-based classifiers, are not better off when ensemble schemes are considered. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 193,074
1612.00385 | Temporal Attention-Gated Model for Robust Sequence Classification | Typical techniques for sequence classification are designed for well-segmented sequences which have been edited to remove noisy or irrelevant parts. Therefore, such methods cannot be easily applied on noisy sequences expected in real-world applications. In this paper, we present the Temporal Attention-Gated Model (TAGM) which integrates ideas from attention models and gated recurrent networks to better deal with noisy or unsegmented sequences. Specifically, we extend the concept of attention model to measure the relevance of each observation (time step) of a sequence. We then use a novel gated recurrent network to learn the hidden representation for the final prediction. An important advantage of our approach is interpretability since the temporal attention weights provide a meaningful value for the salience of each time step in the sequence. We demonstrate the merits of our TAGM approach, both for prediction accuracy and interpretability, on three different tasks: spoken digit recognition, text-based sentiment analysis and visual event recognition. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 64,873 |
2309.06286 | Transferability analysis of data-driven additive manufacturing knowledge: a case study between powder bed fusion and directed energy deposition | Data-driven research in Additive Manufacturing (AM) has gained significant success in recent years. This has led to a plethora of scientific literature to emerge. The knowledge in these works consists of AM and Artificial Intelligence (AI) contexts that have not been mined and formalized in an integrated way. Moreover, no tools or guidelines exist to support data-driven knowledge transfer from one context to another. As a result, data-driven solutions using specific AI techniques are being developed and validated only for specific AM process technologies. There is a potential to exploit the inherent similarities across various AM technologies and adapt the existing solutions from one process or problem to another using AI, such as Transfer Learning. We propose a three-step knowledge transferability analysis framework in AM to support data-driven AM knowledge transfer. As a prerequisite to transferability analysis, AM knowledge is featurized into identified knowledge components. The framework consists of pre-transfer, transfer, and post-transfer steps to accomplish knowledge transfer. A case study is conducted between flagship metal AM processes. Laser Powder Bed Fusion (LPBF) is the source of knowledge motivated by its relative matureness in applying AI over Directed Energy Deposition (DED), which drives the need for knowledge transfer as the less explored target process. We show successful transfer at different levels of the data-driven solution, including data representation, model architecture, and model parameters. The pipeline of AM knowledge transfer can be automated in the future to allow efficient cross-context or cross-process knowledge exchange. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 391,374
1809.09613 | Size Agnostic Change Point Detection Framework for Evolving Networks | Changes in the structure of observed social and complex networks' structure can indicate a significant underlying change in an organization, or reflect the response of the network to an external event. Automatic detection of change points in evolving networks is rudimentary to the research and the understanding of the effect of such events on networks. Here we present an easy-to-implement and fast framework for change point detection in temporal evolving networks. Unlike previous approaches, our method is size agnostic, and does not require either prior knowledge about the network's size and structure, nor does it require obtaining historical information or nodal identities over time. We use both synthetic data derived from dynamic models and two real datasets: Enron email exchange and Ask-Ubuntu forum. Our framework succeeds with both precision and recall and outperforms previous solutions | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 108,745 |
1904.07765 | An Evaluation Framework for Interactive Recommender System | Traditional recommender systems present a relatively static list of recommendations to a user where the feedback is typically limited to an accept/reject or a rating model. However, these simple modes of feedback may only provide limited insights as to why a user likes or dislikes an item and what aspects of the item the user has considered. Interactive recommender systems present an opportunity to engage the user in the process by allowing them to interact with the recommendations, provide feedback and impact the results in real-time. Evaluation of the impact of the user interaction typically requires an extensive user study which is time-consuming and gives researchers limited opportunities to tune their solutions without having to conduct multiple rounds of user feedback. Additionally, user experience and design aspects can have a significant impact on the user feedback which may result in not necessarily assessing the quality of some of the underlying algorithmic decisions in the overall solution. As a result, we present an evaluation framework which aims to simulate the users interacting with the recommender. We formulate metrics to evaluate the quality of the interactive recommenders which are outputted by the framework once simulation is completed. While simulation alone is not sufficient to evaluate a complete solution, the results can be useful to help researchers tune their solution before moving to the user study stage. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 127,878
2501.05408 | TimeRL: Efficient Deep Reinforcement Learning with Polyhedral Dependence
Graphs | Modern deep learning (DL) workloads increasingly use complex deep reinforcement learning (DRL) algorithms that generate training data within the learning loop. This results in programs with several nested loops and dynamic data dependencies between tensors. While DL systems with eager execution support such dynamism, they lack the optimizations and smart scheduling of graph-based execution. Graph-based execution, however, cannot express dynamic tensor shapes, instead requiring the use of multiple static subgraphs. Either execution model for DRL thus leads to redundant computation, reduced parallelism, and less efficient memory management. We describe TimeRL, a system for executing dynamic DRL programs that combines the dynamism of eager execution with the whole-program optimizations and scheduling of graph-based execution. TimeRL achieves this by introducing the declarative programming model of recurrent tensors, which allows users to define dynamic dependencies as intuitive recurrence equations. TimeRL translates recurrent tensors into a polyhedral dependence graph (PDG) with dynamic dependencies as symbolic expressions. Through simple PDG transformations, TimeRL applies whole-program optimizations, such as automatic vectorization, incrementalization, and operator fusion. The PDG also allows for the computation of an efficient program-wide execution schedule, which decides on buffer deallocations, buffer donations, and GPU/CPU memory swapping. We show that TimeRL executes current DRL algorithms up to 47$\times$ faster than existing DRL systems, while using 16$\times$ less GPU peak memory. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 523,566 |
1911.11726 | Network Embedding: An Overview | Networks are one of the most powerful structures for modeling problems in the real world. Downstream machine learning tasks defined on networks have the potential to solve a variety of problems. With link prediction, for instance, one can predict whether two persons will become friends on a social network. Many machine learning algorithms, however, require that each input example is a real vector. Network embedding encompasses various methods for unsupervised, and sometimes supervised, learning of feature representations of nodes and links in a network. Typically, embedding methods are based on the assumption that the similarity between nodes in the network should be reflected in the learned feature representations. In this paper, we review significant contributions to network embedding in the last decade. In particular, we look at four methods: Spectral Clustering, DeepWalk, Large-scale Information Network Embedding (LINE), and node2vec. We describe each method and list its advantages and shortcomings. In addition, we give examples of real-world machine learning problems on networks in which the embedding is critical in order to maximize the predictive performance of the machine learning task. Finally, we take a look at research trends and state-of-the-art methods in the research on network embedding. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 155,205
1905.05892 | Pareto-Optimal Allocation of Transactive Energy at Market Equilibrium in
Distribution Systems: A Constrained Vector Optimization Approach | In a grid-constrained transactive distribution system market, distribution locational marginal pricing (DLMP) is influenced by the distance from the substation to an energy user, thereby causing households that are further away from the substation to be charged more. The Jain index of fairness, which has recently been applied to alleviate this undesirable effect of inefficient energy allocations, is used in this research to quantify fairness. It is shown that the Jain index is strictly quasi-concave. A bilevel distributed mechanism is proposed, where at the lower level, auction mechanisms are invoked simultaneously at each aggregator to obtain energy costs under market equilibrium conditions. A constrained multi-gradient ascent algorithm, the Augmented Lagrangian Multigradient Approach (ALMA), is proposed for implementation at the upper level to attain energy allocations that represent tradeoffs between efficiency and fairness. Theoretical issues pertaining to ALMA as a generic algorithm for constrained vector optimization are considered. It is shown that when the objectives are restricted to be strictly quasi-concave functions and the feasible region is convex, ALMA converges towards global Pareto optimality. The overall effectiveness of the proposed approach is confirmed through a set of MATLAB simulations implemented on a modified IEEE 37-bus system platform. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 130,843
1405.5550 | Application of Artificial Neural Networks in Predicting Abrasion
Resistance of Solution Polymerized Styrene-Butadiene Rubber Based Composites | Abrasion resistance of solution polymerized styrene-butadiene rubber (SSBR) based composites is a typical and crucial property in practical applications. Previous studies show that the abrasion resistance can be calculated by a multiple linear regression model. In our study, considering that this relationship can also be described under non-linear conditions, a Multilayer Feed-forward Neural Network model with 3 nodes (MLFN-3) was successfully established to describe the relationship between the abrasion resistance and other properties, using 23 groups of data, with an RMS error of 0.07. Our studies have shown that an Artificial Neural Network (ANN) model can be used to predict the abrasion resistance of SSBR-based composites in an accurate and robust process. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 33,282
2202.01764 | JaQuAD: Japanese Question Answering Dataset for Machine Reading
Comprehension | Question Answering (QA) is a task in which a machine understands a given document and a question to find an answer. Despite impressive progress in the NLP area, QA is still a challenging problem, especially for non-English languages due to the lack of annotated datasets. In this paper, we present the Japanese Question Answering Dataset, JaQuAD, which is annotated by humans. JaQuAD consists of 39,696 extractive question-answer pairs on Japanese Wikipedia articles. We finetuned a baseline model which achieves an F1 score of 78.92% and an EM of 63.38% on the test set. The dataset and our experiments are available at https://github.com/SkelterLabsInc/JaQuAD. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 278,577
2402.17906 | Representation learning in multiplex graphs: Where and how to fuse
information? | In recent years, unsupervised and self-supervised graph representation learning has gained popularity in the research community. However, most proposed methods are focused on homogeneous networks, whereas real-world graphs often contain multiple node and edge types. Multiplex graphs, a special type of heterogeneous graphs, possess richer information, provide better modeling capabilities and integrate more detailed data from potentially different sources. The diverse edge types in multiplex graphs provide more context and insights into the underlying processes of representation learning. In this paper, we tackle the problem of learning representations for nodes in multiplex networks in an unsupervised or self-supervised manner. To that end, we explore diverse information fusion schemes performed at different levels of the graph processing pipeline. The detailed analysis and experimental evaluation of various scenarios inspired us to propose improvements in how to construct GNN architectures that deal with multiplex graphs. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 433,194 |
2209.09329 | MAN: Multi-Action Networks Learning | Learning control policies with large discrete action spaces is a challenging problem in the field of reinforcement learning due to present inefficiencies in exploration. With high dimensional action spaces, there are a large number of potential actions in each individual dimension over which policies would be learned. In this work, we introduce a Deep Reinforcement Learning (DRL) algorithm called Multi-Action Networks (MAN) Learning that addresses the challenge of high-dimensional large discrete action spaces. We propose factorizing the N-dimensional action space into N 1-dimensional components, known as sub-actions, creating a Value Neural Network for each sub-action. Then, MAN uses temporal-difference learning to train the networks synchronously, which is simpler than training a single network with a large action output directly. To evaluate the proposed method, we test MAN on three scenarios: an n-dimensional maze task, a block stacking task, and 12 games from the Atari Arcade Learning environment with 18 action spaces. Our results indicate that MAN learns faster than both Deep Q-Learning and Double Deep Q-Learning, implying our method is a better-performing synchronous temporal-difference algorithm than those currently available for large discrete action spaces. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,461
1702.06506 | PixelNet: Representation of the pixels, by the pixels, and for the
pixels | We explore design principles for general pixel-level prediction problems, from low-level edge detection to mid-level surface normal estimation to high-level semantic segmentation. Convolutional predictors, such as the fully-convolutional network (FCN), have achieved remarkable success by exploiting the spatial redundancy of neighboring pixels through convolutional processing. Though computationally efficient, we point out that such approaches are not statistically efficient during learning precisely because spatial redundancy limits the information learned from neighboring pixels. We demonstrate that stratified sampling of pixels allows one to (1) add diversity during batch updates, speeding up learning; (2) explore complex nonlinear predictors, improving accuracy; and (3) efficiently train state-of-the-art models tabula rasa (i.e., "from scratch") for diverse pixel-labeling tasks. Our single architecture produces state-of-the-art results for semantic segmentation on PASCAL-Context dataset, surface normal estimation on NYUDv2 depth dataset, and edge detection on BSDS. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 68,625 |
2409.07823 | Online vs Offline: A Comparative Study of First-Party and Third-Party
Evaluations of Social Chatbots | This paper explores the efficacy of online versus offline evaluation methods in assessing conversational chatbots, specifically comparing first-party direct interactions with third-party observational assessments. By extending a benchmarking dataset of user dialogs with empathetic chatbots with offline third-party evaluations, we present a systematic comparison between the feedback from online interactions and the more detached offline third-party evaluations. Our results reveal that offline human evaluations fail to capture the subtleties of human-chatbot interactions as effectively as online assessments. In comparison, automated third-party evaluations using a GPT-4 model offer a better approximation of first-party human judgments given detailed instructions. This study highlights the limitations of third-party evaluations in grasping the complexities of user experiences and advocates for the integration of direct interaction feedback in conversational AI evaluation to enhance system development and user satisfaction. | true | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 487,677 |
2101.10587 | Low Resource Recognition and Linking of Biomedical Concepts from a Large
Ontology | Tools to explore scientific literature are essential for scientists, especially in biomedicine, where about a million new papers are published every year. Many such tools provide users the ability to search for specific entities (e.g. proteins, diseases) by tracking their mentions in papers. PubMed, the most well known database of biomedical papers, relies on human curators to add these annotations. This can take several weeks for new papers, and not all papers get tagged. Machine learning models have been developed to facilitate the semantic indexing of scientific papers. However their performance on the more comprehensive ontologies of biomedical concepts does not reach the levels of typical entity recognition problems studied in NLP. In large part this is due to their low resources, where the ontologies are large, there is a lack of descriptive text defining most entities, and labeled data can only cover a small portion of the ontology. In this paper, we develop a new model that overcomes these challenges by (1) generalizing to entities unseen at training time, and (2) incorporating linking predictions into the mention segmentation decisions. Our approach achieves new state-of-the-art results for the UMLS ontology in both traditional recognition/linking (+8 F1 pts) as well as semantic indexing-based evaluation (+10 F1 pts). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 216,992 |
2405.14189 | Semantic-guided Prompt Organization for Universal Goal Hijacking against
LLMs | With the rising popularity of Large Language Models (LLMs), assessing their trustworthiness through security tasks has gained critical importance. Regarding the new task of universal goal hijacking, previous efforts have concentrated solely on optimization algorithms, overlooking the crucial role of the prompt. To fill this gap, we propose a universal goal hijacking method called POUGH that incorporates semantic-guided prompt processing strategies. Specifically, the method starts with a sampling strategy to select representative prompts from a candidate pool, followed by a ranking strategy that prioritizes the prompts. Once the prompts are organized sequentially, the method employs an iterative optimization algorithm to generate the universal fixed suffix for the prompts. Experiments conducted on four popular LLMs and ten types of target responses verified the effectiveness of our method. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 456,297 |
2402.05976 | RankSum: An unsupervised extractive text summarization based on rank
fusion | In this paper, we propose Ranksum, an approach for extractive text summarization of single documents based on the rank fusion of four multi-dimensional sentence features extracted for each sentence: topic information, semantic content, significant keywords, and position. Ranksum obtains the sentence saliency rankings corresponding to each feature in an unsupervised way, followed by the weighted fusion of the four scores to rank the sentences according to their significance. The scores are generated in a completely unsupervised way, and a labeled document set is required only to learn the fusion weights. Since we found that the fusion weights can generalize to other datasets, we consider Ranksum an unsupervised approach. To determine topic rank, we employ probabilistic topic models, whereas semantic information is captured using sentence embeddings. To derive rankings using sentence embeddings, we utilize Siamese networks to produce abstractive sentence representations and then formulate a novel strategy to arrange them in their order of importance. A graph-based strategy is applied to find the significant keywords and related sentence rankings in the document. We also formulate a sentence novelty measure based on bigrams, trigrams, and sentence embeddings to eliminate redundant sentences from the summary. The ranks of all the sentences computed for each feature are finally fused to get the final score for each sentence in the document. We evaluate our approach on the publicly available summarization datasets CNN/DailyMail and DUC 2002. Experimental results show that our approach outperforms other existing state-of-the-art summarization methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 428,106
2203.06690 | Algebraic Learning: Towards Interpretable Information Modeling | Along with the proliferation of digital data collected using sensor technologies and a boost of computing power, Deep Learning (DL) based approaches have drawn enormous attention in the past decade due to their impressive performance in extracting complex relations from raw data and representing valuable information. Meanwhile, though, rooted in its notorious black-box nature, the appreciation of DL has been highly debated due to the lack of interpretability. On the one hand, DL only utilizes statistical features contained in raw data while ignoring human knowledge of the underlying system, which results in both data inefficiency and trust issues; on the other hand, a trained DL model does not provide researchers with any extra insight about the underlying system beyond its output, which, however, is the essence of most fields of science, e.g. physics and economics. This thesis addresses the issue of interpretability in general information modeling and endeavors to ease the problem from two angles. Firstly, a problem-oriented perspective is applied to incorporate knowledge into modeling practice, where interesting mathematical properties emerge naturally, which cast constraints on modeling. Secondly, given a trained model, various methods can be applied to extract further insights about the underlying system. These two pathways are termed guided model design and secondary measurements. Remarkably, a novel scheme emerges for the modeling practice in statistical learning: Algebraic Learning (AgLr). Instead of being restricted to the discussion of any specific model, AgLr starts from idiosyncrasies of a learning task itself and studies the structure of a legitimate model class. This novel scheme demonstrates the noteworthy value of abstract algebra for general AI, which has been overlooked in recent progress, and could shed further light on interpretable information modeling. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 285,199
1602.03320 | Graph Wavelets via Sparse Cuts: Extended Version | Modeling information that resides on vertices of large graphs is a key problem in several real-life applications, ranging from social networks to the Internet-of-things. Signal Processing on Graphs and, in particular, graph wavelets can exploit the intrinsic smoothness of these datasets in order to represent them in a both compact and accurate manner. However, how to discover wavelet bases that capture the geometry of the data with respect to the signal as well as the graph structure remains an open question. In this paper, we study the problem of computing graph wavelet bases via sparse cuts in order to produce low-dimensional encodings of data-driven bases. This problem is connected to known hard problems in graph theory (e.g. multiway cuts) and thus requires an efficient heuristic. We formulate the basis discovery task as a relaxation of a vector optimization problem, which leads to an elegant solution as a regularized eigenvalue computation. Moreover, we propose several strategies in order to scale our algorithm to large graphs. Experimental results show that the proposed algorithm can effectively encode both the graph structure and signal, producing compressed and accurate representations for vertex values in a wide range of datasets (e.g. sensor and gene networks) and significantly outperforming the best baseline. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 51,986 |
1103.2491 | Heterogeneous Learning in Zero-Sum Stochastic Games with Incomplete
Information | Learning algorithms are essential for the applications of game theory in a networking environment. In dynamic and decentralized settings where the traffic, topology and channel states may vary over time and the communication between agents is impractical, it is important to formulate and study games of incomplete information and fully distributed learning algorithms which for each agent requires a minimal amount of information regarding the remaining agents. In this paper, we address this major challenge and introduce heterogeneous learning schemes in which each agent adopts a distinct learning pattern in the context of games with incomplete information. We use stochastic approximation techniques to show that the heterogeneous learning schemes can be studied in terms of their deterministic ordinary differential equation (ODE) counterparts. Depending on the learning rates of the players, these ODEs could be different from the standard replicator dynamics, (myopic) best response (BR) dynamics, logit dynamics, and fictitious play dynamics. We apply the results to a class of security games in which the attacker and the defender adopt different learning schemes due to differences in their rationality levels and the information they acquire. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 9,586 |
2409.18053 | DualAD: Dual-Layer Planning for Reasoning in Autonomous Driving | We present a novel autonomous driving framework, DualAD, designed to imitate human reasoning during driving. DualAD comprises two layers: a rule-based motion planner at the bottom layer that handles routine driving tasks requiring minimal reasoning, and an upper layer featuring a rule-based text encoder that converts driving scenarios from absolute states into text description. This text is then processed by a large language model (LLM) to make driving decisions. The upper layer intervenes in the bottom layer's decisions when potential danger is detected, mimicking human reasoning in critical situations. Closed-loop experiments demonstrate that DualAD, using a zero-shot pre-trained model, significantly outperforms rule-based motion planners that lack reasoning abilities. Our experiments also highlight the effectiveness of the text encoder, which considerably enhances the model's scenario understanding. Additionally, the integrated DualAD model improves with stronger LLMs, indicating the framework's potential for further enhancement. Code and benchmarks are available at github.com/TUM-AVS/DualAD. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 492,089 |
1909.08496 | Exploring Bit-Slice Sparsity in Deep Neural Networks for Efficient
ReRAM-Based Deployment | Emerging resistive random-access memory (ReRAM) has recently been intensively investigated to accelerate the processing of deep neural networks (DNNs). Due to the in-situ computation capability, analog ReRAM crossbars yield significant throughput improvement and energy reduction compared to traditional digital methods. However, the power hungry analog-to-digital converters (ADCs) prevent the practical deployment of ReRAM-based DNN accelerators on end devices with limited chip area and power budget. We observe that due to the limited bit-density of ReRAM cells, DNN weights are bit sliced and correspondingly stored on multiple ReRAM bitlines. The accumulated current on bitlines resulted by weights directly dictates the overhead of ADCs. As such, bitwise weight sparsity rather than the sparsity of the full weight, is desirable for efficient ReRAM deployment. In this work, we propose bit-slice L1, the first algorithm to induce bit-slice sparsity during the training of dynamic fixed-point DNNs. Experiment results show that our approach achieves 2x sparsity improvement compared to previous algorithms. The resulting sparsity allows the ADC resolution to be reduced to 1-bit of the most significant bit-slice and down to 3-bit for the others bits, which significantly speeds up processing and reduces power and area overhead. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 145,991 |
1512.01859 | Statistical Signatures of Structural Organization: The case of long
memory in renewal processes | Identifying and quantifying memory are often critical steps in developing a mechanistic understanding of stochastic processes. These are particularly challenging and necessary when exploring processes that exhibit long-range correlations. The most common signatures employed rely on second-order temporal statistics and lead, for example, to identifying long memory in processes with power-law autocorrelation function and Hurst exponent greater than $1/2$. However, most stochastic processes hide their memory in higher-order temporal correlations. Information measures---specifically, divergences in the mutual information between a process' past and future (excess entropy) and minimal predictive memory stored in a process' causal states (statistical complexity)---provide a different way to identify long memory in processes with higher-order temporal correlations. However, there are no ergodic stationary processes with infinite excess entropy for which information measures have been compared to autocorrelation functions and Hurst exponents. Here, we show that fractal renewal processes---those with interevent distribution tails $\propto t^{-\alpha}$---exhibit long memory via a phase transition at $\alpha = 1$. Excess entropy diverges only there and statistical complexity diverges there and for all $\alpha < 1$. When these processes do have power-law autocorrelation function and Hurst exponent greater than $1/2$, they do not have divergent excess entropy. This analysis breaks the intuitive association between these different quantifications of memory. We hope that the methods used here, based on causal states, provide some guide as to how to construct and analyze other long memory processes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 49,875 |
2310.00646 | Source Attribution for Large Language Model-Generated Data | The impressive performances of Large Language Models (LLMs) and their immense potential for commercialization have given rise to serious concerns over the Intellectual Property (IP) of their training data. In particular, the synthetic texts generated by LLMs may infringe the IP of the data being used to train the LLMs. To this end, it is imperative to be able to perform source attribution by identifying the data provider who contributed to the generation of a synthetic text by an LLM. In this paper, we show that this problem can be tackled by watermarking, i.e., by enabling an LLM to generate synthetic texts with embedded watermarks that contain information about their source(s). We identify the key properties of such watermarking frameworks (e.g., source attribution accuracy, robustness against adversaries), and propose a source attribution framework that satisfies these key properties due to our algorithmic designs. Our framework enables an LLM to learn an accurate mapping from the generated texts to data providers, which sets the foundation for effective source attribution. Extensive empirical evaluations show that our framework achieves effective source attribution. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 396,076 |
2304.03763 | Clutter Detection and Removal in 3D Scenes with View-Consistent
Inpainting | Removing clutter from scenes is essential in many applications, ranging from privacy-concerned content filtering to data augmentation. In this work, we present an automatic system that removes clutter from 3D scenes and inpaints with coherent geometry and texture. We propose techniques for its two key components: 3D segmentation from shared properties and 3D inpainting, both of which are important problems. The definition of 3D scene clutter (frequently-moving objects) is not well captured by commonly-studied object categories in computer vision. To tackle the lack of well-defined clutter annotations, we group noisy fine-grained labels, leverage virtual rendering, and impose an instance-level area-sensitive loss. Once clutter is removed, we inpaint geometry and texture in the resulting holes by merging inpainted RGB-D images. This requires novel voting and pruning strategies that guarantee multi-view consistency across individually inpainted images for mesh reconstruction. Experiments on ScanNet and Matterport dataset show that our method outperforms baselines for clutter segmentation and 3D inpainting, both visually and quantitatively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 356,936 |
2502.07254 | Fairness in Multi-Agent AI: A Unified Framework for Ethical and
Equitable Autonomous Systems | Ensuring fairness in decentralized multi-agent systems presents significant challenges due to emergent biases, systemic inefficiencies, and conflicting agent incentives. This paper provides a comprehensive survey of fairness in multi-agent AI, introducing a novel framework where fairness is treated as a dynamic, emergent property of agent interactions. The framework integrates fairness constraints, bias mitigation strategies, and incentive mechanisms to align autonomous agent behaviors with societal values while balancing efficiency and robustness. Through empirical validation, we demonstrate that incorporating fairness constraints results in more equitable decision-making. This work bridges the gap between AI ethics and system design, offering a foundation for accountable, transparent, and socially responsible multi-agent AI systems. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | true | false | false | false | 532,508 |
1710.02410 | End-to-end Driving via Conditional Imitation Learning | Deep networks trained on demonstrations of human driving have learned to follow roads and avoid obstacles. However, driving policies trained via imitation learning cannot be controlled at test time. A vehicle trained end-to-end to imitate an expert cannot be guided to take a specific turn at an upcoming intersection. This limits the utility of such systems. We propose to condition imitation learning on high-level command input. At test time, the learned driving policy functions as a chauffeur that handles sensorimotor coordination but continues to respond to navigational commands. We evaluate different architectures for conditional imitation learning in vision-based driving. We conduct experiments in realistic three-dimensional simulations of urban driving and on a 1/5 scale robotic truck that is trained to drive in a residential area. Both systems drive based on visual input yet remain responsive to high-level navigational commands. The supplementary video can be viewed at https://youtu.be/cFtnflNe5fM | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 82,168 |
2207.06687 | Breaking Correlation Shift via Conditional Invariant Regularizer | Recently, generalization on out-of-distribution (OOD) data with correlation shift has attracted great attention. The correlation shift is caused by spurious attributes that correlate with the class label, as the correlation between them may vary between training and test data. For such a problem, we show that, given the class label, models that are conditionally independent of spurious attributes are OOD generalizable. Based on this, a metric, Conditional Spurious Variation (CSV), which controls the OOD generalization error, is proposed to measure such conditional independence. To improve OOD generalization, we regularize the training process with the proposed CSV. Under mild assumptions, our training objective can be formulated as a nonconvex-concave mini-max problem. An algorithm with a provable convergence rate is proposed to solve the problem. Extensive empirical results verify our algorithm's efficacy in improving OOD generalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 307,963 |
2105.06811 | Quantified Sleep: Machine learning techniques for observational n-of-1 studies | This paper applies statistical learning techniques to an observational Quantified-Self (QS) study to build a descriptive model of sleep quality. A total of 472 days of my sleep data was collected with an Oura ring and combined with lifestyle, environmental, and psychological data. Such n-of-1 QS projects pose a number of challenges: heterogeneous data sources; missing values; high dimensionality; dynamic feedback loops; human biases. This paper directly addresses these challenges with an end-to-end QS pipeline that produces robust descriptive models. Sleep quality is one of the most difficult modelling targets in QS research, due to high noise and a large number of weakly-contributing factors. Sleep quality was selected so that approaches from this paper would generalise to most other n-of-1 QS projects. Techniques are presented for combining and engineering features for the different classes of data types, sample frequencies, and schema - including event logs, weather, and geo-spatial data. Statistical analyses for outliers, normality, (auto)correlation, stationarity, and missing data are detailed, along with a proposed method for hierarchical clustering to identify correlated groups of features. The missing data was overcome using a combination of knowledge-based and statistical techniques, including several multivariate imputation algorithms. "Markov unfolding" is presented for collapsing the time series into a collection of independent observations, whilst incorporating historical information. The final model was interpreted in two ways: by inspecting the internal $\beta$-parameters, and using the SHAP framework. These two interpretation techniques were combined to produce a list of the 16 most-predictive features, demonstrating that an observational study can greatly narrow down the number of features that need to be considered when designing interventional QS studies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 235,237 |
1502.01228 | Linear-time Online Action Detection From 3D Skeletal Data Using Bags of Gesturelets | Sliding window is one direct way to extend a successful recognition system to handle the more challenging detection problem. While action recognition decides only whether or not an action is present in a pre-segmented video sequence, action detection identifies the time interval where the action occurred in an unsegmented video stream. Sliding window approaches for action detection can however be slow as they maximize a classifier score over all possible sub-intervals. Even though new schemes utilize dynamic programming to speed up the search for the optimal sub-interval, they require offline processing on the whole video sequence. In this paper, we propose a novel approach for online action detection based on 3D skeleton sequences extracted from depth data. It identifies the sub-interval with the maximum classifier score in linear time. Furthermore, it is invariant to temporal scale variations and is suitable for real-time applications with low latency. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 39,913 |
2207.04316 | Improving Diffusion Model Efficiency Through Patching | Diffusion models are a powerful class of generative models that iteratively denoise samples to produce data. While many works have focused on the number of iterations in this sampling procedure, few have focused on the cost of each iteration. We find that adding a simple ViT-style patching transformation can considerably reduce a diffusion model's sampling time and memory usage. We justify our approach both through an analysis of the diffusion model objective, and through empirical experiments on LSUN Church, ImageNet 256, and FFHQ 1024. We provide implementations in Tensorflow and Pytorch. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 307,157 |
2206.12617 | Language Models as Knowledge Embeddings | Knowledge embeddings (KE) represent a knowledge graph (KG) by embedding entities and relations into continuous vector spaces. Existing methods are mainly structure-based or description-based. Structure-based methods learn representations that preserve the inherent structure of KGs. They cannot well represent abundant long-tail entities in real-world KGs with limited structural information. Description-based methods leverage textual information and language models. Prior approaches in this direction barely outperform structure-based ones, and suffer from problems like expensive negative sampling and restrictive description demand. In this paper, we propose LMKE, which adopts Language Models to derive Knowledge Embeddings, aiming at both enriching representations of long-tail entities and solving problems of prior description-based methods. We formulate description-based KE learning with a contrastive learning framework to improve efficiency in training and evaluation. Experimental results show that LMKE achieves state-of-the-art performance on KE benchmarks of link prediction and triple classification, especially for long-tail entities. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 304,661 |
2411.18183 | Equi join query acceleration using algebraic signatures (Published at IADIS'2008 Applied Computing conf.) | Evaluation of join queries is very challenging since they have to deal with increasing data sizes. We study relational join query processing realized by hash tables, focusing on the case of equi-join queries. We propose to use a new form of signatures, algebraic signatures, for fast comparison between values of two attributes in relations participating in an equi-join operation. Our technique is efficient especially when the join attribute is a long string. In this paper, we investigate this issue and show that algebraic signatures combined with known hash-join techniques constitute an efficient method to accelerate equi-join operations. Algebraic signatures allow fast string search. They are descended from the Karp-Rabin signatures. String matching using our algebraic calculus is then several times faster than the fastest known methods, e.g., Boyer-Moore. We justify our approach and present an experimental evaluation. We also present a cost analysis for an equi-join operation using algebraic signatures. The performance evaluation of our technique shows improved query processing times. We also discuss reductions in required memory size and disk I/O. The main contribution of this paper is the use of algebraic signatures to accelerate equi-join operations, especially when the join attribute is a long string, and to avoid multiple disk I/Os by reducing memory requirements. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 511,765 |