id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2308.10729 | Patch Is Not All You Need | Vision Transformers have achieved great success in computer vision, delivering exceptional performance across various tasks. However, their inherent reliance on sequential input enforces the manual partitioning of images into patch sequences, which disrupts the image's inherent structural and semantic continuity. To handle this, we propose a novel Pattern Transformer (Patternformer) to adaptively convert images to pattern sequences for Transformer input. Specifically, we employ a Convolutional Neural Network to extract various patterns from the input image, with each channel representing a unique pattern that is fed into the succeeding Transformer as a visual token. By enabling the network to optimize these patterns, each pattern concentrates on its local region of interest, thereby preserving its intrinsic structural and semantic information. Employing only a vanilla ResNet and Transformer, we have accomplished state-of-the-art performance on CIFAR-10 and CIFAR-100, and have achieved competitive results on ImageNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,860 |
2410.14413 | WeSpeR: Population spectrum retrieval and spectral density estimation of weighted sample covariance | The spectrum of the weighted sample covariance shows an asymptotic non-random behavior when the dimension grows with the number of samples. In this setting, we prove that the asymptotic spectral distribution $F$ of the weighted sample covariance has a continuous density on $\mathbb{R}^*$. We then address the practical problem of numerically finding this density. We propose a procedure to compute it, to determine the support of $F$, and to define an efficient grid on it. We use this procedure to design the $\textit{WeSpeR}$ algorithm, which estimates the spectral density and retrieves the true covariance spectrum. Empirical tests confirm the good properties of the $\textit{WeSpeR}$ algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,019 |
2304.13829 | Controlled density transport using Perron-Frobenius generators | We consider the problem of transporting a density of states from an initial state distribution to a desired final state distribution through a dynamical system with actuation. In particular, we consider the case where the control signal is a function of time, but not space; that is, the same actuation is applied at every point in the state space. This is motivated by several problems in fluid mechanics, such as mixing and manipulation of a collection of particles by a global control input such as a uniform magnetic field, as well as by more general control problems where a density function describes an uncertainty distribution or a distribution of agents in a multi-agent system. We formulate this problem using the generators of the Perron-Frobenius operator associated with the drift and control vector fields of the system. By considering finite-dimensional approximations of these operators, the density transport problem can be expressed as a control problem for a bilinear system in a high-dimensional, lifted state. With this system, we frame the density control problem as a problem of driving moments of the density function to the moments of a desired density function, where the moments of the density can be expressed as an output which is linear in the lifted state. This output tracking problem for the lifted bilinear system is then solved using differential dynamic programming, an iterative trajectory optimization scheme. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 360,719 |
1610.01112 | Reset-Free Guided Policy Search: Efficient Deep Reinforcement Learning with Stochastic Initial States | Autonomous learning of robotic skills can allow general-purpose robots to learn wide behavioral repertoires without requiring extensive manual engineering. However, robotic skill learning methods typically make one of several trade-offs to enable practical real-world learning, such as requiring manually designed policy or value function representations, initialization from human-provided demonstrations, instrumentation of the training environment, or extremely long training times. In this paper, we propose a new reinforcement learning algorithm for learning manipulation skills that can train general-purpose neural network policies with minimal human engineering, while still allowing for fast, efficient learning in stochastic environments. Our approach builds on the guided policy search (GPS) algorithm, which transforms the reinforcement learning problem into supervised learning from a computational teacher (without human demonstrations). In contrast to prior GPS methods, which require a consistent set of initial states to which the system must be reset after each episode, our approach can handle randomized initial states, allowing it to be used in environments where deterministic resets are impossible. We compare our method to existing policy search techniques in simulation, showing that it can train high-dimensional neural network policies with the same sample efficiency as prior GPS methods, and present real-world results on a PR2 robotic manipulator. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 61,926 |
2402.00728 | Dropout-Based Rashomon Set Exploration for Efficient Predictive Multiplicity Estimation | Predictive multiplicity refers to the phenomenon in which classification tasks may admit multiple competing models that achieve almost-equally-optimal performance, yet generate conflicting outputs for individual samples. This presents significant concerns, as it can potentially result in systemic exclusion, inexplicable discrimination, and unfairness in practical applications. Measuring and mitigating predictive multiplicity, however, is computationally challenging due to the need to explore all such almost-equally-optimal models, known as the Rashomon set, in potentially huge hypothesis spaces. To address this challenge, we propose a novel framework that utilizes dropout techniques for exploring models in the Rashomon set. We provide rigorous theoretical derivations to connect the dropout parameters to properties of the Rashomon set, and empirically evaluate our framework through extensive experimentation. Numerical results show that our technique consistently outperforms baselines in terms of the effectiveness of predictive multiplicity metric estimation, with runtime speedup up to $20\times \sim 5000\times$. With efficient Rashomon set exploration and metric estimation, mitigation of predictive multiplicity is then achieved through dropout ensemble and model selection. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 425,699 |
2311.16533 | Motor State Prediction and Friction Compensation for Brushless DC Motor Drives Using Data-Driven Techniques | In order to provide robust, reliable, and accurate position and velocity control of motor drives, friction compensation has emerged as a key difficulty. Non-characterised friction could give rise to large position errors and vibrations which could be intensified by stick-slip motion and limit cycles. This paper presents an application of two data-driven nonlinear model identification techniques to discover the governing equations of motor dynamics that also characterise friction. Namely, the extraction of low-power data from time-delayed coordinates of motor velocity and sparse regression on nonlinear terms were applied to data acquired from a Brushless DC (BLDC) motor to identify the underlying dynamics. The latter can be considered an extension of the conventional linear motor model commonly used in many model-based controllers. The identified nonlinear model was then contrasted with a nonlinear model that included the LuGre friction model and a linear model without friction. A nonlinear grey box model estimation method was used to calculate the optimum friction parameters for the LuGre model. The resulting nonlinear motor model with friction characteristics was then validated using a feedback friction compensation algorithm. The novel model showed more than 90% accuracy in predicting the motor states for all considered input excitation signals. In addition, the model-based friction compensation scheme showed a relative increase in performance when compared with a system without friction compensation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 410,950 |
1204.4521 | A Privacy-Aware Bayesian Approach for Combining Classifier and Cluster Ensembles | This paper introduces a privacy-aware Bayesian approach that combines ensembles of classifiers and clusterers to perform semi-supervised and transductive learning. We consider scenarios where instances and their classification/clustering results are distributed across different data sites and have sharing restrictions. As a special case, the privacy-aware computation of the model when instances of the target data are distributed across different data sites is also discussed. Experimental results show that the proposed approach can provide good classification accuracies while adhering to the data/model sharing constraints. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 15,597 |
1706.04208 | Hybrid Reward Architecture for Reinforcement Learning | One of the main challenges in reinforcement learning (RL) is generalisation. In typical deep RL methods this is achieved by approximating the optimal value function with a low-dimensional representation using a deep network. While this approach works well in many domains, in domains where the optimal value function cannot easily be reduced to a low-dimensional representation, learning can be very slow and unstable. This paper contributes towards tackling such challenging domains, by proposing a new method, called Hybrid Reward Architecture (HRA). HRA takes as input a decomposed reward function and learns a separate value function for each component reward function. Because each component typically only depends on a subset of all features, the corresponding value function can be approximated more easily by a low-dimensional representation, enabling more effective learning. We demonstrate HRA on a toy problem and the Atari game Ms. Pac-Man, where HRA achieves above-human performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 75,294 |
2008.00476 | SCNet: A Neural Network for Automated Side-Channel Attack | The side-channel attack is an attack method based on the information gained about implementations of computer systems, rather than weaknesses in algorithms. Information about system characteristics such as power consumption, electromagnetic leaks and sound can be exploited by the side-channel attack to compromise the system. Much research effort has been directed towards this field. However, such an attack still requires strong skills and thus can only be performed effectively by experts. Here, we propose SCNet, which automatically performs side-channel attacks. We design this network by combining side-channel domain knowledge with different deep learning models to improve performance and to better explain the results. The results show that our model achieves good performance with fewer parameters. The proposed model is a useful tool for automatically testing the robustness of computer systems. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 190,020 |
2301.09878 | ODOR: The ICPR2022 ODeuropa Challenge on Olfactory Object Recognition | The Odeuropa Challenge on Olfactory Object Recognition aims to foster the development of object detection in the visual arts and to promote an olfactory perspective on digital heritage. Object detection in historical artworks is particularly challenging due to varying styles and artistic periods. Moreover, the task is complicated due to the particularity and historical variance of predefined target objects, which exhibit a large intra-class variance, and the long tail distribution of the dataset labels, with some objects having only very few training examples. These challenges should encourage participants to create innovative approaches using domain adaptation or few-shot learning. We provide a dataset of 2647 artworks annotated with 20 120 tightly fit bounding boxes that are split into a training and validation set (public). A test set containing 1140 artworks and 15 480 annotations is kept private for the challenge evaluation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 341,634 |
2211.09322 | Targeted Attention for Generalized- and Zero-Shot Learning | The Zero-Shot Learning (ZSL) task attempts to learn concepts without any labeled data. Unlike traditional classification/detection tasks, the evaluation environment contains unseen classes never encountered during training. As such, it remains both challenging and promising on a variety of fronts, including unsupervised concept learning, domain adaptation, and dataset drift detection. Recently, there have been a variety of approaches towards solving ZSL, including improved metric learning methods, transfer learning, combinations of semantic and image domains using, e.g., word vectors, and generative models to model the latent space of known classes to classify unseen classes. We find many approaches require intensive training augmentation with attributes or features that may be commonly unavailable (attribute-based learning) or susceptible to adversarial attacks (generative learning). We propose combining approaches from the related person re-identification task for ZSL, with key modifications to ensure sufficiently improved performance in the ZSL setting without the need for feature or training dataset augmentation. We are able to achieve state-of-the-art performance on the CUB200 and Cars196 datasets in the ZSL setting compared to recent works, with NMI (normalized mutual information) of 63.27 and top-1 of 61.04 for CUB200, and NMI of 66.03 with top-1 of 82.75% in Cars196. We also show state-of-the-art results in the Generalized Zero-Shot Learning (GZSL) setting, with a Harmonic Mean R-1 of 66.14% on the CUB200 dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 330,935 |
1206.3252 | Convex Point Estimation using Undirected Bayesian Transfer Hierarchies | When related learning tasks are naturally arranged in a hierarchy, an appealing approach for coping with scarcity of instances is that of transfer learning using a hierarchical Bayes framework. As fully Bayesian computations can be difficult and computationally demanding, it is often desirable to use posterior point estimates that facilitate (relatively) efficient prediction. However, the hierarchical Bayes framework does not always lend itself naturally to this maximum a posteriori goal. In this work we propose an undirected reformulation of hierarchical Bayes that relies on priors in the form of similarity measures. We introduce the notion of "degree of transfer" weights on components of these similarity measures, and show how they can be automatically learned within a joint probabilistic framework. Importantly, our reformulation results in a convex objective for many learning problems, thus facilitating optimal posterior point estimation using standard optimization techniques. In addition, we no longer require proper priors, allowing for flexible and straightforward specification of joint distributions over transfer hierarchies. We show that our framework is effective for learning models that are part of transfer hierarchies for two real-life tasks: object shape modeling using Gaussian density estimation and document classification. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 16,511 |
2502.10842 | Mobile Robotic Multi-View Photometric Stereo | Multi-View Photometric Stereo (MVPS) is a popular method for fine-detailed 3D acquisition of an object from images. Despite its outstanding results on diverse material objects, a typical MVPS experimental setup requires a well-calibrated light source and a monocular camera installed on an immovable base. This restricts the use of MVPS on a movable platform, preventing mobile robotics applications from benefiting from MVPS in 3D acquisition. To this end, we introduce a new mobile robotic system for MVPS. While the proposed system brings advantages, it introduces additional algorithmic challenges. Addressing them, in this paper, we further propose an incremental approach for mobile robotic MVPS. Our approach leverages a supervised learning setup to predict per-view surface normals, object depth, and per-pixel uncertainty in model-predicted results. A refined depth map per view is obtained by solving an MVPS-driven optimization problem proposed in this paper. Later, we fuse the refined depth maps while tracking the camera pose w.r.t. the reference frame to recover globally consistent object 3D geometry. Experimental results show the advantages of our robotic system and algorithm, featuring local high-frequency surface detail recovery with globally consistent object shape. Our work goes beyond any MVPS system yet presented, providing encouraging results on objects with unknown reflectance properties using fewer frames and without a tiring calibration and installation process, enabling a computationally efficient robotic automation approach to photogrammetry. The proposed approach is nearly 100 times computationally faster than state-of-the-art MVPS methods such as [1, 2] while maintaining similar results when tested on subjects taken from the benchmark DiLiGenT-MV dataset [3]. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 534,070 |
cs/0702077 | Properties of Rank Metric Codes | This paper investigates general properties of codes with the rank metric. We first investigate asymptotic packing properties of rank metric codes. Then, we study sphere covering properties of rank metric codes, derive bounds on their parameters, and investigate their asymptotic covering properties. Finally, we establish several identities that relate the rank weight distribution of a linear code to that of its dual code. One of our identities is the counterpart of the MacWilliams identity for the Hamming metric, and it has a different form from the identity by Delsarte. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 540,159 |
2210.08652 | Adaptive Contrastive Learning with Dynamic Correlation for Multi-Phase Organ Segmentation | Recent studies have demonstrated the superior performance of introducing ``scan-wise" contrast labels into contrastive learning for multi-organ segmentation on multi-phase computed tomography (CT). However, such scan-wise labels are limited: (1) a coarse classification, which could not capture the fine-grained ``organ-wise" contrast variations across all organs; (2) the label (i.e., contrast phase) is typically manually provided, which is error-prone and may introduce manual biases of defining phases. In this paper, we propose a novel data-driven contrastive loss function that adapts the similar/dissimilar contrast relationship between samples in each minibatch at the organ level. Specifically, as variable levels of contrast exist between organs, we hypothesize that the contrast differences at the organ level can bring additional context for defining representations in the latent space. An organ-wise contrast correlation matrix is computed with mean organ intensities under one-hot attention maps. The goal of adapting the organ-driven correlation matrix is to model variable levels of feature separability at different phases. We evaluate our proposed approach on multi-organ segmentation with both non-contrast CT (NCCT) datasets and the MICCAI 2015 BTCV Challenge contrast-enhanced CT (CECT) datasets. Compared to the state-of-the-art approaches, our proposed contrastive loss yields a substantial and significant improvement of 1.41% (from 0.923 to 0.936, p-value$<$0.01) and 2.02% (from 0.891 to 0.910, p-value$<$0.01) on mean Dice scores across all organs with respect to the NCCT and CECT cohorts. We further assess the trained model performance with the MICCAI 2021 FLARE Challenge CECT datasets and achieve a substantial improvement of mean Dice score from 0.927 to 0.934 (p-value$<$0.01). The code is available at: https://github.com/MASILab/DCC_CL | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 324,236 |
2106.06440 | Learning Compositional Shape Priors for Few-Shot 3D Reconstruction | The impressive performance of deep convolutional neural networks in single-view 3D reconstruction suggests that these models perform non-trivial reasoning about the 3D structure of the output space. Recent work has challenged this belief, showing that, on standard benchmarks, complex encoder-decoder architectures perform similarly to nearest-neighbor baselines or simple linear decoder models that exploit large amounts of per-category data. However, building large collections of 3D shapes for supervised training is a laborious process; a more realistic and less constraining task is inferring 3D shapes for categories with few available training examples, calling for a model that can successfully generalize to novel object classes. In this work we experimentally demonstrate that naive baselines fail in this few-shot learning setting, in which the network must learn informative shape priors for inference of new categories. We propose three ways to learn a class-specific global shape prior, directly from data. Using these techniques, we are able to capture multi-scale information about the 3D shape, and account for intra-class variability by virtue of an implicit compositional structure. Experiments on the popular ShapeNet dataset show that our method outperforms a zero-shot baseline by over 40%, and the current state-of-the-art by over 10%, in terms of relative performance, in the few-shot setting. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 240,483 |
2203.00796 | Flow-Based Control of Marine Robots in Gyre-Like Environments | We present a flow-based control strategy that enables resource-constrained marine robots to patrol gyre-like flow environments on an orbital trajectory with a periodicity in a given range. The controller does not require a detailed model of the flow field and relies only on the robot's location relative to the center of the gyre. Instead of precisely tracking a pre-defined trajectory, the robots are tasked to stay in between two bounding trajectories with known periodicity. Furthermore, the proposed strategy leverages the surrounding flow field to minimize control effort. We prove that the proposed strategy enables robots to cycle in the flow satisfying the desired periodicity requirements. Our method is tested and validated both in simulation and in experiments using a low-cost, underactuated, surface swimming robot, i.e. the Modboat. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 283,119 |
1909.07815 | Particle reconstruction of volumetric particle image velocimetry with strategy of machine learning | Three-dimensional particle reconstruction with limited two-dimensional projections is an under-determined inverse problem whose exact solution is often difficult to obtain. In general, approximate solutions can be obtained by iterative optimization methods. In the current work, a practical particle reconstruction method based on a convolutional neural network (CNN) with geometry-informed features is proposed. The proposed technique can refine the particle reconstruction from a very coarse initial guess of particle distribution generated by any traditional algebraic reconstruction technique (ART)-based method. Compared with available ART-based algorithms, the novel technique makes significant improvements in terms of reconstruction quality and robustness to noise, and is at least an order of magnitude faster in the offline stage. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 145,783 |
2009.00463 | Single-shot Hyperspectral-Depth Imaging with Learned Diffractive Optics | Imaging depth and spectrum have been extensively studied in isolation from each other for decades. Recently, hyperspectral-depth (HS-D) imaging emerges to capture both information simultaneously by combining two different imaging systems; one for depth, the other for spectrum. While being accurate, this combinational approach induces increased form factor, cost, capture time, and alignment/registration problems. In this work, departing from the combinational principle, we propose a compact single-shot monocular HS-D imaging method. Our method uses a diffractive optical element (DOE), the point spread function of which changes with respect to both depth and spectrum. This enables us to reconstruct spectrum and depth from a single captured image. To this end, we develop a differentiable simulator and a neural-network-based reconstruction that are jointly optimized via automatic differentiation. To facilitate learning the DOE, we present a first HS-D dataset by building a benchtop HS-D imager that acquires high-quality ground truth. We evaluate our method with synthetic and real experiments by building an experimental prototype and achieve state-of-the-art HS-D imaging results. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 194,050 |
2501.04534 | Combining YOLO and Visual Rhythm for Vehicle Counting | Video-based vehicle detection and counting play a critical role in managing transport infrastructure. Traditional image-based counting methods usually involve two main steps: initial detection and subsequent tracking, which are applied to all video frames, leading to a significant increase in computational complexity. To address this issue, this work presents an alternative and more efficient method for vehicle detection and counting. The proposed approach eliminates the need for a tracking step and focuses solely on detecting vehicles in key video frames, thereby increasing its efficiency. To achieve this, we developed a system that combines YOLO, for vehicle detection, with Visual Rhythm, a way to create time-spatial images that allows us to focus on frames that contain useful information. Additionally, this method can be used for counting in any application involving unidirectional moving targets to be detected and identified. Experimental analysis using real videos shows that the proposed method achieves a mean counting accuracy of around 99.15% over a set of videos, with a processing speed three times faster than tracking-based approaches. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 523,256 |
2010.02636 | Neural Speech Synthesis for Estonian | This technical report describes the results of a collaboration between the NLP research group at the University of Tartu and the Institute of the Estonian Language on improving neural speech synthesis for Estonian. The report (written in Estonian) describes the project results, the summary of which is: (1) Speech synthesis data from 6 speakers for a total of 92.4 hours is collected and openly released (CC-BY-4.0). Data available at https://konekorpus.tartunlp.ai and https://www.eki.ee/litsents/. (2) Software and models for neural speech synthesis are released open-source (MIT license). Available at https://koodivaramu.eesti.ee/tartunlp/text-to-speech. (3) We ran evaluations of the new models and compared them to other existing solutions (HMM-based HTS models from EKI, http://www.eki.ee/heli/, and Google's speech synthesis for Estonian, accessed via https://translate.google.com). Evaluation includes voice acceptability MOS scores for sentence-level and longer excerpts, detailed error analysis and evaluation of the pre-processing module. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 199,113 |
2202.13525 | Gradient-free Multi-domain Optimization for Autonomous Systems | Autonomous systems are composed of several subsystems such as mechanical, propulsion, perception, planning and control. These are traditionally designed separately, which makes performance optimization of the integrated system a significant challenge. In this paper, we study the problem of using gradient-free optimization methods to jointly optimize the multiple domains of an autonomous system to find the set of optimal architectures for both hardware and software. We specifically perform multi-domain, multi-parameter optimization on an autonomous vehicle to find the best decision-making process, motion planning and control algorithms, and the physical parameters for autonomous racing. We detail the multi-domain optimization scheme, benchmark with different core components, and provide insights for generalization to new autonomous systems. In addition, this paper provides a benchmark of the performances of six different gradient-free optimizers in three different operating environments. Our approach is validated with a case study where we describe the autonomous vehicle system architecture and optimization methods, and finally argue that gradient-free optimization is a powerful choice for improving the performance of autonomous systems in an integrated manner. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | 282,647 |
2408.00779 | Learning Structurally Stabilized Representations for Multi-modal Lossless DNA Storage | In this paper, we present Reed-Solomon coded single-stranded representation learning (RSRL), a novel end-to-end model for learning representations for multi-modal lossless DNA storage. In contrast to existing learning-based methods, the proposed RSRL is inspired by both error-correction codecs and structural biology. Specifically, RSRL first learns the representations for the subsequent storage from the binary data transformed by the Reed-Solomon codec. Then, the representations are masked by an RS-code-informed mask to focus on correcting the burst errors occurring in the learning process. With the decoded representations with error corrections, a novel biologically stabilized loss is formulated to regularize the data representations to possess stable single-stranded structures. By incorporating these novel strategies, the proposed RSRL can learn highly durable, dense, and lossless representations for the subsequent storage tasks into DNA sequences. The proposed RSRL has been compared with a number of strong baselines in real-world tasks of multi-modal data storage. The experimental results obtained demonstrate that RSRL can store diverse types of data with much higher information density and durability but much lower error rates. | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | true | 477,979 |
2012.07916 | When Physical Unclonable Function Meets Biometrics | As the Covid-19 pandemic grips the world, healthcare systems are being reshaped, and e-health concepts are becoming more likely to be accepted. Wearable devices often carry sensitive information from users, which is exposed to security and privacy risks. Moreover, users have always had the concern that devices may be counterfeited between the fabrication process and vendors' storage. Hence, not only is securing personal data becoming a crucial obligation, but device verification is another challenge. To address this, biometric authentication and physically unclonable functions (PUFs) need to be put in place to mitigate the security and privacy risks for users. Among biometric modalities, Electrocardiogram (ECG) based biometrics have become popular as they can authenticate patients and monitor the patient's vital signs. However, researchers have recently started to study the vulnerabilities of ECG biometric systems and tried to address the issues of spoofing. Moreover, most wearables are equipped with a CPU and memory. Thus, a non-volatile memory-based (NVM) PUF can be easily placed in the device to avoid counterfeiting. However, much research has challenged the unclonability characteristics of PUFs, so a careful study of these attacks is needed. In this paper, our aim is to provide a comprehensive study of state-of-the-art developments in biometrics-enabled hardware security. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 211,600 |
2302.12918 | Deep Graph Stream SVDD: Anomaly Detection in Cyber-Physical Systems | Our work focuses on anomaly detection in cyber-physical systems. Prior literature has three limitations: (1) failing to capture long-delayed patterns in system anomalies; (2) ignoring dynamic changes in sensor connections; (3) the curse of high-dimensional data samples. These limit the detection performance and usefulness of existing works. To address them, we propose a new approach called deep graph stream support vector data description (SVDD) for anomaly detection. Specifically, we first use a transformer to preserve both short and long temporal patterns of monitoring data in temporal embeddings. Then we cluster these embeddings according to sensor type and utilize them to estimate the change in connectivity between various sensors to construct a new weighted graph. The temporal embeddings are mapped to the new graph as node attributes to form a weighted attributed graph. We input the graph into a variational graph auto-encoder model to learn the final spatio-temporal representation. Finally, we learn a hypersphere that encompasses normal embeddings and predict the system status by calculating the distances between the hypersphere and data samples. Extensive experiments validate the superiority of our model, which improves F1-score by 35.87%, AUC by 19.32%, while being 32 times faster than the best baseline at training and inference. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,735 |
2312.17404 | Parameter Optimization with Conscious Allocation (POCA) | The performance of modern machine learning algorithms depends upon the selection of a set of hyperparameters. Common examples of hyperparameters are the learning rate and the number of layers in a dense neural network. Auto-ML is a branch of optimization that has produced important contributions in this area. Within Auto-ML, hyperband-based approaches, which eliminate poorly-performing configurations after evaluating them at low budgets, are among the most effective. However, the performance of these algorithms strongly depends on how effectively they allocate the computational budget to various hyperparameter configurations. We present Parameter Optimization with Conscious Allocation (POCA), a new hyperband-based algorithm that adaptively allocates the input budget to the hyperparameter configurations it generates, following a Bayesian sampling scheme. We compare POCA to its nearest competitor at optimizing the hyperparameters of an artificial toy function and a deep neural network and find that POCA finds strong configurations faster in both settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 418,736 |
2006.14276 | Combining Ensemble Kalman Filter and Reservoir Computing to predict spatio-temporal chaotic systems from imperfect observations and models | Prediction of spatio-temporal chaotic systems is important in various fields, such as Numerical Weather Prediction (NWP). While data assimilation methods have been applied in NWP, machine learning techniques, such as Reservoir Computing (RC), are recently recognized as promising tools to predict spatio-temporal chaotic systems. However, the sensitivity of the skill of the machine learning based prediction to the imperfectness of observations is unclear. In this study, we evaluate the skill of RC with noisy and sparsely distributed observations. We intensively compare the performances of RC and Local Ensemble Transform Kalman Filter (LETKF) by applying them to the prediction of the Lorenz 96 system. Although RC can successfully predict the Lorenz 96 system if the system is perfectly observed, we find that RC is vulnerable to observation sparsity compared with LETKF. To overcome this limitation of RC, we propose to combine LETKF and RC. In our proposed method, the system is predicted by RC that learned the analysis time series estimated by LETKF. Our proposed method can successfully predict the Lorenz 96 system using noisy and sparsely distributed observations. Most importantly, our method can predict better than LETKF when the process-based model is imperfect. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 184,174 |
2210.00572 | Risk-graded Safety for Handling Medical Queries in Conversational AI | Conversational AI systems can engage in unsafe behaviour when handling users' medical queries that can have severe consequences and could even lead to deaths. Systems therefore need to be capable of both recognising the seriousness of medical inputs and producing responses with appropriate levels of risk. We create a corpus of human written English language medical queries and the responses of different types of systems. We label these with both crowdsourced and expert annotations. While individual crowdworkers may be unreliable at grading the seriousness of the prompts, their aggregated labels tend to agree with professional opinion to a greater extent on identifying the medical queries and recognising the risk types posed by the responses. Results of classification experiments suggest that, while these tasks can be automated, caution should be exercised, as errors can potentially be very serious. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 320,912 |
2306.15803 | On Logic-Based Explainability with Partially Specified Inputs | In the practical deployment of machine learning (ML) models, missing data represents a recurring challenge. Missing data is often addressed when training ML models. But missing data also needs to be addressed when deciding predictions and when explaining those predictions. Missing data represents an opportunity to partially specify the inputs of the prediction to be explained. This paper studies the computation of logic-based explanations in the presence of partially specified inputs. The paper shows that most of the algorithms proposed in recent years for computing logic-based explanations can be generalized for computing explanations given partially specified inputs. One related result is that the complexity of computing logic-based explanations remains unchanged. A similar result is proved in the case of logic-based explainability subject to input constraints. Furthermore, the proposed solution for computing explanations given partially specified inputs is applied to classifiers obtained from well-known public datasets, thereby illustrating a number of novel explainability use cases. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 376,161 |
1506.00765 | Video (GIF) Sentiment Analysis using Large-Scale Mid-Level Ontology | With faster connection speeds, Internet users are now making social networks a huge reservoir of texts, images and video clips (GIF). Sentiment analysis for such online platforms can be used to predict political elections, evaluate economic indicators, and so on. However, GIF sentiment analysis is quite challenging, not only because it hinges on spatio-temporal visual content abstraction, but also because the relationship between such abstraction and the final sentiment remains unknown. In this paper, we are dedicated to finding out this relationship. We propose a SentiPairSequence-based spatiotemporal visual sentiment ontology, which forms the mid-level representations for GIF sentiment. The establishment process of SentiPair contains two steps. First, we construct the Synset Forest to define the semantic tree structure of visual sentiment label elements. Then, through the Synset Forest, we organically select and combine sentiment label elements to form a mid-level visual sentiment representation. Our experiments indicate that SentiPair outperforms other competing mid-level attributes. Using SentiPair, our analysis framework can achieve satisfying prediction accuracy (72.6%). We also opened our dataset (GSO-2015) to the research community. GSO-2015 contains more than 6,000 manually annotated GIFs out of more than 40,000 candidates. Each is labeled with both sentiment and SentiPair Sequence. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | true | 43,713 |
2005.11437 | S3VAE: Self-Supervised Sequential VAE for Representation Disentanglement and Data Generation | We propose a sequential variational autoencoder to learn disentangled representations of sequential data (e.g., video and audio) under self-supervision. Specifically, we exploit the benefits of some readily accessible supervisory signals from the input data itself or some off-the-shelf functional models and accordingly design auxiliary tasks for our model to utilize these signals. With the supervision of the signals, our model can easily disentangle the representation of an input sequence into static factors and dynamic factors (i.e., time-invariant and time-varying parts). Comprehensive experiments across video and audio data verify the effectiveness of our model on representation disentanglement and generation of sequential data, and demonstrate that our model with self-supervision performs comparably to, if not better than, the fully-supervised model with ground truth labels, and outperforms state-of-the-art unsupervised models by a large margin. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 178,472 |
1709.05554 | Deep Automated Multi-task Learning | Multi-task learning (MTL) has recently contributed to learning better representations in service of various NLP tasks. MTL aims at improving the performance of a primary task, by jointly training on a secondary task. This paper introduces automated tasks, which exploit the sequential nature of the input data, as secondary tasks in an MTL model. We explore next word prediction, next character prediction, and missing word completion as potential automated tasks. Our results show that training on a primary task in parallel with a secondary automated task improves both the convergence speed and accuracy for the primary task. We suggest two methods for augmenting an existing network with automated tasks and establish better performance in topic prediction, sentiment analysis, and hashtag recommendation. Finally, we show that the MTL models can perform well on datasets that are small and colloquial by nature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 80,903 |
1905.01799 | RSL19BD at DBDC4: Ensemble of Decision Tree-based and LSTM-based Models | RSL19BD (Waseda University Sakai Laboratory) participated in the Fourth Dialogue Breakdown Detection Challenge (DBDC4) and submitted five runs to both English and Japanese subtasks. In these runs, we utilise the Decision Tree-based model and the Long Short-Term Memory-based (LSTM-based) model following the approaches of RSL17BD and KTH in the Third Dialogue Breakdown Detection Challenge (DBDC3) respectively. The Decision Tree-based model follows the approach of RSL17BD but utilises RandomForestRegressor instead of ExtraTreesRegressor. In addition, instead of predicting the mean and the variance of the probability distribution of the three breakdown labels, it predicts the probability of each label directly. The LSTM-based model follows the approach of KTH with some changes in the architecture and utilises Convolutional Neural Network (CNN) to perform text feature extraction. In addition, instead of targeting the single breakdown label and minimising the categorical cross entropy loss, it targets the probability distribution of the three breakdown labels and minimises the mean squared error. Run 1 utilises a Decision Tree-based model; Run 2 utilises an LSTM-based model; Run 3 performs an ensemble of 5 LSTM-based models; Run 4 performs an ensemble of Run 1 and Run 2; Run 5 performs an ensemble of Run 1 and Run 3. Run 5 statistically significantly outperformed all other runs in terms of MSE (NB, PB, B) for the English data and all other runs except Run 4 in terms of MSE (NB, PB, B) for the Japanese data (alpha level = 0.05). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 129,820 |
2405.08293 | Airport Delay Prediction with Temporal Fusion Transformers | Since flight delays hurt passengers, airlines, and airports, their prediction becomes crucial for the decision-making of all stakeholders in the aviation industry and has thus been attempted in various previous studies. However, previous delay predictions are often categorical and at a highly aggregated level. To improve on this, this study proposes to apply the novel Temporal Fusion Transformer model to predict numerical airport arrival delays at the quarter-hour level for the top 30 U.S. airports. Inputs to our model include airport demand and capacity forecasts, historical airport operation efficiency information, airport wind and visibility conditions, as well as enroute weather and traffic conditions. The results show that our model achieves satisfactory performance, measured by small prediction errors on the test set. In addition, an interpretability analysis of the model outputs identifies the important input factors for delay prediction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 454,047 |
2110.14413 | Localized Super Resolution for Foreground Images using U-Net and MR-CNN | Images play a vital role in understanding data through visual representation, giving a clear view of the object in context. But if an image is not clear, it might not be of much use. Thus, the topic of image super resolution arose, and many researchers have been working towards applying computer vision and deep learning techniques to increase the quality of images. One application of super resolution is to increase the quality of portrait images. Portrait images mainly focus on capturing the essence of the main object in the frame, where the object in context is highlighted whereas the background is occluded. When performing super resolution, the model tries to increase the overall resolution of the image; but in portrait images, the foreground resolution is more important than that of the background. In this paper, the performance of a Convolutional Neural Network (CNN) architecture known as U-Net for super resolution, combined with a Mask Region-Based CNN (MR-CNN) for foreground super resolution, is analysed. This analysis is carried out based on localized super resolution, i.e., we pass the LR images to a pre-trained image segmentation model (MR-CNN), perform super resolution inference on the foreground or segmented images, and compute the Structural Similarity Index (SSIM) and Peak Signal-to-Noise Ratio (PSNR) metrics for comparisons. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 263,525 |
1609.03057 | Style-Transfer via Texture-Synthesis | Style-transfer is a process of migrating a style from a given image to the content of another, synthesizing a new image which is an artistic mixture of the two. Recent work on this problem adopting Convolutional Neural Networks (CNN) ignited a renewed interest in this field, due to the very impressive results obtained. There exists an alternative path towards handling the style-transfer task, via generalization of texture-synthesis algorithms. This approach has been proposed over the years, but its results are typically less impressive compared to the CNN ones. In this work we propose a novel style-transfer algorithm that extends the texture-synthesis work of Kwatra et al. (2005), while aiming to get stylized images that come closer in quality to the CNN ones. We modify Kwatra's algorithm in several key ways in order to achieve the desired transfer, with emphasis on a consistent way of keeping the content intact in selected regions, while producing hallucinated and rich style in others. The results obtained are visually pleasing and diverse, shown to be competitive with the recent CNN style-transfer algorithms. The proposed algorithm is fast and flexible, being able to process any pair of content + style images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 60,818 |
1007.5110 | Fully Dynamic Data Structure for Top-k Queries on Uncertain Data | Top-$k$ queries allow end-users to focus on the most important (top-$k$) answers amongst those which satisfy the query. In traditional databases, a user defined score function assigns a score value to each tuple and a top-$k$ query returns $k$ tuples with the highest score. In uncertain database, top-$k$ answer depends not only on the scores but also on the membership probabilities of tuples. Several top-$k$ definitions covering different aspects of score-probability interplay have been proposed in recent past~\cite{R10,R4,R2,R8}. Most of the existing work in this research field is focused on developing efficient algorithms for answering top-$k$ queries on static uncertain data. Any change (insertion, deletion of a tuple or change in membership probability, score of a tuple) in underlying data forces re-computation of query answers. Such re-computations are not practical considering the dynamic nature of data in many applications. In this paper, we propose a fully dynamic data structure that uses ranking function $PRF^e(\alpha)$ proposed by Li et al.~\cite{R8} under the generally adopted model of $x$-relations~\cite{R11}. $PRF^e$ can effectively approximate various other top-$k$ definitions on uncertain data based on the value of parameter $\alpha$. An $x$-relation consists of a number of $x$-tuples, where $x$-tuple is a set of mutually exclusive tuples (up to a constant number) called alternatives. Each $x$-tuple in a relation randomly instantiates into one tuple from its alternatives. For an uncertain relation with $N$ tuples, our structure can answer top-$k$ queries in $O(k\log N)$ time, handles an update in $O(\log N)$ time and takes $O(N)$ space. Finally, we evaluate practical efficiency of our structure on both synthetic and real data. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 7,143 |
1611.08913 | A Simple Model of Attentional Blink | The attentional blink (AB) effect is the reduced ability of subjects to report a second target stimulus (T2) among a rapidly presented series of non-target stimuli, when it appears within a time window of about 200-500 ms after a first target (T1). We present a simple dynamical systems model explaining the AB as resulting from the temporal response dynamics of a stochastic, linear system with threshold, whose output represents the amount of attentional resources allocated to the incoming sensory stimuli. The model postulates that the available attention capacity is limited by activity of the default mode network (DMN), a correlated set of brain regions related to task irrelevant processing which is known to exhibit reduced activation following mental training such as mindfulness meditation. The model provides a parsimonious account relating key findings from the AB, DMN and meditation research literature, and suggests some new testable predictions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 64,583 |
2102.07165 | Corrective Shared Autonomy for Addressing Task Variability | Many tasks, particularly those involving interaction with the environment, are characterized by high variability, making robotic autonomy difficult. One flexible solution is to introduce the input of a human with superior experience and cognitive abilities as part of a shared autonomy policy. However, current methods for shared autonomy are not designed to address the wide range of necessary corrections (e.g., positions, forces, execution rate, etc.) that the user may need to provide to address task variability. In this paper, we present corrective shared autonomy, where users provide corrections to key robot state variables on top of an otherwise autonomous task model. We provide an instantiation of this shared autonomy paradigm and demonstrate its viability and benefits such as low user effort and physical demand via a system-level user study on three tasks involving variability situated in aircraft manufacturing. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 220,015 |
2208.08991 | Optimized Equivalent Linearization for Random Vibration | A fundamental limitation of various Equivalent Linearization Methods (ELMs) in nonlinear random vibration analysis is that they are approximate by their nature. A quantity of interest estimated from an ELM has no guarantee to be the same as the solution of the original nonlinear system. In this study, we tackle this fundamental limitation. We sequentially address the following two questions: i) given an equivalent linear system obtained from any ELM, how do we construct an estimator so that, as the linear system simulations are guided by a limited number of nonlinear system simulations, the estimator converges on the nonlinear system solution? ii) how to construct an optimized equivalent linear system such that the estimator approaches the nonlinear system solution as quickly as possible? The first question is theoretically straightforward since classical Monte Carlo techniques such as the control variates and importance sampling can improve upon the solution of any surrogate model. We adapt the well-known Monte Carlo theories into the specific context of equivalent linearization. The second question is challenging, especially when rare event probabilities are of interest. We develop specialized methods to construct and optimize linear systems. In the context of uncertainty quantification (UQ), the proposed optimized ELM can be viewed as a physical surrogate model-based UQ method. The embedded physical equations endow the surrogate model with the capability to handle high-dimensional random vectors in stochastic dynamics analysis. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 313,557 |
2310.06148 | Understanding Transfer Learning and Gradient-Based Meta-Learning
Techniques | Deep neural networks can yield good performance on various tasks but often require large amounts of data to train them. Meta-learning received considerable attention as one approach to improve the generalization of these networks from a limited amount of data. Whilst meta-learning techniques have been observed to be successful at this in various scenarios, recent results suggest that when evaluated on tasks from a different data distribution than the one used for training, a baseline that simply finetunes a pre-trained network may be more effective than more complicated meta-learning techniques such as MAML, which is one of the most popular meta-learning techniques. This is surprising as the learning behaviour of MAML mimics that of finetuning: both rely on re-using learned features. We investigate the observed performance differences between finetuning, MAML, and another meta-learning technique called Reptile, and show that MAML and Reptile specialize for fast adaptation in low-data regimes of similar data distribution as the one used for training. Our findings show that both the output layer and the noisy training conditions induced by data scarcity play important roles in facilitating this specialization for MAML. Lastly, we show that the pre-trained features as obtained by the finetuning baseline are more diverse and discriminative than those learned by MAML and Reptile. Due to this lack of diversity and distribution specialization, MAML and Reptile may fail to generalize to out-of-distribution tasks whereas finetuning can fall back on the diversity of the learned features. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 398,440 |
1910.00913 | Predictive Control Based on Reduced Order Model for temperature
homogeneity in a resin transfer molding tool for thermoset materials | Resin Transfer Molding (RTM), which has attracted much attention in recent years for lightweight manufacturing, represents an important challenge in terms of control technology. During the process, a resin fills the cavity where a reinforcement fabric has previously been layered. This resin undergoes a chemical reaction which is thermally activated. Therefore, assuring a proper reaction requires precise control of temperature in the entire mold cavity. Three factors make this control problem especially hard: the coupling among the large number of actuators and sensors, the variability of the test conditions, and the power limitations of the electric actuators, which do not offer cooling capability. The present work describes an optimized Model Predictive Control (MPC) architecture capable of handling these difficulties while also achieving the tight control requirements of the application. The thermal distribution inside the mold cavity is incorporated into the controller via a simplified Reduced Order Model (ROM). This representation is obtained from data from an experimentally validated Finite Element Model (FEM), using AutoRegressive model with eXogenous terms (ARX) identification. In order to maintain the simplicity of this linear representation, the time-varying model parameters are estimated using a perturbation observer. Additionally, the performance of the basic algorithm is improved in two ways: first, by an augmented observer that estimates the temperature distribution at an extended spatial resolution; and second, by a symmetry condition in the calculation of the control commands. The developed architectures have successfully been implemented in an RTM tool with fulfillment of the control requirements. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 147,791 |
0705.0252 | Power Allocation for Discrete-Input Non-Ergodic Block-Fading Channels | We consider power allocation algorithms for fixed-rate transmission over Nakagami-m non-ergodic block-fading channels with perfect transmitter and receiver channel state information and discrete input signal constellations under both short- and long-term power constraints. Optimal power allocation schemes are shown to be direct applications of previous results in the literature. We show that the SNR exponent of the optimal short-term scheme is given by the Singleton bound. We also illustrate the significant gains available by employing long-term power constraints. Due to the nature of the expressions involved, the complexity of optimal schemes may be prohibitive for system implementation. We propose simple sub-optimal power allocation schemes whose outage probability performance is very close to the minimum outage probability obtained by optimal schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 139 |
2107.13270 | A Reflection on Learning from Data: Epistemology Issues and Limitations | Although learning from data is effective and has achieved significant milestones, it has many challenges and limitations. Learning from data starts from observations and then proceeds to broader generalizations. This framework is controversial in science, yet it has achieved remarkable engineering successes. This paper reflects on some epistemological issues and some of the limitations of the knowledge discovered in data. The document discusses the common perception that getting more data is the key to achieving better machine learning models from theoretical and practical perspectives. The paper sheds some light on the shortcomings of using generic mathematical theories to describe the process. It further highlights the need for theories specialized in learning from data. While more data generally improves the performance of machine learning models, the relation in practice is shown to be at best logarithmic; beyond a certain limit, more data stabilizes or even degrades the models. Recent work in reinforcement learning showed that the trend is shifting away from data-oriented approaches and relying more on algorithms. The paper concludes that learning from data is hindered by many limitations. Hence, an approach that has an intensional orientation is needed. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,153 |
2003.00710 | Learned Enrichment of Top-View Grid Maps Improves Object Detection | We propose an object detector for top-view grid maps which is additionally trained to generate an enriched version of its input. Our goal in the joint model is to improve generalization by regularizing towards structural knowledge in form of a map fused from multiple adjacent range sensor measurements. This training data can be generated in an automatic fashion, thus does not require manual annotations. We present an evidential framework to generate training data, investigate different model architectures and show that predicting enriched inputs as an additional task can improve object detection performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 166,390 |
2106.05316 | Raman spectral analysis of mixtures with one-dimensional convolutional
neural network | Recently, the combination of robust one-dimensional convolutional neural networks (1-D CNNs) and Raman spectroscopy has shown great promise in rapid identification of unknown substances with good accuracy. Using this technique, researchers can recognize a pure compound and distinguish it from unknown substances in a mixture. The novelty of this approach is that the trained neural network operates automatically without any pre- or post-processing of data. Some studies have attempted to extend this technique to the classification of pure compounds in an unknown mixture. However, the application of 1-D CNNs has typically been restricted to binary classifications of pure compounds. Here we will highlight a new approach in spectral recognition and quantification of chemical components in a multicomponent mixture. Two 1-D CNN models, RaMixNet I and II, have been developed for this purpose. The former is for rapid classification of components in a mixture while the latter is for quantitative determination of those constituents. In the proposed method, there is no limit to the number of compounds in a mixture. A data augmentation method is also introduced by adding random baselines to the Raman spectra. The experimental results revealed that the classification accuracy of RaMixNet I and II is 100% for analysis of unknown test mixtures; at the same time, the RaMixNet II model may achieve a regression accuracy of 88% for the quantification of each component. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 240,046 |
2409.01497 | DiversityMedQA: Assessing Demographic Biases in Medical Diagnosis using
Large Language Models | As large language models (LLMs) gain traction in healthcare, concerns about their susceptibility to demographic biases are growing. We introduce DiversityMedQA, a novel benchmark designed to assess LLM responses to medical queries across diverse patient demographics, such as gender and ethnicity. By perturbing questions from the MedQA dataset, which comprises medical board exam questions, we created a benchmark that captures the nuanced differences in medical diagnosis across varying patient profiles. Our findings reveal notable discrepancies in model performance when tested against these demographic variations. Furthermore, to ensure the perturbations were accurate, we also propose a filtering strategy that validates each perturbation. By releasing DiversityMedQA, we provide a resource for evaluating and mitigating demographic bias in LLM medical diagnoses. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 485,354 |
2111.10614 | GMSRF-Net: An improved generalizability with global multi-scale residual
fusion network for polyp segmentation | Colonoscopy is a gold-standard procedure but is highly operator-dependent. Efforts have been made to automate the detection and segmentation of polyps, a precancerous precursor, to effectively minimize the miss rate. Widely used computer-aided polyp segmentation systems built on encoder-decoder architectures have achieved high accuracy. However, polyp segmentation datasets collected from varied centers can follow different imaging protocols, leading to differences in data distribution. As a result, most methods suffer from performance drops and require re-training for each specific dataset. We address this generalizability issue by proposing a global multi-scale residual fusion network (GMSRF-Net). Our proposed network maintains high-resolution representations while performing multi-scale fusion operations for all resolution scales. To further leverage scale information, we design cross multi-scale attention (CMSA) and multi-scale feature selection (MSFS) modules within the GMSRF-Net. The repeated fusion operations gated by CMSA and MSFS demonstrate improved generalizability of the network. Experiments conducted on two different polyp segmentation datasets show that our proposed GMSRF-Net outperforms the previous top-performing state-of-the-art method by 8.34% and 10.31% on unseen CVC-ClinicDB and unseen Kvasir-SEG, in terms of Dice coefficient. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,384 |
1205.6605 | Template-Cut: A Pattern-Based Segmentation Paradigm | We present a scale-invariant, template-based segmentation paradigm that sets up a graph and performs a graph cut to separate an object from the background. Typically graph-based schemes distribute the nodes of the graph uniformly and equidistantly on the image, and use a regularizer to bias the cut towards a particular shape. The strategy of uniform and equidistant nodes does not allow the cut to prefer more complex structures, especially when areas of the object are indistinguishable from the background. We propose a solution by introducing the concept of a "template shape" of the target object in which the nodes are sampled non-uniformly and non-equidistantly on the image. We evaluate it on 2D-images where the object's textures and backgrounds are similar, and large areas of the object have the same gray level appearance as the background. We also evaluate it in 3D on 60 brain tumor datasets for neurosurgical planning purposes. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 16,236 |
2110.08012 | A Survey on State-of-the-art Techniques for Knowledge Graphs
Construction and Challenges ahead | The global datasphere is increasing fast and is expected to reach 175 Zettabytes by 2025. However, most of the content is unstructured and is not understandable by machines. Structuring this data into a knowledge graph enables multitudes of intelligent applications such as deep question answering, recommendation systems, semantic search, etc. The knowledge graph is an emerging technology that allows logical reasoning and uncovers new insights using content along with context. Thereby, it provides the necessary syntax and reasoning semantics that enable machines to solve complex problems in healthcare, security, finance, economics, and business. As an outcome, enterprises are putting their effort into constructing and maintaining knowledge graphs to support various downstream applications. Manual approaches are too expensive; automated schemes can reduce the cost of building knowledge graphs by a factor of 15-250. This paper critiques state-of-the-art automated techniques for producing knowledge graphs of near-human quality autonomously. Additionally, it highlights different research issues that need to be addressed to deliver high-quality knowledge graphs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | false | 261,212 |
2011.00613 | An Information-Geometric Distance on the Space of Tasks | This paper prescribes a distance between learning tasks modeled as joint distributions on data and labels. Using tools in information geometry, the distance is defined to be the length of the shortest weight trajectory on a Riemannian manifold as a classifier is fitted on an interpolated task. The interpolated task evolves from the source to the target task using an optimal transport formulation. This distance, which we call the "coupled transfer distance" can be compared across different classifier architectures. We develop an algorithm to compute the distance which iteratively transports the marginal on the data of the source task to that of the target task while updating the weights of the classifier to track this evolving data distribution. We develop theory to show that our distance captures the intuitive idea that a good transfer trajectory is the one that keeps the generalization gap small during transfer, in particular at the end on the target task. We perform thorough empirical validation and analysis across diverse image classification datasets to show that the coupled transfer distance correlates strongly with the difficulty of fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 204,295 |
2411.07841 | Federated Learning for Discrete Optimal Transport with Large Population
under Incomplete Information | Optimal transport is a powerful framework for the efficient allocation of resources between sources and targets. However, traditional models often struggle to scale effectively in the presence of large and heterogeneous populations. In this work, we introduce a discrete optimal transport framework designed to handle large-scale, heterogeneous target populations, characterized by type distributions. We address two scenarios: one where the type distribution of targets is known, and one where it is unknown. For the known distribution, we propose a fully distributed algorithm to achieve optimal resource allocation. In the case of unknown distribution, we develop a federated learning-based approach that enables efficient computation of the optimal transport scheme while preserving privacy. Case studies are provided to evaluate the performance of our learning algorithm. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 507,696 |
2306.03446 | Computational Agent-based Models in Opinion Dynamics: A Survey on Social
Simulations and Empirical Studies | Understanding how an individual changes its attitude, belief, and opinion due to other people's social influences is vital because of its wide implications. A core methodology used to study attitude change under social influence is the agent-based model (ABM). The goal of this review paper is to compare and contrast existing ABMs, which I classify into two families: the deductive ABMs and the inductive ABMs. The former subsumes social simulation studies, and the latter involves human experiments. To facilitate the comparison between ABMs of different formulations, I propose a general unified formulation in which all ABMs can be viewed as special cases. In addition, I show the connections between deductive ABMs and inductive ABMs, and point out their strengths and limitations. At the end of the paper, I identify underexplored areas and suggest future research directions. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 371,338 |
1810.00352 | Getting Robots Unfrozen and Unlost in Dense Pedestrian Crowds | We aim to enable a mobile robot to navigate through environments with dense crowds, e.g., shopping malls, canteens, train stations, or airport terminals. In these challenging environments, existing approaches suffer from two common problems: the robot may get frozen and cannot make any progress toward its goal, or it may get lost due to severe occlusions inside a crowd. Here we propose a navigation framework that handles the robot freezing and the navigation lost problems simultaneously. First, we enhance the robot's mobility and unfreeze the robot in the crowd using a reinforcement learning based local navigation policy developed in our previous work~\cite{long2017towards}, which naturally takes into account the coordination between the robot and the human. Secondly, the robot takes advantage of its excellent local mobility to recover from its localization failure. In particular, it dynamically chooses to approach a set of recovery positions with rich features. To the best of our knowledge, our method is the first approach that simultaneously solves the freezing problem and the navigation lost problem in dense crowds. We evaluate our method in both simulated and real-world environments and demonstrate that it outperforms the state-of-the-art approaches. Videos are available at https://sites.google.com/view/rlslam. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 109,160 |
1906.10250 | Swap Dynamics in Single-Peaked Housing Markets | This paper focuses on the problem of fairly and efficiently allocating resources to agents. We consider a specific setting, usually referred to as a housing market, where each agent must receive exactly one resource (and initially owns one). In this framework, in the domain of linear preferences, the Top Trading Cycle (TTC) algorithm is the only procedure satisfying Pareto-optimality, individual rationality and strategy-proofness. Under the restriction of single-peaked preferences, Crawler enjoys the same properties. These two centralized procedures might however involve long trading cycles. In this paper we focus instead on procedures involving the shortest cycles: bilateral swap-deals. In such swap dynamics, the agents perform pairwise mutually improving deals until reaching a swap-stable allocation (no improving swap-deal is possible). We prove that in the single-peaked domain every swap-stable allocation is Pareto-optimal, showing the efficiency of the swap dynamics. In fact, this domain turns out to be maximal when it comes to guaranteeing this property. Besides, both the outcome of TTC and Crawler can always be reached by sequences of swaps. However, some Pareto-optimal allocations are not reachable through improving swap-deals. We further analyze the outcome of swap dynamics through social welfare notions, in our context the average or minimum rank of the resources obtained by agents in the final allocation. We start by providing a worst-case analysis of these procedures. Finally, we present an extensive experimental study in which different versions of swap dynamics are compared to other existing allocation procedures. We show that they exhibit good results on average in this domain, under different cultures for generating synthetic data. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | 136,389 |
2108.09737 | A Transformer Architecture for Stress Detection from ECG | The electrocardiogram (ECG) has been widely used for emotion recognition. This paper presents a deep neural network based on convolutional layers and a transformer mechanism to detect stress using ECG signals. We perform leave-one-subject-out experiments on two publicly available datasets, WESAD and SWELL-KW, to evaluate our method. Our experiments show that the proposed model achieves strong results, comparable to or better than the state-of-the-art models for ECG-based stress detection on these two datasets. Moreover, our method is end-to-end, does not require handcrafted features, and can learn robust representations with only a few convolutional blocks and the transformer component. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 251,698 |
2407.03778 | From Data to Commonsense Reasoning: The Use of Large Language Models for
Explainable AI | Commonsense reasoning is a difficult task for a computer but a critical skill for an artificial intelligence (AI). It can enhance the explainability of AI models by enabling them to provide intuitive and human-like explanations for their decisions. This is necessary in many areas, especially in question answering (QA), which is one of the most important tasks of natural language processing (NLP). Over time, a multitude of methods have emerged for solving commonsense reasoning problems, such as knowledge-based approaches using formal logic or linguistic analysis. In this paper, we investigate the effectiveness of large language models (LLMs) on different QA tasks with a focus on their abilities in reasoning and explainability. We study three LLMs: GPT-3.5, Gemma and Llama 3. We further evaluate the LLM results by means of a questionnaire. We demonstrate the ability of LLMs to reason with commonsense, as the models outperform humans on different datasets. While GPT-3.5's accuracy ranges from 56% to 93% on various QA benchmarks, Llama 3 achieved a mean accuracy of 90% on all eleven datasets, thereby outperforming humans on all datasets with, on average, 21% higher accuracy across ten datasets. Furthermore, we find that, in the sense of explainable artificial intelligence (XAI), GPT-3.5 provides good explanations for its decisions. Our questionnaire revealed that 66% of participants rated GPT-3.5's explanations as either "good" or "excellent". Taken together, these findings enrich our understanding of current LLMs and pave the way for future investigations of reasoning and explainability. | true | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 470,286 |
2402.18061 | On the use of Silver Standard Data for Zero-shot Classification Tasks in
Information Extraction | The superior performance of supervised classification methods in the information extraction (IE) area heavily relies on large amounts of gold standard data. Recent zero-shot classification methods convert the task to other NLP tasks (e.g., textual entailment) and use off-the-shelf models of these NLP tasks to directly perform inference on the test data without using large amounts of IE annotation data. A potentially valuable by-product of these methods is the large-scale silver standard data, i.e., data pseudo-labeled by the off-the-shelf models of other NLP tasks. However, there has been no further investigation into the use of these data. In this paper, we propose a new framework, Clean-LaVe, which aims to utilize silver standard data to enhance zero-shot performance. Clean-LaVe includes four phases: (1) obtaining silver data; (2) identifying relatively clean data from the silver data; (3) finetuning the off-the-shelf model using the clean data; (4) inference on the test data. The experimental results show that Clean-LaVe can outperform the baseline by 5% and 6% on the TACRED and Wiki80 datasets in the zero-shot relation classification task, by 3%-7% on Smile (Korean and Polish) in the zero-shot cross-lingual relation classification task, and by 8% on ACE05-E+ in the zero-shot event argument classification task. The code is shared at https://github.com/wjw136/Clean_LaVe.git. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 433,265 |
2406.16838 | From Decoding to Meta-Generation: Inference-time Algorithms for Large
Language Models | One of the most striking findings in modern research on large language models (LLMs) is that scaling up compute during training leads to better results. However, less attention has been given to the benefits of scaling compute during inference. This survey focuses on these inference-time approaches. We explore three areas under a unified mathematical formalism: token-level generation algorithms, meta-generation algorithms, and efficient generation. Token-level generation algorithms, often called decoding algorithms, operate by sampling a single token at a time or constructing a token-level search space and then selecting an output. These methods typically assume access to a language model's logits, next-token distributions, or probability scores. Meta-generation algorithms work on partial or full sequences, incorporating domain knowledge, enabling backtracking, and integrating external information. Efficient generation methods aim to reduce token costs and improve the speed of generation. Our survey unifies perspectives from three research communities: traditional natural language processing, modern LLMs, and machine learning systems. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 467,294 |
2306.13988 | SAM++: Enhancing Anatomic Matching using Semantic Information and
Structural Inference | Medical images like CT and MRI provide detailed information about the internal structure of the body, and identifying key anatomical structures from these images plays a crucial role in clinical workflows. Current methods treat this as a registration or key-point regression task, which has limitations in accurate matching and can only handle predefined landmarks. Recently, some methods have been introduced to address these limitations. One such method, called SAM, proposes using a dense self-supervised approach to learn a distinct embedding for each point on the CT image and achieves promising results. Nonetheless, SAM may still face difficulties when dealing with structures that have similar appearances but different semantic meanings, or similar semantic meanings but different appearances. To overcome these limitations, we propose SAM++, a framework that simultaneously learns appearance and semantic embeddings with a novel fixed-points matching mechanism. We tested the SAM++ framework on two challenging tasks, demonstrating a significant improvement over the performance of SAM and outperforming other existing methods. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 375,487 |
2008.13551 | Coding Constructions for Efficient Oblivious Transfer from Noisy
Channels | We consider oblivious transfer protocols performed over binary symmetric channels in a malicious setting where parties will actively cheat if they can. We provide constructions purely based on coding theory that achieve an explicit positive rate, the essential ingredient being the existence of linear codes whose Schur products are asymptotically good. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 193,881 |
1901.03638 | A General Optimization-based Framework for Local Odometry Estimation
with Multiple Sensors | Nowadays, robots are equipped with more and more sensors to increase robustness and autonomy. We have seen various sensor suites equipped on different platforms, such as stereo cameras on ground vehicles, a monocular camera with an IMU (Inertial Measurement Unit) on mobile phones, and stereo cameras with an IMU on aerial robots. Although many algorithms for state estimation have been proposed in the past, they are usually applied to a single sensor or a specific sensor suite. Few of them can be employed with multiple sensor choices. In this paper, we propose a general optimization-based framework for odometry estimation, which supports multiple sensor sets. Every sensor is treated as a general factor in our framework. Factors which share common state variables are summed together to build the optimization problem. We further demonstrate the generality with visual and inertial sensors, which form three sensor suites (stereo cameras, a monocular camera with an IMU, and stereo cameras with an IMU). We validate the performance of our system on public datasets and through real-world experiments with multiple sensors. Results are compared against other state-of-the-art algorithms. We highlight that our system is a general framework, which can easily fuse various sensors in a pose graph optimization. Our implementations are open source\footnote{https://github.com/HKUST-Aerial-Robotics/VINS-Fusion}. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 118,452 |
1805.09294 | Likelihood-free inference with emulator networks | Approximate Bayesian Computation (ABC) provides methods for Bayesian inference in simulation-based stochastic models which do not permit tractable likelihoods. We present a new ABC method which uses probabilistic neural emulator networks to learn synthetic likelihoods on simulated data -- both local emulators which approximate the likelihood for specific observed data, as well as global ones which are applicable to a range of data. Simulations are chosen adaptively using an acquisition function which takes into account uncertainty about either the posterior distribution of interest, or the parameters of the emulator. Our approach does not rely on user-defined rejection thresholds or distance functions. We illustrate inference with emulator networks on synthetic examples and on a biophysical neuron model, and show that emulators allow accurate and efficient inference even on high-dimensional problems which are challenging for conventional ABC approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 98,388 |
2404.08811 | Reducing the Barriers to Entry for Foundation Model Training | The world has recently witnessed an unprecedented acceleration in demand for Machine Learning and Artificial Intelligence applications. This spike in demand has imposed tremendous strain on the underlying technology stack: the supply chain, GPU-accelerated hardware, software, datacenter power density, and energy consumption. If the current technological trajectory holds, future demand implies insurmountable spending trends, further limiting market players, stifling innovation, and widening the technology gap. To address these challenges, we propose a fundamental change in the AI training infrastructure throughout the technology ecosystem. The changes require advancements in supercomputing and novel AI training approaches, from high-end software to low-level hardware, microprocessor, and chip design, while advancing the energy efficiency required by a sustainable infrastructure. This paper presents an analytical framework that quantitatively highlights the challenges and points to the opportunities to reduce the barriers to entry for training large language models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 446,410 |
2301.02383 | Deep Biological Pathway Informed Pathology-Genomic Multimodal Survival
Prediction | The integration of multi-modal data, such as pathological images and genomic data, is essential for understanding cancer heterogeneity and complexity for personalized treatments, as well as for enhancing survival predictions. Despite the progress made in integrating pathology and genomic data, most existing methods cannot mine the complex inter-modality relations thoroughly. Additionally, identifying explainable features from these models that govern preclinical discovery and clinical prediction is crucial for cancer diagnosis, prognosis, and therapeutic response studies. We propose PONET, a novel biological pathway-informed pathology-genomic deep model that integrates pathological images and genomic data not only to improve survival prediction but also to identify genes and pathways that cause different survival rates in patients. Empirical results on six of The Cancer Genome Atlas (TCGA) datasets show that our proposed method achieves superior predictive performance and reveals meaningful biological interpretations. The proposed method establishes insight into how to train biologically informed deep networks on multimodal biomedical data, which will have general applicability for understanding diseases and predicting response and resistance to treatment. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 339,495 |
2405.07006 | Word-specific tonal realizations in Mandarin | The pitch contours of Mandarin two-character words are generally understood as being shaped by the underlying tones of the constituent single-character words, in interaction with articulatory constraints imposed by factors such as speech rate, co-articulation with adjacent tones, segmental make-up, and predictability. This study shows that tonal realization is also partially determined by words' meanings. We first show, on the basis of a Taiwan corpus of spontaneous conversations, using the generalized additive regression model, and focusing on the rise-fall tone pattern, that after controlling for effects of speaker and context, word type is a stronger predictor of pitch realization than all the previously established word-form related predictors combined. Importantly, the addition of information about meaning in context improves prediction accuracy even further. We then proceed to show, using computational modeling with context-specific word embeddings, that token-specific pitch contours predict word type with 50% accuracy on held-out data, and that context-sensitive, token-specific embeddings can predict the shape of pitch contours with 30% accuracy. These accuracies, which are an order of magnitude above chance level, suggest that the relation between words' pitch contours and their meanings are sufficiently strong to be functional for language users. The theoretical implications of these empirical findings are discussed. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 453,542 |
2406.17309 | Zero-Shot Long-Form Video Understanding through Screenplay | The Long-form Video Question-Answering task requires the comprehension and analysis of extended video content to respond accurately to questions by utilizing both temporal and contextual information. In this paper, we present MM-Screenplayer, an advanced video understanding system with multi-modal perception capabilities that can convert any video into textual screenplay representations. Unlike previous storytelling methods, we organize video content into scenes as the basic unit, rather than just visually continuous shots. Additionally, we developed a ``Look Back'' strategy to reassess and validate uncertain information, particularly targeting breakpoint mode. MM-Screenplayer achieved the highest score in the CVPR'2024 LOng-form VidEo Understanding (LOVEU) Track 1 Challenge, with a global accuracy of 87.5% and a breakpoint accuracy of 68.8%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 467,519 |
1203.3258 | QoE-aware Media Streaming in Technology and Cost Heterogeneous Networks | We present a framework for studying the problem of media streaming in technology and cost heterogeneous environments. We first address the problem of efficient streaming in a technology-heterogeneous setting. We employ random linear network coding to simplify the packet selection strategies and alleviate issues such as duplicate packet reception. Then, we study the problem of media streaming from multiple cost-heterogeneous access networks. Our objective is to characterize analytically the trade-off between access cost and user experience. We model the Quality of user Experience (QoE) as the probability of interruption in playback as well as the initial waiting time. We design and characterize various control policies, and formulate the optimal control problem using a Markov Decision Process (MDP) with a probabilistic constraint. We present a characterization of the optimal policy using the Hamilton-Jacobi-Bellman (HJB) equation. For a fluid approximation model, we provide an exact and explicit characterization of a threshold policy and prove its optimality using the HJB equation. Our simulation results show that under properly designed control policy, the existence of alternative access technology as a complement for a primary access network can significantly improve the user experience without any bandwidth over-provisioning. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 14,892 |
2411.18476 | A comparison of extended object tracking with multi-modal sensors in
indoor environment | This paper presents a preliminary study of an efficient object tracking approach, comparing the performance of two different 3D point cloud sensory sources: LiDAR and stereo cameras, which have significant price differences. In this preliminary work, we focus on single object tracking. We first developed a fast heuristic object detector that utilizes prior information about the environment and target. The resulting target points are subsequently fed into an extended object tracking framework, where the target shape is parameterized using a star-convex hypersurface model. Experimental results show that our object tracking method using a stereo camera achieves performance similar to that of a LiDAR sensor, with a cost difference of more than tenfold. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 511,885 |
2207.04161 | Few 'Zero Level Set'-Shot Learning of Shape Signed Distance Functions in
Feature Space | We explore a new idea for learning based shape reconstruction from a point cloud, based on the recently popularized implicit neural shape representations. We cast the problem as a few-shot learning of implicit neural signed distance functions in feature space, that we approach using gradient based meta-learning. We use a convolutional encoder to build a feature space given the input point cloud. An implicit decoder learns to predict signed distance values given points represented in this feature space. Setting the input point cloud, i.e. samples from the target shape function's zero level set, as the support (i.e. context) in few-shot learning terms, we train the decoder such that it can adapt its weights to the underlying shape of this context with a few (5) tuning steps. We thus combine two types of implicit neural network conditioning mechanisms simultaneously for the first time, namely feature encoding and meta-learning. Our numerical and qualitative evaluation shows that in the context of implicit reconstruction from a sparse point cloud, our proposed strategy, i.e. meta-learning in feature space, outperforms existing alternatives, namely standard supervised learning in feature space, and meta-learning in euclidean space, while still providing fast inference. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 307,095 |
2210.06111 | THUEE system description for NIST 2020 SRE CTS challenge | This paper presents the system description of the THUEE team for the NIST 2020 Speaker Recognition Evaluation (SRE) conversational telephone speech (CTS) challenge. Subsystems including ResNet74, ResNet152, and RepVGG-B2 are developed as speaker embedding extractors in this evaluation. We used a combined AM-Softmax and AAM-Softmax based loss function, namely CM-Softmax. We adopted a two-staged training strategy to further improve system performance. We fused all individual systems as our final submission. Our approach leads to excellent performance and ranks 1st in the challenge. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 323,144 |
2409.09272 | SafeEar: Content Privacy-Preserving Audio Deepfake Detection | Text-to-Speech (TTS) and Voice Conversion (VC) models have exhibited remarkable performance in generating realistic and natural audio. However, their dark side, the audio deepfake, poses a significant threat to both society and individuals. Existing countermeasures largely focus on determining the genuineness of speech based on complete original audio recordings, which, however, often contain private content. This oversight may keep deepfake detection out of many applications, particularly scenarios involving sensitive information like business secrets. In this paper, we propose SafeEar, a novel framework that aims to detect deepfake audio without relying on access to the speech content within. Our key idea is to devise a neural audio codec into a novel decoupling model that well separates the semantic and acoustic information from audio samples, and to use only the acoustic information (e.g., prosody and timbre) for deepfake detection. In this way, no semantic content is exposed to the detector. To overcome the challenge of identifying diverse deepfake audio without semantic clues, we enhance our deepfake detector with real-world codec augmentation. Extensive experiments conducted on four benchmark datasets demonstrate SafeEar's effectiveness in detecting various deepfake techniques with an equal error rate (EER) down to 2.02%. Simultaneously, it shields five-language speech content from being deciphered by both machine and human auditory analysis, as demonstrated by word error rates (WERs) all above 93.93% and our user study. Furthermore, our benchmark constructed for anti-deepfake and anti-content-recovery evaluation helps provide a basis for future research in the realms of audio privacy preservation and deepfake detection. | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 488,243 |
2304.02152 | Can Adversarial Networks Make Uninformative Colonoscopy Video Frames
Clinically Informative? | Various artifacts, such as ghost colors, interlacing, and motion blur, hinder diagnosing colorectal cancer (CRC) from videos acquired during colonoscopy. The frames containing these artifacts are called uninformative frames and are present in large proportions in colonoscopy videos. To alleviate the impact of artifacts, we propose an adversarial network based framework to convert uninformative frames to clinically relevant frames. We examine the effectiveness of the proposed approach by evaluating the translated frames for polyp detection using YOLOv5. Preliminary results present improved detection performance along with elegant qualitative outcomes. We also examine the failure cases to determine the directions for future work. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 356,334 |
1503.02917 | A Case Based Reasoning Approach for Answer Reranking in Question
Answering | In this document I present an approach to answer validation and reranking for question answering (QA) systems. A case-based reasoning (CBR) system judges answer candidates for questions from annotated answer candidates for earlier questions. The promise of this approach is that user feedback will result in improved answers from the QA system, due to the growing case base. In the paper, I present the adequate structuring of the case base and the appropriate selection of relevant similarity measures in order to solve the answer validation problem. The structural case base is built from annotated MultiNet graphs, which provide representations for natural language expressions, and corresponding graph similarity measures. I cover a priori relations to experienced answer candidates for former questions. I compare the CBR system's results to current approaches in an experiment integrating CBR into an existing framework for answer validation and reranking. This integration is achieved by adding CBR-related features to the input of a learned ranking model that determines the final answer ranking. In the experiments, based on QA@CLEF questions, the best learned models make heavy use of CBR features. Observing the results with a continually growing case base, I find a positive effect of the case-base size on the accuracy of the CBR subsystem. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 40,991 |
2403.12675 | Pragmatic Competence Evaluation of Large Language Models for the Korean
Language | Benchmarks play a significant role in the current evaluation of Large Language Models (LLMs), yet they often overlook the models' abilities to capture the nuances of human language, primarily focusing on evaluating embedded knowledge and technical skills. To address this gap, our study evaluates how well LLMs understand context-dependent expressions from a pragmatic standpoint, specifically in Korean. We use both Multiple-Choice Questions (MCQs) for automatic evaluation and Open-Ended Questions (OEQs) assessed by human experts. Our results show that GPT-4 leads with scores of 81.11 in MCQs and 85.69 in OEQs, closely followed by HyperCLOVA X. Additionally, while few-shot learning generally improves performance, Chain-of-Thought (CoT) prompting tends to encourage literal interpretations, which may limit effective pragmatic inference. Our findings highlight the need for LLMs to better understand and generate language that reflects human communicative norms. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 439,290 |
2404.13506 | Parameter Efficient Fine Tuning: A Comprehensive Analysis Across
Applications | The rise of deep learning has marked significant progress in fields such as computer vision, natural language processing, and medical imaging, primarily through the adaptation of pre-trained models for specific tasks. Traditional fine-tuning methods, involving adjustments to all parameters, face challenges due to high computational and memory demands. This has led to the development of Parameter Efficient Fine-Tuning (PEFT) techniques, which selectively update parameters to balance computational efficiency with performance. This review examines PEFT approaches, offering a detailed comparison of various strategies and highlighting applications across different domains, including text generation, medical imaging, protein modeling, and speech synthesis. By assessing the effectiveness of PEFT methods in reducing computational load, speeding up training, and lowering memory usage, this paper contributes to making deep learning more accessible and adaptable, facilitating its wider application and encouraging innovation in model optimization. Ultimately, the paper aims to contribute insights into PEFT's evolving landscape, guiding researchers and practitioners in overcoming the limitations of conventional fine-tuning approaches. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 448,325 |
2109.01785 | Node Feature Kernels Increase Graph Convolutional Network Robustness | The robustness of the much-used Graph Convolutional Networks (GCNs) to perturbations of their input is becoming a topic of increasing importance. In this paper, the random GCN is introduced, for which a random matrix theory analysis is possible. This analysis suggests that if the graph is sufficiently perturbed, or in the extreme case random, then the GCN fails to benefit from the node features. It is furthermore observed that enhancing the message passing step in GCNs by adding the node feature kernel to the adjacency matrix of the graph structure solves this problem. An empirical study of a GCN utilised for node classification on six real datasets further confirms the theoretical findings and demonstrates that perturbations of the graph structure can result in GCNs performing significantly worse than Multi-Layer Perceptrons run on the node features alone. In practice, adding a node feature kernel to the message passing of perturbed graphs results in a significant improvement of the GCN's performance, thereby rendering it more robust to graph perturbations. Our code is publicly available at: https://github.com/ChangminWu/RobustGCN. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 253,535 |
2112.02417 | Predicting Bandwidth Utilization on Network Links Using Machine Learning | Predicting the bandwidth utilization on network links can be extremely useful for detecting congestion in order to correct it before it occurs. In this paper, we present a solution to predict the bandwidth utilization between different network links with very high accuracy. A simulated network is created to collect data related to the performance of the network links on every interface. These data are processed and expanded with feature engineering in order to create a training set. We evaluate and compare three types of machine learning algorithms, namely ARIMA (AutoRegressive Integrated Moving Average), MLP (Multi-Layer Perceptron) and LSTM (Long Short-Term Memory), in order to predict future bandwidth consumption. The LSTM outperforms ARIMA and MLP with very accurate predictions, rarely exceeding a 3% error (40% for ARIMA and 20% for the MLP). We then show that the proposed solution can be used in real time with a reaction managed by a Software-Defined Networking (SDN) platform. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 269,839 |
1511.03592 | The Fourier Transform of Poisson Multinomial Distributions and its
Algorithmic Applications | An $(n, k)$-Poisson Multinomial Distribution (PMD) is a random variable of the form $X = \sum_{i=1}^n X_i$, where the $X_i$'s are independent random vectors supported on the set of standard basis vectors in $\mathbb{R}^k.$ In this paper, we obtain a refined structural understanding of PMDs by analyzing their Fourier transform. As our core structural result, we prove that the Fourier transform of PMDs is {\em approximately sparse}, i.e., roughly speaking, its $L_1$-norm is small outside a small set. By building on this result, we obtain the following applications: {\bf Learning Theory.} We design the first computationally efficient learning algorithm for PMDs with respect to the total variation distance. Our algorithm learns an arbitrary $(n, k)$-PMD within variation distance $\epsilon$ using a near-optimal sample size of $\widetilde{O}_k(1/\epsilon^2),$ and runs in time $\widetilde{O}_k(1/\epsilon^2) \cdot \log n.$ Previously, no algorithm with a $\mathrm{poly}(1/\epsilon)$ runtime was known, even for $k=3.$ {\bf Game Theory.} We give the first efficient polynomial-time approximation scheme (EPTAS) for computing Nash equilibria in anonymous games. For normalized anonymous games with $n$ players and $k$ strategies, our algorithm computes a well-supported $\epsilon$-Nash equilibrium in time $n^{O(k^3)} \cdot (k/\epsilon)^{O(k^3\log(k/\epsilon)/\log\log(k/\epsilon))^{k-1}}.$ The best previous algorithm for this problem had running time $n^{(f(k)/\epsilon)^k},$ where $f(k) = \Omega(k^{k^2})$, for any $k>2.$ {\bf Statistics.} We prove a multivariate central limit theorem (CLT) that relates an arbitrary PMD to a discretized multivariate Gaussian with the same mean and covariance, in total variation distance. Our new CLT strengthens the CLT of Valiant and Valiant by completely removing the dependence on $n$ in the error bound. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 48,772 |
2410.00979 | Towards Full-parameter and Parameter-efficient Self-learning For
Endoscopic Camera Depth Estimation | Adaptation methods have recently been developed to adapt depth foundation models to endoscopic depth estimation. However, such approaches typically underperform full training, since they limit the parameter search to a low-rank subspace and alter the training dynamics. Therefore, we propose a full-parameter and parameter-efficient learning framework for endoscopic depth estimation. In the first stage, the subspaces of attention, convolution and multi-layer perceptron modules are adapted simultaneously within different sub-spaces. In the second stage, a memory-efficient optimization is proposed for subspace composition, and the performance is further improved in the united sub-space. Initial experiments on the SCARED dataset demonstrate that the first stage improves the results from 10.2% to 4.1% for Sq Rel, Abs Rel, RMSE and RMSE log in comparison with state-of-the-art models. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 493,551 |
2005.10615 | Accelerated Convergence for Counterfactual Learning to Rank | Counterfactual Learning to Rank (LTR) algorithms learn a ranking model from logged user interactions, often collected using a production system. Employing such an offline learning approach has many benefits compared to an online one, but it is challenging as user feedback often contains high levels of bias. Unbiased LTR uses Inverse Propensity Scoring (IPS) to enable unbiased learning from logged user interactions. One of the major difficulties in applying Stochastic Gradient Descent (SGD) approaches to counterfactual learning problems is the large variance introduced by the propensity weights. In this paper we show that the convergence rate of SGD approaches with IPS-weighted gradients suffers from the large variance introduced by the IPS weights: convergence is slow, especially when there are large IPS weights. To overcome this limitation, we propose a novel learning algorithm, called CounterSample, that has provably better convergence than standard IPS-weighted gradient descent methods. We prove that CounterSample converges faster and complement our theoretical findings with empirical results by performing extensive experimentation in a number of biased LTR scenarios -- across optimizers, batch sizes, and different degrees of position bias. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 178,232 |
2410.20763 | Evaluating LLMs for Targeted Concept Simplification for Domain-Specific Texts | One useful application of NLP models is to support people in reading complex text from unfamiliar domains (e.g., scientific articles). Simplifying the entire text makes it understandable but sometimes removes important details. In contrast, helping adult readers understand difficult concepts in context can enhance their vocabulary and knowledge. In a preliminary human study, we first identify that lack of context and unfamiliarity with difficult concepts is a major reason for adult readers' difficulty with domain-specific text. We then introduce "targeted concept simplification," a simplification task for rewriting text to help readers comprehend text containing unfamiliar concepts. We also introduce WikiDomains, a new dataset of 22k definitions from 13 academic domains paired with a difficult concept within each definition. We benchmark the performance of open-source and commercial LLMs and a simple dictionary baseline on this task across human judgments of ease of understanding and meaning preservation. Interestingly, our human judges preferred explanations about the difficult concept more than simplification of the concept phrase. Further, no single model achieved superior performance across all quality dimensions, and automated metrics also show low correlations with human evaluations of concept simplification ($\sim0.2$), opening up rich avenues for research on personalized human reading comprehension support. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 502,951
2106.08503 | Understanding and Evaluating Racial Biases in Image Captioning | Image captioning is an important task for benchmarking visual reasoning and for enabling accessibility for people with vision impairments. However, as in many machine learning settings, social biases can influence image captioning in undesirable ways. In this work, we study bias propagation pathways within image captioning, focusing specifically on the COCO dataset. Prior work has analyzed gender bias in captions using automatically-derived gender labels; here we examine racial and intersectional biases using manual annotations. Our first contribution is in annotating the perceived gender and skin color of 28,315 of the depicted people after obtaining IRB approval. Using these annotations, we compare racial biases present in both manual and automatically-generated image captions. We demonstrate differences in caption performance, sentiment, and word choice between images of lighter versus darker-skinned people. Further, we find the magnitude of these differences to be greater in modern captioning systems compared to older ones, thus leading to concerns that without proper consideration and mitigation these differences will only become increasingly prevalent. Code and data are available at https://princetonvisualai.github.io/imagecaptioning-bias . | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 241,317
1911.11623 | A Vietnamese information retrieval system for product-price | A price information retrieval (IR) system allows users to search for and view differences among prices of specific products. Building a product-price-driven IR system is a challenging and active research area. Approaches that depend entirely on product information provided by shops via an interface environment encounter database limitations, while automatic systems specifically require product names and commercial websites as their input. For both paradigms, approaches to building a product-price IR system for Vietnamese are still very limited. In this paper, we introduce an automatic Vietnamese IR system for product-price that identifies and stores XPath patterns to extract product prices from commercial websites. Experiments with our system show promising results. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 155,180
2112.13542 | Sparsest Univariate Learning Models Under Lipschitz Constraint | Besides the minimization of the prediction error, two of the most desirable properties of a regression scheme are stability and interpretability. Driven by these principles, we propose continuous-domain formulations for one-dimensional regression problems. In our first approach, we use the Lipschitz constant as a regularizer, which results in an implicit tuning of the overall robustness of the learned mapping. In our second approach, we control the Lipschitz constant explicitly using a user-defined upper bound and make use of a sparsity-promoting regularizer to favor simpler (and, hence, more interpretable) solutions. The theoretical study of the latter formulation is motivated in part by its equivalence, which we prove, with the training of a Lipschitz-constrained two-layer univariate neural network with rectified linear unit (ReLU) activations and weight decay. By proving representer theorems, we show that both problems admit global minimizers that are continuous and piecewise-linear (CPWL) functions. Moreover, we propose efficient algorithms that find the sparsest solution of each problem: the CPWL mapping with the least number of linear regions. Finally, we illustrate numerically the outcome of our formulations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 273,278
2304.14838 | Vision-based Target Pose Estimation with Multiple Markers for the Perching of UAVs | Autonomous Nano Aerial Vehicles have become increasingly popular in surveillance and monitoring operations due to their efficiency and maneuverability. Once a target location has been reached, drones do not have to remain active during the mission. It is possible for the vehicle to perch and stop its motors in such situations to conserve energy, as well as maintain a static position in unfavorable flying conditions. In the perching target estimation phase, the stability and accuracy of a visual camera with markers are a significant challenge. A large marker is rapidly detectable from afar, but as the drone approaches, it quickly disappears from the camera's view. In this paper, a vision-based target pose estimation method using multiple markers is proposed to deal with the above-mentioned problems. First, a perching target with a small marker inside a larger one is designed to improve detection capability at wide and close ranges. Second, the relative poses of the flying vehicle are calculated from detected markers using a monocular camera. Next, a Kalman filter is applied to provide a more stable and reliable pose estimation, especially when measurement data is missing for unexpected reasons. Finally, we introduce an algorithm for merging the pose data from multiple markers. The poses are then sent to the position controller to align the drone with the marker's center and steer it to perch on the target. The experimental results demonstrate the effectiveness and feasibility of the adopted approach. The drone can perch successfully onto the center of the markers with the attached 25mm-diameter rounded magnet. | false | false | false | false | false | false | false | true | false | false | true | true | false | false | false | false | false | false | 361,113
2201.00881 | Balanced Nonadaptive Redundancy Scheduling | Distributed computing systems implement redundancy to reduce the job completion time and variability. Despite a large body of work about computing redundancy, the analytical performance evaluation of redundancy techniques in queuing systems is still an open problem. In this work, we take one step forward to analyze the performance of scheduling policies in systems with redundancy. In particular, we study the pattern of shared servers among replicas of different jobs. To this end, we employ combinatorics and graph theory and define and derive performance indicators using the statistics of the overlaps. We consider two classical nonadaptive scheduling policies: random and round-robin. We then propose a scheduling policy based on combinatorial block designs. Compared with conventional scheduling, the proposed scheduling improves the performance indicators. We study the expansion property of the graphs associated with round-robin and block design-based policies. It turns out the superior performance of the block design-based policy results from better expansion properties of its associated graph. As indicated by the performance indicators, the simulation results show that the block design-based policy outperforms random and round-robin scheduling in different scenarios. Specifically, it reduces the average waiting time in the queue by up to 25% compared to the random policy and by up to 100% compared to the round-robin policy. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 274,087
2206.08265 | Maximum Likelihood Training for Score-Based Diffusion ODEs by High-Order Denoising Score Matching | Score-based generative models have excellent performance in terms of generation quality and likelihood. They model the data distribution by matching a parameterized score network with first-order data score functions. The score network can be used to define an ODE ("score-based diffusion ODE") for exact likelihood evaluation. However, the relationship between the likelihood of the ODE and the score matching objective is unclear. In this work, we prove that matching the first-order score is not sufficient to maximize the likelihood of the ODE, by showing a gap between the maximum likelihood and score matching objectives. To fill this gap, we show that the negative likelihood of the ODE can be bounded by controlling the first, second, and third-order score matching errors; and we further present a novel high-order denoising score matching method to enable maximum likelihood training of score-based diffusion ODEs. Our algorithm guarantees that the higher-order matching error is bounded by the training error and the lower-order errors. We empirically observe that by high-order score matching, score-based diffusion ODEs achieve better likelihood on both synthetic data and CIFAR-10, while retaining the high generation quality. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,061
2410.11744 | DySpec: Faster Speculative Decoding with Dynamic Token Tree Structure | While speculative decoding has recently appeared as a promising direction for accelerating the inference of large language models (LLMs), the speedup and scalability are strongly bounded by the token acceptance rate. Prevalent methods usually organize predicted tokens as independent chains or fixed token trees, which fails to generalize to diverse query distributions. In this paper, we propose DySpec, a faster speculative decoding algorithm with a novel dynamic token tree structure. We begin by bridging the draft distribution and acceptance rate from intuitive and empirical clues, and successfully show that the two variables are strongly correlated. Based on this, we employ a greedy strategy to dynamically expand the token tree at run time. Theoretically, we show that our method can achieve optimal results under mild assumptions. Empirically, DySpec yields a higher acceptance rate and speedup than fixed trees. DySpec can drastically improve the throughput and reduce the latency of token generation across various data distributions and model sizes, significantly outperforming strong competitors, including Specinfer and Sequoia. Under the low-temperature setting, DySpec can improve the throughput by up to 9.1$\times$ and reduce the latency by up to 9.4$\times$ on Llama2-70B. Under the high-temperature setting, DySpec can also improve the throughput by up to 6.21$\times$, despite the increasing difficulty of speculating more than one token per step for the draft model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 498,697
1904.12904 | Neuromorphic Acceleration for Approximate Bayesian Inference on Neural Networks via Permanent Dropout | As neural networks have begun performing increasingly critical tasks for society, ranging from driving cars to identifying candidates for drug development, the value of their ability to perform uncertainty quantification (UQ) in their predictions has risen commensurately. Permanent dropout, a popular method for neural network UQ, involves injecting stochasticity into the inference phase of the model and creating many predictions for each of the test data. This shifts the computational and energy burden of deep neural networks from the training phase to the inference phase. Recent work has demonstrated near-lossless conversion of classical deep neural networks to their spiking counterparts. We use these results to demonstrate the feasibility of conducting the inference phase with permanent dropout on spiking neural networks, mitigating the technique's computational and energy burden, which is essential for its use at scale or on edge platforms. We demonstrate the proposed approach via the Nengo spiking neural simulator on a combination drug therapy dataset for cancer treatment, where UQ is critical. Our results indicate that the spiking approximation gives a predictive distribution practically indistinguishable from that given by the classical network. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 129,245
2001.07960 | A Fixation-based 360{\deg} Benchmark Dataset for Salient Object Detection | Fixation prediction (FP) in panoramic contents has been widely investigated along with the booming trend of virtual reality (VR) applications. However, another issue within the field of visual saliency, salient object detection (SOD), has been seldom explored in 360{\deg} (or omnidirectional) images due to the lack of datasets representative of real scenes with pixel-level annotations. Toward this end, we collect 107 equirectangular panoramas with challenging scenes and multiple object classes. Based on the consistency between FP and explicit saliency judgements, we further manually annotate 1,165 salient objects over the collected images with precise masks under the guidance of real human eye fixation maps. Six state-of-the-art SOD models are then benchmarked on the proposed fixation-based 360{\deg} image dataset (F-360iSOD), by applying a multiple cubic projection-based fine-tuning method. Experimental results show a limitation of the current methods when used for SOD in panoramic images, which indicates the proposed dataset is challenging. Key issues for 360{\deg} SOD are also discussed. The proposed dataset is available at https://github.com/PanoAsh/F-360iSOD. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 161,169
1910.01688 | Bayesian Optimization for Materials Design with Mixed Quantitative and Qualitative Variables | Although Bayesian Optimization (BO) has been employed for accelerating materials design in computational materials engineering, existing works are restricted to problems with quantitative variables. However, real designs of materials systems involve both qualitative and quantitative design variables representing material compositions, microstructure morphology, and processing conditions. For mixed-variable problems, existing BO approaches represent qualitative factors by dummy variables first and then fit a standard Gaussian process (GP) model with numerical variables as the surrogate model. This approach is restrictive theoretically and fails to capture complex correlations between qualitative levels. We present in this paper the integration of a novel latent-variable (LV) approach for mixed-variable GP modeling with the BO framework for materials design. LVGP is a fundamentally different approach that maps qualitative design variables to underlying numerical LV in GP, which has strong physical justification. It provides flexible parameterization and representation of qualitative factors and shows superior modeling accuracy compared to the existing methods. We demonstrate our approach through testing with numerical examples and materials design examples. It is found that in all test examples the mapped LVs provide intuitive visualization and substantial insight into the nature and effects of the qualitative factors. Though materials designs are used as examples, the method presented is generic and can be utilized for other mixed-variable design optimization problems that involve expensive physics-based simulations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 147,999
2112.05393 | A Self-supervised Mixed-curvature Graph Neural Network | Graph representation learning has received increasing attention in recent years. Most existing methods ignore the complexity of graph structures and restrict graphs to a single constant-curvature representation space, which is only suitable for particular kinds of graph structure. Additionally, these methods follow the supervised or semi-supervised learning paradigm, which notably limits their deployment on the unlabeled graphs in real applications. To address these limitations, we make the first attempt to study self-supervised graph representation learning in mixed-curvature spaces. In this paper, we present a novel Self-supervised Mixed-curvature Graph Neural Network (SelfMGNN). Instead of working in one single constant-curvature space, we construct a mixed-curvature space via the Cartesian product of multiple Riemannian component spaces and design hierarchical attention mechanisms for learning and fusing the representations across these component spaces. To enable self-supervised learning, we propose a novel dual contrastive approach. The mixed-curvature Riemannian space actually provides multiple Riemannian views for contrastive learning. We introduce a Riemannian projector to reveal these views, and utilize a well-designed Riemannian discriminator for single-view and cross-view contrastive learning within and across the Riemannian views. Finally, extensive experiments show that SelfMGNN captures the complicated graph structures in reality and outperforms state-of-the-art baselines. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 270,831
2409.02938 | CortexCompile: Harnessing Cortical-Inspired Architectures for Enhanced Multi-Agent NLP Code Synthesis | Current approaches to automated code generation often rely on monolithic models that lack real-time adaptability and scalability. This limitation is particularly evident in complex programming tasks that require dynamic adjustment and efficiency. The integration of neuroscience principles into Natural Language Processing (NLP) has the potential to revolutionize automated code generation. This paper presents CortexCompile, a novel modular system inspired by the specialized functions of the human brain's cortical regions. By emulating the distinct roles of the Prefrontal Cortex, Parietal Cortex, Temporal Lobe, and Motor Cortex, CortexCompile achieves significant advancements in scalability, efficiency, and adaptability compared to traditional monolithic models like GPT-4o. The system's architecture features a Task Orchestration Agent that manages dynamic task delegation and parallel processing, facilitating the generation of highly accurate and optimized code across increasingly complex programming tasks. Experimental evaluations demonstrate that CortexCompile consistently outperforms GPT-4o in development time, accuracy, and user satisfaction, particularly in tasks involving real-time strategy games and first-person shooters. These findings underscore the viability of neuroscience-inspired architectures in addressing the limitations of current NLP models, paving the way for more efficient and human-like AI systems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 485,876
2312.09536 | Riveter: Measuring Power and Social Dynamics Between Entities | Riveter provides a complete easy-to-use pipeline for analyzing verb connotations associated with entities in text corpora. We prepopulate the package with connotation frames of sentiment, power, and agency, which have demonstrated usefulness for capturing social phenomena, such as gender bias, in a broad range of corpora. For decades, lexical frameworks have been foundational tools in computational social science, digital humanities, and natural language processing, facilitating multifaceted analysis of text corpora. But working with verb-centric lexica specifically requires natural language processing skills, reducing their accessibility to other researchers. By organizing the language processing pipeline, providing complete lexicon scores and visualizations for all entities in a corpus, and providing functionality for users to target specific research questions, Riveter greatly improves the accessibility of verb lexica and can facilitate a broad range of future research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 415,772 |
2304.14720 | A Federated Reinforcement Learning Framework for Link Activation in Multi-link Wi-Fi Networks | Next-generation Wi-Fi networks are looking forward to introducing new features like multi-link operation (MLO) to achieve both higher throughput and lower latency. However, given the limited number of available channels, the use of multiple links by a group of contending Basic Service Sets (BSSs) can result in higher interference and channel contention, thus potentially leading to lower performance and reliability. In such a situation, it could be better for all contending BSSs to use fewer links if that contributes to reducing channel access contention. Recently, reinforcement learning (RL) has proven its potential for optimizing resource allocation in wireless networks. However, the independent operation of each wireless network makes it difficult -- if not almost impossible -- for each individual network to learn a good configuration. To solve this issue, in this paper, we propose the use of a Federated Reinforcement Learning (FRL) framework, i.e., a collaborative machine learning approach to train models across multiple distributed agents without exchanging data, to collaboratively learn the best MLO-Link Allocation (LA) strategy for a group of neighboring BSSs. The simulation results show that the FRL-based decentralized MLO-LA strategy achieves better throughput fairness, and thus higher reliability -- because it allows the different BSSs to find a link allocation strategy that maximizes the minimum achieved data rate -- compared to fixed, random, and RL-based MLO-LA schemes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 361,070
2012.08019 | Understanding graph embedding methods and their applications | Graph analytics can lead to better quantitative understanding and control of complex networks, but traditional methods suffer from the high computational cost and excessive memory requirements associated with the high-dimensionality and heterogeneous characteristics of industrial-size networks. Graph embedding techniques can be effective in converting high-dimensional sparse graphs into low-dimensional, dense, and continuous vector spaces while maximally preserving the graph's structural properties. Another emerging type of graph embedding employs Gaussian distribution-based embeddings, which provide important uncertainty estimates. The main goal of graph embedding methods is to pack every node's properties into a vector of smaller dimension, so that node similarity in the original complex, irregular spaces can be easily quantified in the embedded vector spaces using standard metrics. The generated nonlinear and highly informative graph embeddings in the latent space can be conveniently used to address different downstream graph analytics tasks (e.g., node classification, link prediction, community detection, visualization, etc.). In this Review, we present some fundamental concepts in graph analytics and graph embedding methods, focusing in particular on random walk-based and neural network-based methods. We also discuss the emerging deep learning-based dynamic graph embedding methods. We highlight the distinct advantages of graph embedding methods in four diverse applications, and present implementation details and references to open-source software as well as available databases in the Appendix for interested readers to start their exploration into graph analytics. | false | false | false | true | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 211,629
1111.5799 | Spatial Throughput of Mobile Ad Hoc Networks Powered by Energy Harvesting | Designing mobiles to harvest ambient energy such as kinetic activity or electromagnetic radiation will enable wireless networks to be self-sustaining, besides alleviating global warming. In this paper, the spatial throughput of a mobile ad hoc network powered by energy harvesting is analyzed using a stochastic-geometry model. In this model, transmitters are distributed as a Poisson point process and energy arrives at each transmitter randomly with a uniform average rate called the energy arrival rate; upon harvesting sufficient energy, each transmitter transmits with fixed power to an intended receiver under an outage-probability constraint for a target signal-to-interference-and-noise ratio. It is assumed that transmitters store energy in batteries with infinite capacity. By applying random-walk theory, the probability that a transmitter transmits, called the transmission probability, is proved to be equal to one if the energy-arrival rate exceeds the transmission power, or otherwise is equal to their ratio. This result and tools from stochastic geometry are applied to maximize the network throughput for a given energy-arrival rate by optimizing the transmission power. The maximum network throughput is shown to be proportional to the optimal transmission probability, which is equal to one if the transmitter density is below a derived function of the energy-arrival rate, or otherwise is smaller than one and solves a given polynomial equation. Last, the limits of the maximum network throughput are obtained for the extreme cases of high energy-arrival rates and dense networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 13,167
1907.07477 | AVDNet: A Small-Sized Vehicle Detection Network for Aerial Visual Data | Detection of small-sized targets in aerial views is a challenging task due to the small vehicle size, complex background, and monotonic object appearances. In this letter, we propose a one-stage vehicle detection network (AVDNet) to robustly detect small-sized vehicles in aerial scenes. In AVDNet, we introduced ConvRes residual blocks at multiple scales to alleviate the problem of vanishing features for smaller objects caused by the inclusion of deeper convolutional layers. These residual blocks, along with an enlarged output feature map, ensure the robust representation of salient features for small-sized objects. Furthermore, we proposed a recurrent-feature aware visualization (RFAV) technique to analyze the network behavior. We also created a new airborne image data set (ABD) by annotating 1396 new objects in 79 aerial images for our experiments. The effectiveness of AVDNet is validated on VEDAI, DLR-3K, DOTA, and the combined (VEDAI, DLR-3K, DOTA, and ABD) data set. Experimental results demonstrate the significant performance improvement of the proposed method over state-of-the-art detection techniques in terms of mAP, computation, and space complexity. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 138,882
2401.12499 | On the Fundamental Tradeoff of Joint Communication and Quickest Change Detection with State-Independent Data Channels | In this work, we take the initiative in studying the information-theoretic tradeoff between communication and quickest change detection (QCD) under an integrated sensing and communication setting. We formally establish a joint communication and sensing problem for the quickest change detection. We assume a broadcast channel with a transmitter, a communication receiver, and a QCD detector in which only the detection channel is state dependent. For the problem setting, by utilizing constant subblock-composition codes and a modified CuSum detection rule, which we call subblock CuSum (SCS), we provide an inner bound on the information-theoretic tradeoff between communication rate and change point detection delay in the asymptotic regime of vanishing false alarm rate. We further provide a partial converse that matches our inner bound for a certain class of codes. This implies that the SCS detection strategy is asymptotically optimal for our codes as the false alarm rate constraint vanishes. We also present some canonical examples of the tradeoff region for a binary channel, a scalar Gaussian channel, and a MIMO Gaussian channel. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 423,405