Dataset schema (one record per row; fields listed in column order):
- id: string (length 9–16)
- title: string (length 4–278)
- abstract: string (length 3–4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0–541k)
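For readability, each record below lists only the category columns whose flag is true. A minimal sketch of that decoding, assuming a row is available as a plain dict keyed by the column names in the schema above (the `row` dict here is illustrative, mirroring the first record, id 2408.09015):

```python
# Column names taken from the dataset schema; order matches the schema listing.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def row_labels(row):
    """Return the category names whose boolean flag is set for this row."""
    return [col for col in LABEL_COLUMNS if row.get(col)]

# Illustrative row: all flags false except cs.LG, as in the first record.
row = {col: False for col in LABEL_COLUMNS}
row.update({"id": "2408.09015", "cs.LG": True})
print(row_labels(row))  # ['cs.LG']
```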
2408.09015
AdaRank: Disagreement Based Module Rank Prediction for Low-rank Adaptation
With the rise of language and multimodal models of ever-increasing size, pretraining a general-purpose foundational model and adapting it to downstream tasks has become common practice. To this end, adaptation efficiency can be a critical bottleneck given the large model sizes, hence efficient finetuning methods such as LoRA have become prevalent. However, LoRA is typically applied with the same rank across all model layers, despite mounting evidence from transfer learning literature that during finetuning, later layers diverge more from pretrained weights. Inspired by the theory and observations around feature learning and module criticality, we develop a simple model disagreement based technique to predict the rank of a given module relative to the other modules. Empirically, AdaRank generalizes notably better on unseen data than using uniform ranks with the same number of parameters. Compared to prior work, AdaRank has the unique advantage of leaving the pretraining and adaptation stages completely intact: no need for any additional objectives or regularizers, which can hinder adaptation accuracy and performance. Our code is publicly available at https://github.com/google-research/google-research/tree/master/adaptive_low_rank.
Labels: cs.LG
__index_level_0__: 481,247
2206.07483
Blind Estimation of a Doubly Selective OFDM Channel: A Deep Learning Algorithm and Theory
We provide a new-generation solution to the fundamental old problem of doubly selective fading channel estimation for orthogonal frequency division multiplexing (OFDM) systems. For OFDM-based systems, we propose a deep learning (DL)-based blind doubly selective channel estimator. Unlike the corresponding state-of-the-art estimators, this estimator requires no pilot symbols, even during the estimation of a deeply fading doubly selective channel. We also provide the first theory of its kind on the testing mean squared error (MSE) performance of our investigated blind OFDM channel estimator based on over-parameterized ReLU FNNs.
Labels: cs.LG
__index_level_0__: 302,756
2407.15224
PUFFLE: Balancing Privacy, Utility, and Fairness in Federated Learning
Training and deploying Machine Learning models that simultaneously adhere to principles of fairness and privacy while ensuring good utility poses a significant challenge. The interplay between these three factors of trustworthiness is frequently underestimated and remains insufficiently explored. Consequently, many efforts focus on ensuring only two of these factors, neglecting one in the process. The decentralization of the datasets and the variations in distributions among the clients exacerbate the complexity of achieving this ethical trade-off in the context of Federated Learning (FL). For the first time in FL literature, we address these three factors of trustworthiness. We introduce PUFFLE, a high-level parameterised approach that can help in the exploration of the balance between utility, privacy, and fairness in FL scenarios. We prove that PUFFLE can be effective across diverse datasets, models, and data distributions, reducing the model unfairness up to 75%, with a maximum reduction in the utility of 17% in the worst-case scenario, while maintaining strict privacy guarantees during the FL training.
Labels: cs.AI, cs.LG, cs.CR, cs.CY
__index_level_0__: 475,083
2105.10585
Properties of the After Kernel
The Neural Tangent Kernel (NTK) is the wide-network limit of a kernel defined using neural networks at initialization, whose embedding is the gradient of the output of the network with respect to its parameters. We study the "after kernel", which is defined using the same embedding, except after training, for neural networks with standard architectures, on binary classification problems extracted from MNIST and CIFAR-10, trained using SGD in a standard way. For some dataset-architecture pairs, after a few epochs of neural network training, a hard-margin SVM using the network's after kernel is much more accurate than when the network's initial kernel is used. For networks with an architecture similar to VGG, the after kernel is more "global", in the sense that it is less invariant to transformations of input images that disrupt the global structure of the image while leaving the local statistics largely intact. For fully connected networks, the after kernel is less global in this sense. The after kernel tends to be more invariant to small shifts, rotations and zooms; data augmentation does not improve these invariances. The (finite approximation to the) conjugate kernel, obtained using the last layer of hidden nodes, sometimes, but not always, provides a good approximation to the NTK and the after kernel. Training a network with a larger learning rate (while holding the training error constant) produces a better kernel, as measured by the test error of a hard-margin SVM. The after kernels of networks trained with larger learning rates tend to be more global, and more invariant to small shifts, rotations and zooms.
Labels: cs.AI, cs.LG, cs.NE
__index_level_0__: 236,436
1701.08190
Comparative Study Of Data Mining Query Languages
Since the formulation of the Inductive Database (IDB) problem, several Data Mining (DM) languages have been proposed, confirming that the KDD process can be supported via inductive query (IQ) answering. This paper reviews the existing DM languages. We present the important primitives of DM languages and classify the languages according to which primitives they satisfy. In addition, we present each language's syntax and apply each one to a sample database to test a set of KDD operations. This study allows us to highlight each language's capabilities and limits, which is very useful for future work and perspectives.
Labels: cs.AI, cs.DB
__index_level_0__: 67,421
1804.08138
Complex Network Analysis of Men Single ATP Tennis Matches
Who are the most significant players in the history of men's tennis? Is the official ATP ranking system fair in evaluating players' scores? Which players deserve the most attention given their match records? Which players have never faced each other yet but are likely to play against each other in the future? These are just some of the questions addressed in this paper, supported by data updated to April 2018. To answer the aforementioned questions, complex network science techniques have been applied to several representations of the network of men's singles tennis matches. Additionally, a new predictive algorithm is proposed to forecast the winner of a match.
Labels: cs.AI, Other
__index_level_0__: 95,696
2110.01812
UHP-SOT: An Unsupervised High-Performance Single Object Tracker
An unsupervised online object tracking method that exploits both foreground and background correlations is proposed and named UHP-SOT (Unsupervised High-Performance Single Object Tracker) in this work. UHP-SOT consists of three modules: 1) appearance model update, 2) background motion modeling, and 3) trajectory-based box prediction. A state-of-the-art discriminative correlation filters (DCF) based tracker is adopted by UHP-SOT as the first module. We point out shortcomings of using the first module alone such as failure in recovering from tracking loss and inflexibility in object box adaptation and then propose the second and third modules to overcome them. Both are novel in single object tracking (SOT). We test UHP-SOT on two popular object tracking benchmarks, TB-50 and TB-100, and show that it outperforms all previous unsupervised SOT methods, achieves a performance comparable with the best supervised deep-learning-based SOT methods, and operates at a fast speed (i.e. 22.7-32.0 FPS on a CPU).
Labels: cs.CV
__index_level_0__: 258,901
2403.00886
Evaluating and Correcting Performative Effects of Decision Support Systems via Causal Domain Shift
When predicting a target variable $Y$ from features $X$, the prediction $\hat{Y}$ can be performative: an agent might act on this prediction, affecting the value of $Y$ that we eventually observe. Performative predictions are deliberately prevalent in algorithmic decision support, where a Decision Support System (DSS) provides a prediction for an agent to affect the value of the target variable. When deploying a DSS in high-stakes settings (e.g. healthcare, law, predictive policing, or child welfare screening) it is imperative to carefully assess the performative effects of the DSS. In the case that the DSS serves as an alarm for a predicted negative outcome, naive retraining of the prediction model is bound to result in a model that underestimates the risk, due to the effective workings of the previous model. In this work, we propose to model the deployment of a DSS as causal domain shift and provide novel cross-domain identification results for the conditional expectation $E[Y | X]$, allowing for pre- and post-hoc assessment of the deployment of the DSS, and for retraining of a model that assesses the risk under a baseline policy where the DSS is not deployed. Using a running example, we empirically show that a repeated regression procedure provides a practical framework for estimating these quantities, even when the data is affected by sample selection bias and selective labelling, offering a practical, unified solution for multiple forms of target variable bias.
Labels: cs.LG
__index_level_0__: 434,174
2109.02899
Blockchains through ontologies: the case study of the Ethereum ERC721 standard in OASIS (Extended Version)
Blockchains are gaining momentum due to the interest of industries and people in \emph{decentralized applications} (Dapps), particularly in those for trading assets through digital certificates secured on blockchain, called tokens. As a consequence, providing a clear unambiguous description of any activities carried out on blockchains has become crucial, and we feel the urgency to achieve that description at least for trading. This paper reports on how to leverage the \emph{Ontology for Agents, Systems, and Integration of Services} ("\ONT{}") as a general means for the semantic representation of smart contracts stored on blockchain as software agents. Special attention is paid to non-fungible tokens (NFTs), whose management through the ERC721 standard is presented as a case study.
Labels: cs.AI, cs.CR
__index_level_0__: 253,890
2109.03805
Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking
Panoptic scene understanding and tracking of dynamic agents are essential for robots and automated vehicles to navigate in urban environments. As LiDARs provide accurate illumination-independent geometric depictions of the scene, performing these tasks using LiDAR point clouds provides reliable predictions. However, existing datasets lack diversity in the type of urban scenes and have a limited number of dynamic object instances which hinders both learning of these tasks as well as credible benchmarking of the developed methods. In this paper, we introduce the large-scale Panoptic nuScenes benchmark dataset that extends our popular nuScenes dataset with point-wise groundtruth annotations for semantic segmentation, panoptic segmentation, and panoptic tracking tasks. To facilitate comparison, we provide several strong baselines for each of these tasks on our proposed dataset. Moreover, we analyze the drawbacks of the existing metrics for panoptic tracking and propose the novel instance-centric PAT metric that addresses the concerns. We present exhaustive experiments that demonstrate the utility of Panoptic nuScenes compared to existing datasets and make the online evaluation server available at nuScenes.org. We believe that this extension will accelerate the research of novel methods for scene understanding of dynamic urban environments.
Labels: cs.AI, cs.LG, cs.RO, cs.CV
__index_level_0__: 254,174
2406.16013
Database-Augmented Query Representation for Information Retrieval
Information retrieval models that aim to search for the documents relevant to the given query have shown many successes, which have been applied to diverse tasks. However, the query provided by the user is oftentimes very short, which challenges the retrievers to correctly fetch relevant documents. To tackle this, existing studies have proposed expanding the query with a couple of additional (user-related) features related to the query. Yet, they may be suboptimal to effectively augment the query, though there is plenty of information available to augment it in a relational database. Motivated by this, we present a novel retrieval framework called Database-Augmented Query representation (DAQu), which augments the original query with various (query-related) metadata across multiple tables. In addition, as the number of features in the metadata can be very large and there is no order among them, we encode them with our graph-based set encoding strategy, which considers hierarchies of features in the database without order. We validate DAQu in diverse retrieval scenarios that can incorporate metadata from the relational database, demonstrating that ours significantly enhances overall retrieval performance, compared to existing query augmentation methods.
Labels: cs.AI, cs.IR, cs.CL
__index_level_0__: 466,962
2310.16252
Near-Optimal Pure Exploration in Matrix Games: A Generalization of Stochastic Bandits & Dueling Bandits
We study the sample complexity of identifying the pure strategy Nash equilibrium (PSNE) in a two-player zero-sum matrix game with noise. Formally, we are given a stochastic model where any learner can sample an entry $(i,j)$ of the input matrix $A\in[-1,1]^{n\times m}$ and observe $A_{i,j}+\eta$ where $\eta$ is a zero-mean 1-sub-Gaussian noise. The aim of the learner is to identify the PSNE of $A$, whenever it exists, with high probability while taking as few samples as possible. Zhou et al. (2017) presents an instance-dependent sample complexity lower bound that depends only on the entries in the row and column in which the PSNE lies. We design a near-optimal algorithm whose sample complexity matches the lower bound, up to log factors. The problem of identifying the PSNE also generalizes the problem of pure exploration in stochastic multi-armed bandits and dueling bandits, and our result matches the optimal bounds, up to log factors, in both the settings.
Labels: cs.LG, Other
__index_level_0__: 402,641
1810.08187
Coded Caching for Heterogeneous Systems: An Optimization Perspective
In cache-aided networks, the server populates the cache memories at the users during low-traffic periods, in order to reduce the delivery load during peak-traffic hours. In turn, there exists a fundamental trade-off between the delivery load on the server and the cache sizes at the users. In this paper, we study this trade-off in a multicast network where the server is connected to users with unequal cache sizes and the number of users is less than or equal to the number of library files. We propose centralized uncoded placement and linear delivery schemes which are optimized by solving a linear program. Additionally, we derive a lower bound on the delivery memory trade-off with uncoded placement that accounts for the heterogeneity in cache sizes. We explicitly characterize this trade-off for the case of three end-users, as well as an arbitrary number of end-users when the total memory size at the users is small, and when it is large. Next, we consider a system where the server is connected to the users via rate limited links of different capacities and the server assigns the users' cache sizes subject to a total cache budget. We characterize the optimal cache sizes that minimize the delivery completion time with uncoded placement and linear delivery. In particular, the optimal memory allocation balances between assigning larger cache sizes to users with low capacity links and uniform memory allocation.
Labels: cs.IT
__index_level_0__: 110,777
2203.15867
Image Retrieval from Contextual Descriptions
The ability to integrate context, including perceptual and temporal cues, plays a pivotal role in grounding the meaning of a linguistic utterance. In order to measure to what extent current vision-and-language models master this ability, we devise a new multimodal challenge, Image Retrieval from Contextual Descriptions (ImageCoDe). In particular, models are tasked with retrieving the correct image from a set of 10 minimally contrastive candidates based on a contextual description. As such, each description contains only the details that help distinguish between images. Because of this, descriptions tend to be complex in terms of syntax and discourse and require drawing pragmatic inferences. Images are sourced from both static pictures and video frames. We benchmark several state-of-the-art models, including both cross-encoders such as ViLBERT and bi-encoders such as CLIP, on ImageCoDe. Our results reveal that these models dramatically lag behind human performance: the best variant achieves an accuracy of 20.9 on video frames and 59.4 on static pictures, compared with 90.8 in humans. Furthermore, we experiment with new model variants that are better equipped to incorporate visual and temporal context into their representations, which achieve modest gains. Our hope is that ImageCoDE will foster progress in grounded language understanding by encouraging models to focus on fine-grained visual differences.
Labels: cs.CL, cs.CV
__index_level_0__: 288,569
2112.14040
Deep neural networks for solving forward and inverse problems of (2+1)-dimensional nonlinear wave equations with rational solitons
In this paper, we investigate the forward problems of the data-driven rational solitons for the (2+1)-dimensional KP-I equation and spin-nonlinear Schr\"odinger (spin-NLS) equation via deep neural network learning. Moreover, the inverse problems of the (2+1)-dimensional KP-I equation and spin-NLS equation are studied via deep learning. The main idea of the data-driven forward and inverse problems is to use deep neural networks with the activation function to approximate the solutions of the considered (2+1)-dimensional nonlinear wave equations by optimizing the chosen loss functions related to the considered nonlinear wave equations.
Labels: cs.LG
__index_level_0__: 273,438
1206.0555
Synergy-based Hand Pose Sensing: Reconstruction Enhancement
Low-cost sensing gloves for posture reconstruction provide measurements which are limited in several regards. They are generated through an imperfectly known model, are subject to noise, and may be fewer than the number of Degrees of Freedom (DoFs) of the hand. Under these conditions, direct reconstruction of the hand posture is an ill-posed problem, and performance can be very poor. This paper examines the problem of estimating the posture of a human hand using (low-cost) sensing gloves, and how to improve their performance by exploiting the knowledge of how humans most frequently use their hands. To increase the accuracy of pose reconstruction without modifying the glove hardware - hence basically at no extra cost - we propose to collect, organize, and exploit information on the probabilistic distribution of human hand poses in common tasks. We discuss how a database of such a priori information can be built, represented in a hierarchy of correlation patterns or postural synergies, and fused with glove data in a consistent way, so as to provide a good hand pose reconstruction in spite of insufficient and inaccurate sensing data. Simulations and experiments on a low-cost glove are reported which demonstrate the effectiveness of the proposed techniques.
Labels: cs.RO
__index_level_0__: 16,304
2003.00439
Differential Evolution with Individuals Redistribution for Real Parameter Single Objective Optimization
Differential Evolution (DE) is quite powerful for real parameter single objective optimization. However, the ability of extending or changing search area when falling into a local optimum is still required to be developed in DE for accommodating extremely complicated fitness landscapes with a huge number of local optima. We propose a new flow of DE, termed DE with individuals redistribution, in which a process of individuals redistribution will be called when progress on fitness is low for generations. In such a process, mutation and crossover are standardized, while trial vectors are all kept in selection. Once diversity exceeds a predetermined threshold, our opposition replacement is executed, then algorithm behavior returns to original mode. In our experiments based on two benchmark test suites, we apply individuals redistribution in ten DE algorithms. Versions of the ten DE algorithms based on individuals redistribution are compared with not only original version but also version based on complete restart, where individuals redistribution and complete restart are based on the same entry criterion. Experimental results indicate that, for most of the DE algorithms, version based on individuals redistribution performs better than both original version and version based on complete restart.
Labels: cs.AI
__index_level_0__: 166,305
1812.03304
Real-time Acceleration-continuous Path-constrained Trajectory Planning With Built-in Tradability Between Cruise and Time-optimal Motions
In this paper, a novel real-time acceleration-continuous path-constrained trajectory planning algorithm is proposed with an appealing built-in tradability mechanism between cruise motion and time-optimal motion. Different from existing approaches, the proposed approach smoothens time-optimal trajectories with bang-bang input structures to generate acceleration-continuous trajectories while preserving the completeness property. More importantly, a novel built-in tradability mechanism is proposed and embedded into the trajectory planning framework, so that the proportion of the cruise motion and time-optimal motion can be flexibly adjusted by changing a user-specified functional parameter. Thus, the user can easily apply the trajectory planning algorithm for various tasks with different requirements on motion efficiency and cruise proportion. Moreover, it is shown that feasible trajectories are computed more quickly than optimal trajectories. Rigorous mathematical analysis and proofs are provided for these aforementioned results. Comparative simulation and experimental results on omnidirectional wheeled mobile robots demonstrate the capability of the proposed algorithm in terms of flexible tuning between cruise and time-optimal motions, as well as higher computational efficiency.
Labels: cs.RO
__index_level_0__: 115,981
1805.00713
CrisisMMD: Multimodal Twitter Datasets from Natural Disasters
During natural and man-made disasters, people use social media platforms such as Twitter to post textual and multimedia content to report updates about injured or dead people, infrastructure damage, and missing or found people, among other information types. Studies have revealed that this online information, if processed timely and effectively, is extremely useful for humanitarian organizations to gain situational awareness and plan relief operations. In addition to the analysis of textual content, recent studies have shown that imagery content on social media can boost disaster response significantly. Despite extensive research that mainly focuses on textual content to extract useful information, limited work has focused on the use of imagery content or the combination of both content types. One of the reasons is the lack of labeled imagery data in this domain. Therefore, in this paper, we aim to tackle this limitation by releasing a large multi-modal dataset collected from Twitter during different natural disasters. We provide three types of annotations, which are useful to address a number of crisis response and management tasks for different humanitarian organizations.
Labels: cs.SI, cs.CY
__index_level_0__: 96,494
2209.04114
An Artificial Chemistry Implementation of a Gene Regulatory Network
Gene Regulatory Networks are networks of interactions in biological organisms responsible for determining the production levels of proteins and peptides. Proteins are workers of a cell factory, and their production defines the goal of a cell and its development. Various attempts have been made to model such networks both to understand these biological systems better and to use inspiration from understanding them to solve computational problems. In this work, a biologically more realistic model for gene regulatory networks is proposed, which incorporates Cellular Automata and Artificial Chemistry to model the interactions between regulatory proteins called the Transcription Factors and the regulatory sites of genes. The result of this work shows complex dynamics close to what can be observed in nature. Here, an analysis of the impact of the initial states of the system on the produced dynamics is performed, showing that such evolvable models can be directed towards producing desired protein dynamics.
Labels: cs.NE
__index_level_0__: 316,689
2202.00182
Semi-supervised 3D Object Detection via Temporal Graph Neural Networks
3D object detection plays an important role in autonomous driving and other robotics applications. However, these detectors usually require training on large amounts of annotated data that is expensive and time-consuming to collect. Instead, we propose leveraging large amounts of unlabeled point cloud videos by semi-supervised learning of 3D object detectors via temporal graph neural networks. Our insight is that temporal smoothing can create more accurate detection results on unlabeled data, and these smoothed detections can then be used to retrain the detector. We learn to perform this temporal reasoning with a graph neural network, where edges represent the relationship between candidate detections in different time frames. After semi-supervised learning, our method achieves state-of-the-art detection performance on the challenging nuScenes and H3D benchmarks, compared to baselines trained on the same amount of labeled data. Project and code are released at https://www.jianrenw.com/SOD-TGNN/.
Labels: cs.AI, cs.CV
__index_level_0__: 278,060
2211.12624
Improving Robust Generalization by Direct PAC-Bayesian Bound Minimization
Recent research in robust optimization has shown an overfitting-like phenomenon in which models trained against adversarial attacks exhibit higher robustness on the training set compared to the test set. Although previous work provided theoretical explanations for this phenomenon using a robust PAC-Bayesian bound over the adversarial test error, related algorithmic derivations are at best only loosely connected to this bound, which implies that there is still a gap between their empirical success and our understanding of adversarial robustness theory. To close this gap, in this paper we consider a different form of the robust PAC-Bayesian bound and directly minimize it with respect to the model posterior. The derivation of the optimal solution connects PAC-Bayesian learning to the geometry of the robust loss surface through a Trace of Hessian (TrH) regularizer that measures the surface flatness. In practice, we restrict the TrH regularizer to the top layer only, which results in an analytical solution to the bound whose computational cost does not depend on the network depth. Finally, we evaluate our TrH regularization approach over CIFAR-10/100 and ImageNet using Vision Transformers (ViT) and compare against baseline adversarial robustness algorithms. Experimental results show that TrH regularization leads to improved ViT robustness that either matches or surpasses previous state-of-the-art approaches while at the same time requires less memory and computational cost.
Labels: cs.AI, cs.LG
__index_level_0__: 332,171
2305.07883
Towards Generalizable Medical Image Segmentation with Pixel-wise Uncertainty Estimation
Deep neural networks (DNNs) achieve promising performance in visual recognition under the independent and identically distributed (IID) hypothesis. In contrast, the IID hypothesis is not universally guaranteed in numerous real-world applications, especially in medical image analysis. Medical image segmentation is typically formulated as a pixel-wise classification task in which each pixel is classified into a category. However, this formulation ignores the hard-to-classified pixels, e.g., some pixels near the boundary area, as they usually confuse DNNs. In this paper, we first explore that hard-to-classified pixels are associated with high uncertainty. Based on this, we propose a novel framework that utilizes uncertainty estimation to highlight hard-to-classified pixels for DNNs, thereby improving its generalization. We evaluate our method on two popular benchmarks: prostate and fundus datasets. The results of the experiment demonstrate that our method outperforms state-of-the-art methods.
Labels: cs.CV
__index_level_0__: 364,069
2309.02186
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
Previous animatable 3D-aware GANs for human generation have primarily focused on either the human head or full body. However, head-only videos are relatively uncommon in real life, and full body generation typically does not deal with facial expression control and still has challenges in generating high-quality results. Towards applicable video avatars, we present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements. It is a generative model trained on unstructured 2D image collections without using 3D or video data. For the new task, we base our method on the generative radiance manifold representation and equip it with learnable facial and head-shoulder deformations. A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces, which is critical for portrait images. A pose deformation processing network is developed to generate plausible deformations for challenging regions such as long hair. Experiments show that our method, trained on unstructured 2D images, can generate diverse and high-quality 3D portraits with desired control over different properties.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
389,962
1007.3622
A generalized risk approach to path inference based on hidden Markov models
Motivated by the unceasing interest in hidden Markov models (HMMs), this paper re-examines hidden path inference in these models, using primarily a risk-based framework. While the most common maximum a posteriori (MAP), or Viterbi, path estimator and the minimum error, or Posterior Decoder (PD), have long been around, other path estimators, or decoders, have been either only hinted at or applied more recently and in dedicated applications generally unfamiliar to the statistical learning community. Over a decade ago, however, a family of algorithmically defined decoders aiming to hybridize the two standard ones was proposed (Brushe et al., 1998). The present paper gives a careful analysis of this hybridization approach, identifies several problems and issues with it and other previously proposed approaches, and proposes practical resolutions of those. Furthermore, simple modifications of the classical criteria for hidden path recognition are shown to lead to a new class of decoders. Dynamic programming algorithms to compute these decoders in the usual forward-backward manner are presented. A particularly interesting subclass of such estimators can be also viewed as hybrids of the MAP and PD estimators. Similar to previously proposed MAP-PD hybrids, the new class is parameterized by a small number of tunable parameters. Unlike their algorithmic predecessors, the new risk-based decoders are more clearly interpretable, and, most importantly, work "out of the box" in practice, which is demonstrated on some real bioinformatics tasks and data. Some further generalizations and applications are discussed in conclusion.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
7,086
2302.08969
Deep Reinforcement Learning for mmWave Initial Beam Alignment
We investigate the applicability of deep reinforcement learning algorithms to the adaptive initial access beam alignment problem for mmWave communications using the state-of-the-art proximal policy optimization algorithm as an example. In comparison to recent unsupervised learning based approaches developed to tackle this problem, deep reinforcement learning has the potential to address a new and wider range of applications, since, in principle, no (differentiable) model of the channel and/or the whole system is required for training, and only agent-environment interactions are necessary to learn an algorithm (be it online or using a recorded dataset). We show that, although the chosen off-the-shelf deep reinforcement learning agent fails to perform well when trained on realistic problem sizes, introducing action space shaping in the form of beamforming modules vastly improves the performance, without sacrificing much generalizability. Using this add-on, the agent is able to deliver competitive performance to various state-of-the-art methods on simulated environments, even under realistic problem sizes. This demonstrates that through well-directed modification, deep reinforcement learning may have a chance to compete with other approaches in this area, opening up many straightforward extensions to other/similar scenarios.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
346,249
1701.03342
A Study on Arbitrarily Varying Channels with Causal Side Information at the Encoder
In this work, we study two models of arbitrarily varying channels, when side information is available at the encoder in a causal manner. First, we study the arbitrarily varying channel (AVC) with input and state constraints, when the encoder has state information in a causal manner. Lower and upper bounds on the random code capacity are developed. A lower bound on the deterministic code capacity is established in the case of a message-averaged input constraint. In the setting where a state constraint is imposed on the jammer, while the user is under no constraints, the random code bounds coincide, and the random code capacity is determined. Furthermore, for this scenario, a generalized non-symmetrizability condition is stated, under which the deterministic code capacity coincides with the random code capacity. A second model considered in our work is the arbitrarily varying degraded broadcast channel with causal side information at the encoder (without constraints). We establish inner and outer bounds on both the random code capacity region and the deterministic code capacity region. The capacity region is then determined for a class of channels satisfying a condition on the mutual informations between the strategy variables and the channel outputs. As an example, we show that the condition holds for the arbitrarily varying binary symmetric broadcast channel, and we find the corresponding capacity region.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
66,691
2211.05896
An improved method of delta summation for faster current value selection across filtered subsets of interval and temporal relational data
Aggregation in relational databases is accomplished through hashing and sorting interval data, which is computationally expensive and scales poorly as the data volumes grow. In this paper, we show how quantitative interval and time-series data in relational attributes can be represented using delta summary values rather than absolute values. The need for sorting to determine the row corresponding to some maximum timestamp is negated, reducing the time complexity from at least O(n log(n)) towards O(n) and improving query execution times. We illustrate this new method in the relational algebra, present the implementation algorithmically, and test an implementation in two leading RDBMS products against the use of normal equivalents. We found this delta summation technique to be most effective for use cases with additive, numerical data upon which it is necessary to frequently obtain the latest values, and where the row cardinalities are in the order of 10^5. Our experiments found the proposed new delta summation technique could execute faster than the equivalent standard selection method by up to 22.4%, reducing the overall query cost in some circumstances by up to 24.0% and reducing I/O load by up to 60.6%. However, the technique incurs increased query costs for write operations, higher CPU time and memory allocation, uncertain performance with very low or very high cardinalities, and inconsistent results across different RDBMS platforms.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
329,704
1607.07770
Approximate Policy Iteration for Budgeted Semantic Video Segmentation
This paper formulates and presents a solution to the new problem of budgeted semantic video segmentation. Given a video, the goal is to accurately assign a semantic class label to every pixel in the video within a specified time budget. Typical approaches to such labeling problems, such as Conditional Random Fields (CRFs), focus on maximizing accuracy but do not provide a principled method for satisfying a time budget. For video data, the time required by CRF and related methods is often dominated by the time to compute low-level descriptors of supervoxels across the video. Our key contribution is the new budgeted inference framework for CRF models that intelligently selects the most useful subsets of descriptors to run on subsets of supervoxels within the time budget. The objective is to maintain an accuracy as close as possible to the CRF model with no time bound, while remaining within the time budget. Our second contribution is the algorithm for learning a policy for the sparse selection of supervoxels and their descriptors for budgeted CRF inference. This learning algorithm is derived by casting our problem in the framework of Markov Decision Processes, and then instantiating a state-of-the-art policy learning algorithm known as Classification-Based Approximate Policy Iteration. Our experiments on multiple video datasets show that our learning approach and framework is able to significantly reduce computation time, and maintain competitive accuracy under varying budgets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
59,072
1303.3733
Adaptive Reduced-Rank MBER Linear Receive Processing for Large Multiuser MIMO Systems
In this work, we propose a novel adaptive reduced-rank strategy based on joint interpolation, decimation and filtering (JIDF) for large multiuser multiple-input multiple-output (MIMO) systems. In this scheme, a reduced-rank framework is proposed for linear receive processing and multiuser interference suppression according to the minimization of the bit error rate (BER) cost function. We present a structure with multiple processing branches that performs dimensionality reduction, where each branch contains a group of jointly optimized interpolation and decimation units, followed by a linear receive filter. We then develop stochastic gradient (SG) algorithms to compute the parameters of the interpolation and receive filters along with a low-complexity decimation technique. Simulation results are presented for time-varying environments and show that the proposed MBER-JIDF receive processing strategy and algorithms achieve a superior performance to existing methods at a reduced complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
22,948
2209.07618
Differentiable Bilevel Programming for Stackelberg Congestion Games
In a Stackelberg congestion game (SCG), a leader aims to maximize their own gain by anticipating and manipulating the equilibrium state at which the followers settle by playing a congestion game. Often formulated as bilevel programs, large-scale SCGs are well known for their intractability and complexity. Here, we attempt to tackle this computational challenge by marrying traditional methodologies with the latest differentiable programming techniques in machine learning. The core idea centers on replacing the lower-level equilibrium problem with a smooth evolution trajectory defined by the imitative logit dynamic (ILD), which we prove converges to the equilibrium of the congestion game under mild conditions. Building upon this theoretical foundation, we propose two new local search algorithms for SCGs. The first is a gradient descent algorithm that obtains the derivatives by unrolling ILD via differentiable programming. Thanks to the smoothness of ILD, the algorithm promises both efficiency and scalability. The second algorithm adds a heuristic twist by cutting short the followers' evolution trajectory. Behaviorally, this means that, instead of anticipating the followers' best response at equilibrium, the leader seeks to approximate that response by only looking ahead a limited number of steps. Our numerical experiments are carried out over various instances of classic SCG applications, ranging from toy benchmarks to large-scale real-world examples. The results show the proposed algorithms are reliable and scalable local solvers that deliver high-quality solutions with greater regularity and significantly less computational effort compared to the many incumbents included in our study.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
true
false
false
true
317,817
1910.10318
Winning the ICCV 2019 Learning to Drive Challenge
Autonomous driving has a significant impact on society. Predicting vehicle trajectories, specifically angle and speed, is important for safe and comfortable driving. This work focuses on fusing inputs from camera sensors and visual map data, which leads to a significant improvement in performance and plays a key role in winning the challenge. We use pre-trained CNNs for processing image frames, a neural network for fusing the image representation with visual map data, and train a sequence model for time series prediction. We achieve the best angle MSE and the best performance overall, winning the ICCV 2019 Learning to Drive challenge. We make our models and code publicly available.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
150,453
1602.06167
Deployment of 5G Networking Infrastructure with Machine Type Communication Considerations
Designing optimal strategies to deploy small cell stations is crucial to meet the quality-of-service requirements in next-generation cellular networks with constrained deployment costs. In this paper, a general deployment framework is proposed to jointly optimize the locations of backhaul aggregate nodes, small base stations, machine aggregators, and multi-hop wireless backhaul links to accommodate both human-type and machine-type communications. The goal is to provide deployment solutions with the best coverage performance under cost constraints. The formulated problem is shown to be a multi-objective integer program for which it is challenging to obtain the optimal solutions. To solve the problem, a heuristic algorithm is proposed by combining Lagrangian relaxation, the weighted sum method, the $\epsilon$-constraint method and tabu search to obtain both the solutions and bounds for the objective function. Simulation results show that the proposed framework can provide solutions with better performance compared with conventional deployment models in scenarios where available fiber connections are scarce. Furthermore, the gap between obtained solutions and the lower bounds is quite tight.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
52,335
2406.07657
OPTune: Efficient Online Preference Tuning
Reinforcement learning with human feedback (RLHF) is critical for aligning Large Language Models (LLMs) with human preference. Compared to the widely studied offline version of RLHF, e.g., direct preference optimization (DPO), recent works have shown that the online variants achieve even better alignment. However, online alignment requires on-the-fly generation of new training data, which is costly, hard to parallelize, and suffers from varying quality and utility. In this paper, we propose a more efficient data exploration strategy for online preference tuning (OPTune), which does not rely on human-curated or pre-collected teacher responses but dynamically samples informative responses for on-policy preference alignment. During data generation, OPTune only selects prompts whose (re)generated responses can potentially provide more informative and higher-quality training signals than the existing responses. In the training objective, OPTune reweights each generated response (pair) by its utility in improving the alignment so that learning can be focused on the most helpful samples. Throughout our evaluations, OPTune'd LLMs maintain the instruction-following benefits provided by standard preference tuning whilst enjoying 1.27-1.56x faster training speed due to the efficient data exploration strategy.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
463,144
0911.2284
A New Look at the Classical Entropy of Written English
A simple method for finding the entropy and redundancy of a reasonably long sample of English text by direct computer processing and from first principles according to Shannon theory is presented. As an example, results on the entropy of the English language have been obtained based on a total of 20.3 million characters of written English, considering symbols from one to five hundred characters in length. Besides a more realistic value of the entropy of English, a new perspective on some classic entropy-related concepts is presented. This method can also be extended to other Latin languages. Some implications for practical applications such as plagiarism-detection software, and the minimum number of words that should be used in social Internet network messaging, are discussed.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
4,922
1705.10480
Preliminary results on Ontology-based Open Data Publishing
Despite the current interest in Open Data publishing, a formal and comprehensive methodology supporting an organization in deciding which data to publish and carrying out precise procedures for publishing high-quality data is still missing. In this paper we argue that the Ontology-based Data Management paradigm can provide a formal basis for a principled approach to publish high quality, semantically annotated Open Data. We describe two main approaches to using an ontology for this endeavor, and then we present some technical results on one of the approaches, called bottom-up, where the specification of the data to be published is given in terms of the sources, and specific techniques allow deriving suitable annotations for interpreting the published data in the light of the ontology.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
false
74,408
2109.07563
Non-smooth Bayesian Optimization in Tuning Problems
Building surrogate models is one common approach when we attempt to learn unknown black-box functions. Bayesian optimization provides a framework which allows us to build surrogate models based on sequential samples drawn from the function and find the optimum. Tuning algorithmic parameters to optimize the performance of large, complicated "black-box" application codes is one important application, which aims at finding the optima of black-box functions. Within the Bayesian optimization framework, the Gaussian process model produces smooth or continuous sample paths. However, the black-box function in the tuning problem is often non-smooth. This difficult tuning problem is worsened by the fact that we usually have limited sequential samples from the black-box function. Motivated by these issues encountered in tuning, we propose a novel additive Gaussian process model called clustered Gaussian process (cGP), where the additive components are induced by clustering. This surrogate model is designed to capture the non-smoothness of the black-box function. In the examples we studied, the performance can be improved by as much as 90% across repeated experiments. In addition to an algorithm for constructing this model, we also apply the model to several artificial and real applications to evaluate it.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
255,564
1104.1556
Benchmarking the Quality of Diffusion-Weighted Images
We present a novel method that allows for measuring the quality of diffusion-weighted MR images dependent on the image resolution and the image noise. For this purpose, we introduce a new thresholding technique so that noise and the signal can automatically be estimated from a single data set. Thus, neither user interaction nor a double acquisition technique, which would require time-consuming geometrical registration, is needed. As a coarser image resolution or slice thickness leads to a higher signal-to-noise ratio (SNR), our benchmark determines a resolution-independent quality measure so that images with different resolutions can be adequately compared. To evaluate our method, a set of diffusion-weighted images from different vendors is used. It is shown that the quality can efficiently be determined and that the automatically computed SNR is comparable to the SNR measured in a manually selected region of interest.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
9,921
1508.01352
Resilience of Networks Formed of Interdependent Modular Networks
Many infrastructure networks have a modular structure and are also interdependent. While significant research has explored the resilience of interdependent networks, there has been no analysis of the effects of modularity. Here we develop a theoretical framework for attacks on interdependent modular networks and support our results by simulations. We focus on the case where each network has the same number of communities and the dependency links are restricted to be between pairs of communities of different networks. This is very realistic for infrastructure across cities. Each city has its own infrastructures and different infrastructures are dependent within the city. However, each infrastructure is connected within and between cities. For example, a power grid will connect many cities as will a communication network, yet a power station and communication tower that are interdependent will likely be in the same city. It has been shown that single networks are very susceptible to the failure of the interconnected nodes (between communities) (Shai et al.) and that attacks on these nodes are more crippling than attacks based on betweenness (da Cunha et al.). In our example of cities these nodes have long range links which are more likely to fail. For both treelike and looplike interdependent modular networks we find distinct regimes depending on the number of modules, $m$. (i) In the case where there are fewer modules with strong intraconnections, the system first separates into modules in an abrupt first-order transition and then each module undergoes a second percolation transition. (ii) When there are more modules with many interconnections between them, the system undergoes a single transition. Overall, we find that modular structure can influence the type of transitions observed in interdependent networks and should be considered in attempts to make interdependent networks more resilient.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
45,784
1704.06687
Scatteract: Automated extraction of data from scatter plots
Charts are an excellent way to convey patterns and trends in data, but they do not facilitate further modeling of the data or close inspection of individual data points. We present a fully automated system for extracting the numerical values of data points from images of scatter plots. We use deep learning techniques to identify the key components of the chart, and optical character recognition together with robust regression to map from pixels to the coordinate system of the chart. We focus on scatter plots with linear scales, which already have several interesting challenges. Previous work has done fully automatic extraction for other types of charts, but to our knowledge this is the first approach that is fully automatic for scatter plots. Our method performs well, achieving successful data extraction on 89% of the plots in our test set.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
72,204
2012.03049
Urban Heat Islands: Beating the Heat with Multi-Modal Spatial Analysis
In today's highly urbanized environment, the Urban Heat Island (UHI) phenomenon is increasingly prevalent, with surface temperatures in urbanized areas found to be much higher than in surrounding rural areas. Excessive levels of heat stress lead to problems at various levels, ranging from the individual to the world. At the individual level, UHI could lead to the human body being unable to cope and breaking down in terms of core functions. At the world level, UHI potentially contributes to global warming and adversely affects the environment. Using a multi-modal dataset comprising remote sensory imagery, geo-spatial data and population data, we propose a framework for investigating how UHI is affected by a city's urban form characteristics through the use of statistical modelling. Using Singapore as a case study, we demonstrate the usefulness of this framework and discuss our main findings in understanding the effects of UHI and urban form characteristics.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
209,966
1911.11214
Examining the Role of Clickbait Headlines to Engage Readers with Reliable Health-related Information
Clickbait headlines are frequently used to attract readers to read articles. Although this headline type has turned out to be a technique to engage readers with misleading items, it is still unknown whether the technique can be used to attract readers to reliable pieces. This study takes the opportunity to test its efficacy in engaging readers with reliable health articles. A set of online surveys would be conducted to test readers' engagement with and perception of clickbait headlines with reliable articles. After that, we would design an automation system to generate clickbait headlines to maximize user engagement.
false
false
false
false
false
true
false
false
true
false
false
false
false
true
false
false
false
false
155,049
2210.11006
SimpleClick: Interactive Image Segmentation with Simple Vision Transformers
Click-based interactive image segmentation aims at extracting objects with a limited number of user clicks. A hierarchical backbone is the de facto architecture for current methods. Recently, the plain, non-hierarchical Vision Transformer (ViT) has emerged as a competitive backbone for dense prediction tasks. This design allows the original ViT to be a foundation model that can be finetuned for downstream tasks without redesigning a hierarchical backbone for pretraining. Although this design is simple and has been proven effective, it has not yet been explored for interactive image segmentation. To fill this gap, we propose SimpleClick, the first interactive segmentation method that leverages a plain backbone. Based on the plain backbone, we introduce a symmetric patch embedding layer that encodes clicks into the backbone with minor modifications to the backbone itself. With the plain backbone pretrained as a masked autoencoder (MAE), SimpleClick achieves state-of-the-art performance. Remarkably, our method achieves 4.15 NoC@90 on SBD, improving 21.8% over the previous best result. Extensive evaluation on medical images demonstrates the generalizability of our method. We further develop an extremely tiny ViT backbone for SimpleClick and provide a detailed computational analysis, highlighting its suitability as a practical annotation tool.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,145
2408.09437
Hindi-BEIR : A Large Scale Retrieval Benchmark in Hindi
Given the large number of Hindi speakers worldwide, there is a pressing need for robust and efficient information retrieval systems for Hindi. Despite ongoing research, there is a lack of a comprehensive benchmark for evaluating retrieval models in Hindi. To address this gap, we introduce the Hindi version of the BEIR benchmark, which includes a subset of English BEIR datasets translated to Hindi, existing Hindi retrieval datasets, and synthetically created datasets for retrieval. The benchmark comprises $15$ datasets spanning $8$ distinct tasks. We evaluate state-of-the-art multilingual retrieval models on this benchmark to identify task and domain-specific challenges and their impact on retrieval performance. By releasing this benchmark and a set of relevant baselines, we enable researchers to understand the limitations and capabilities of current Hindi retrieval models, promoting advancements in this critical area. The datasets from Hindi-BEIR are publicly available.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
481,438
1110.6590
New constructions of WOM codes using the Wozencraft ensemble
In this paper we give several new constructions of WOM codes. The novelty in our constructions is the use of the so called Wozencraft ensemble of linear codes. Specifically, we obtain the following results. We give an explicit construction of a two-write Write-Once-Memory (WOM for short) code that approaches capacity, over the binary alphabet. More formally, for every \epsilon>0, 0<p<1 and n =(1/\epsilon)^{O(1/p\epsilon)} we give a construction of a two-write WOM code of length n and capacity H(p)+1-p-\epsilon. Since the capacity of a two-write WOM code is max_p (H(p)+1-p), we get a code that is \epsilon-close to capacity. Furthermore, encoding and decoding can be done in time O(n^2.poly(log n)) and time O(n.poly(log n)), respectively, and in logarithmic space. We obtain a new encoding scheme for 3-write WOM codes over the binary alphabet. Our scheme achieves rate 1.809-\epsilon, when the block length is exp(1/\epsilon). This gives a better rate than what could be achieved using previous techniques. We highlight a connection to linear seeded extractors for bit-fixing sources. In particular we show that obtaining such an extractor with seed length O(log n) can lead to improved parameters for 2-write WOM codes. We then give an application of existing constructions of extractors to the problem of designing encoding schemes for memory with defects.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
12,818
1901.02029
Spectra of random networks with arbitrary degrees
We derive a message passing method for computing the spectra of locally tree-like networks and an approximation to it that allows us to compute closed-form expressions or fast numerical approximations for the spectral density of random graphs with arbitrary node degrees -- the so-called configuration model. We find the latter approximation to work well for all but the sparsest of networks. We also derive bounds on the position of the band edges of the spectrum, which are important for identifying structural phase transitions in networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
118,104
2206.02270
Estimating building energy efficiency from street view imagery, aerial imagery, and land surface temperature data
Current methods to determine the energy efficiency of buildings require on-site visits of certified energy auditors which makes the process slow, costly, and geographically incomplete. To accelerate the identification of promising retrofit targets on a large scale, we propose to estimate building energy efficiency from widely available and remotely sensed data sources only, namely street view, aerial view, footprint, and satellite-borne land surface temperature (LST) data. After collecting data for almost 40,000 buildings in the United Kingdom, we combine these data sources by training multiple end-to-end deep learning models with the objective to classify buildings as energy efficient (EU rating A-D) or inefficient (EU rating E-G). After evaluating the trained models quantitatively as well as qualitatively, we extend our analysis by studying the predictive power of each data source in an ablation study. We find that the end-to-end deep learning model trained on all four data sources achieves a macro-averaged F1 score of 64.64% and outperforms the k-NN and SVM-based baseline models by 14.13 to 12.02 percentage points, respectively. Thus, this work shows the potential and complementary nature of remotely sensed data in predicting energy efficiency and opens up new opportunities for future work to integrate additional data sources.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
300,821
2211.13090
TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision
Video copy localization aims to precisely localize all the copied segments within a pair of untrimmed videos in video retrieval applications. Previous methods typically start from frame-to-frame similarity matrix generated by cosine similarity between frame-level features of the input video pair, and then detect and refine the boundaries of copied segments on similarity matrix under temporal constraints. In this paper, we propose TransVCL: an attention-enhanced video copy localization network, which is optimized directly from initial frame-level features and trained end-to-end with three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for similarity matrix generation, and a temporal alignment module for copied segments localization. In contrast to previous methods demanding the handcrafted similarity matrix, TransVCL incorporates long-range temporal information between feature sequence pair using self- and cross- attention layers. With the joint design and optimization of three components, the similarity matrix can be learned to present more discriminative copied patterns, leading to significant improvements over previous methods on segment-level labeled datasets (VCSL and VCDB). Besides the state-of-the-art performance in fully supervised setting, the attention architecture facilitates TransVCL to further exploit unlabeled or simply video-level labeled data. Additional experiments of supplementing video-level labeled datasets including SVD and FIVR reveal the high flexibility of TransVCL from full supervision to semi-supervision (with or without video-level annotation). Code is publicly available at https://github.com/transvcl/TransVCL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
332,343
2402.13418
Efficiently Predicting Mutational Effect on Homologous Proteins by Evolution Encoding
Predicting protein properties is paramount for biological and medical advancements. Current protein engineering mutates on a typical protein, called the wild-type, to construct a family of homologous proteins and study their properties. Yet, existing methods easily neglect subtle mutations, failing to capture the effect on the protein properties. To this end, we propose EvolMPNN, Evolution-aware Message Passing Neural Network, an efficient model to learn evolution-aware protein embeddings. EvolMPNN samples sets of anchor proteins, computes evolutionary information by means of residues and employs a differentiable evolution-aware aggregation scheme over these sampled anchors. This way, EvolMPNN can efficiently utilise a novel message-passing method to capture the mutation effect on proteins with respect to the anchor proteins. Afterwards, the aggregated evolution-aware embeddings are integrated with sequence embeddings to generate final comprehensive protein embeddings. Our model performs up to 6.4% better than state-of-the-art methods and attains 36X inference speedup in comparison with large pre-trained models. Code and models are available at https://github.com/zhiqiangzhongddu/EvolMPNN.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
431,231
2007.08988
Online Invariance Selection for Local Feature Descriptors
To be invariant, or not to be invariant: that is the question formulated in this work about local descriptors. A limitation of current feature descriptors is the trade-off between generalization and discriminative power: more invariance means less informative descriptors. We propose to overcome this limitation with a disentanglement of invariance in local descriptors and with an online selection of the most appropriate invariance given the context. Our framework consists of the joint learning of multiple local descriptors with different levels of invariance and of meta descriptors encoding the regional variations of an image. The similarity of these meta descriptors across images is used to select the right invariance when matching the local descriptors. Our approach, named Local Invariance Selection at Runtime for Descriptors (LISRD), enables descriptors to adapt to adverse changes in images, while remaining discriminative when invariance is not required. We demonstrate that our method can boost the performance of current descriptors and outperforms state-of-the-art descriptors in several matching tasks, when evaluated on challenging datasets with day-night illumination as well as viewpoint changes.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
187,805
1903.07973
Deep Eikonal Solvers
A deep learning approach to numerically approximate the solution to the Eikonal equation is introduced. The proposed method is built on the fast marching scheme which comprises two components: a local numerical solver and an update scheme. We replace the formulaic local numerical solver with a trained neural network to provide highly accurate estimates of local distances for a variety of different geometries and sampling conditions. Our learning approach generalizes not only to flat Euclidean domains but also to curved surfaces enabled by the incorporation of certain invariant features in the neural network architecture. We show a considerable gain in performance, validated by smaller errors and higher orders of accuracy for the numerical solutions of the Eikonal equation computed on different surfaces. The proposed approach leverages the approximation power of neural networks to enhance the performance of numerical algorithms, thereby connecting the somewhat disparate themes of numerical geometry and learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
124,746
2408.10195
SpaRP: Fast 3D Object Reconstruction and Pose Estimation from Sparse Views
Open-world 3D generation has recently attracted considerable attention. While many single-image-to-3D methods have yielded visually appealing outcomes, they often lack sufficient controllability and tend to produce hallucinated regions that may not align with users' expectations. In this paper, we explore an important scenario in which the input consists of one or a few unposed 2D images of a single object, with little or no overlap. We propose a novel method, SpaRP, to reconstruct a 3D textured mesh and estimate the relative camera poses for these sparse-view images. SpaRP distills knowledge from 2D diffusion models and finetunes them to implicitly deduce the 3D spatial relationships between the sparse views. The diffusion model is trained to jointly predict surrogate representations for camera poses and multi-view images of the object under known poses, integrating all information from the input sparse views. These predictions are then leveraged to accomplish 3D reconstruction and pose estimation, and the reconstructed 3D model can be used to further refine the camera poses of input views. Through extensive experiments on three datasets, we demonstrate that our method not only significantly outperforms baseline methods in terms of 3D reconstruction quality and pose prediction accuracy but also exhibits strong efficiency. It requires only about 20 seconds to produce a textured mesh and camera poses for the input views. Project page: https://chaoxu.xyz/sparp.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
true
481,756
2403.14548
DINO-Tracker: Taming DINO for Self-Supervised Point Tracking in a Single Video
We present DINO-Tracker -- a new framework for long-term dense tracking in video. The pillar of our approach is combining test-time training on a single video, with the powerful localized semantic features learned by a pre-trained DINO-ViT model. Specifically, our framework simultaneously adopts DINO's features to fit to the motion observations of the test video, while training a tracker that directly leverages the refined features. The entire framework is trained end-to-end using a combination of self-supervised losses, and regularization that allows us to retain and benefit from DINO's semantic prior. Extensive evaluation demonstrates that our method achieves state-of-the-art results on known benchmarks. DINO-tracker significantly outperforms self-supervised methods and is competitive with state-of-the-art supervised trackers, while outperforming them in challenging cases of tracking under long-term occlusions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
440,126
2205.07960
Meta AI at Arabic Hate Speech 2022: MultiTask Learning with Self-Correction for Hate Speech Classification
In this paper, we tackle the Arabic Fine-Grained Hate Speech Detection shared task and demonstrate significant improvements over reported baselines for its three subtasks. The tasks are to predict if a tweet contains (1) Offensive language; and whether it is considered (2) Hate Speech or not and if so, then predict the (3) Fine-Grained Hate Speech label from one of six categories. Our final solution is an ensemble of models that employs multitask learning and a self-consistency correction method yielding 82.7% on the hate speech subtask -- reflecting a 3.4% relative improvement compared to previous work.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
296,773
2409.13904
High-dimensional learning of narrow neural networks
Recent years have been marked by the fast-paced diversification and increasing ubiquity of machine learning applications. Yet, a firm theoretical understanding of the surprising efficiency of neural networks to learn from high-dimensional data still proves largely elusive. In this endeavour, analyses inspired by statistical physics have proven instrumental, enabling the tight asymptotic characterization of the learning of neural networks in high dimensions, for a broad class of solvable models. This manuscript reviews the tools and ideas underlying recent progress in this line of work. We introduce a generic model -- the sequence multi-index model -- which encompasses numerous previously studied models as special instances. This unified framework covers a broad class of machine learning architectures with a finite number of hidden units, including multi-layer perceptrons, autoencoders, attention mechanisms; and tasks, including (un)supervised learning, denoising, contrastive learning, in the limit of large data dimension, and comparably large number of samples. We explicate in full detail the analysis of the learning of sequence multi-index models, using statistical physics techniques such as the replica method and approximate message-passing algorithms. This manuscript thus provides a unified presentation of analyses reported in several previous works, and a detailed overview of central techniques in the field of statistical physics of machine learning. This review should be a useful primer for machine learning theoreticians curious of statistical physics approaches; it should also be of value to statistical physicists interested in the transfer of such ideas to the study of neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
490,224
2004.13558
A Graph-constrained Changepoint Detection Approach for ECG Segmentation
Electrocardiogram (ECG) signal is the most commonly used non-invasive tool in the assessment of cardiovascular diseases. Segmentation of the ECG signal to locate its constitutive waves, in particular the R-peaks, is a key step in ECG processing and analysis. Over the years, several segmentation and QRS complex detection algorithms have been proposed with different features; however, their performance highly depends on applying preprocessing steps which makes them unreliable in real-time data analysis of ambulatory care settings and remote monitoring systems, where the collected data is highly noisy. Moreover, some issues still remain with the current algorithms in regard to the diverse morphological categories for the ECG signal and their high computation cost. In this paper, we introduce a novel graph-based optimal changepoint detection (GCCD) method for reliable detection of R-peak positions without employing any preprocessing step. The proposed model guarantees to compute the globally optimal changepoint detection solution. It is also generic in nature and can be applied to other time-series biomedical signals. Based on the MIT-BIH arrhythmia (MIT-BIH-AR) database, the proposed method achieves overall sensitivity Sen = 99.76, positive predictivity PPR = 99.68, and detection error rate DER = 0.55 which are comparable to other state-of-the-art approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
174,588
2409.00005
Csi-LLM: A Novel Downlink Channel Prediction Method Aligned with LLM Pre-Training
Downlink channel temporal prediction is a critical technology in massive multiple-input multiple-output (MIMO) systems. However, existing methods that rely on fixed-step historical sequences significantly limit the accuracy, practicality, and scalability of channel prediction. Recent advances have shown that large language models (LLMs) exhibit strong pattern recognition and reasoning abilities over complex sequences. The challenge lies in effectively aligning wireless communication data with the modalities used in natural language processing to fully harness these capabilities. In this work, we introduce Csi-LLM, a novel LLM-powered downlink channel prediction technique that models variable-step historical sequences. To ensure effective cross-modality application, we align the design and training of Csi-LLM with the processing of natural language tasks, leveraging the LLM's next-token generation capability for predicting the next step in channel state information (CSI). Simulation results demonstrate the effectiveness of this alignment strategy, with Csi-LLM consistently delivering stable performance improvements across various scenarios and showing significant potential in continuous multi-step prediction.
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
484,710
2310.17848
Boosting Data Analytics With Synthetic Volume Expansion
Synthetic data generation, a cornerstone of Generative Artificial Intelligence, promotes a paradigm shift in data science by addressing data scarcity and privacy while enabling unprecedented performance. As synthetic data becomes more prevalent, concerns emerge regarding the accuracy of statistical methods when applied to synthetic data in contrast to raw data. This article explores the effectiveness of statistical methods on synthetic data and the privacy risks of synthetic data. Regarding effectiveness, we present the Synthetic Data Generation for Analytics framework. This framework applies statistical approaches to high-quality synthetic data produced by generative models like tabular diffusion models, which, initially trained on raw data, benefit from insights from pertinent studies through transfer learning. A key finding within this framework is the generational effect, which reveals that the error rate of statistical methods on synthetic data decreases with the addition of more synthetic data but may eventually rise or stabilize. This phenomenon, stemming from the challenge of accurately mirroring raw data distributions, highlights a "reflection point"-an ideal volume of synthetic data defined by specific error metrics. Through three case studies, sentiment analysis, predictive modeling of structured data, and inference in tabular data, we validate the superior performance of this framework compared to conventional approaches. On privacy, synthetic data imposes lower risks while supporting the differential privacy standard. These studies underscore synthetic data's untapped potential in redefining data science's landscape.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
403,311
1909.05134
Robot Risk-Awareness by Formal Risk Reasoning and Planning
This paper proposes a formal robot motion risk reasoning framework and develops a risk-aware path planner that minimizes the proposed risk. While robots locomoting in unstructured or confined environments face a variety of risks, existing notions of risk focus only on collision with obstacles. Such risk is currently only addressed in ad hoc manners. Without a formal definition, ill-supported properties, e.g. additive or Markovian, are simply assumed. Relying on an incomplete and inaccurate representation of risk, risk-aware planners use ad hoc risk functions or chance constraints to minimize risk. The former inevitably has low fidelity when modeling risk, while the latter conservatively generates feasible paths within a probability bound. Using propositional logic and probability theory, the proposed motion risk reasoning framework is formal. Building upon a universe of risk elements of interest, three major risk categories, i.e. locale-, action-, and traverse-dependent, are introduced. A risk-aware planner is also developed to plan minimum risk paths based on the newly proposed risk framework. Results of the risk reasoning and planning are validated in physical experiments in real-world unstructured or confined environments. With the proposed fundamental risk reasoning framework, safety of robot locomotion could be explicitly reasoned, quantified, and compared. The risk-aware planner finds safe paths in terms of the newly proposed risk framework and enables more risk-aware robot behavior in unstructured or confined environments.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
145,006
2401.04821
MoSECroT: Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer
Transformer-based pre-trained language models (PLMs) have achieved remarkable performance in various natural language processing (NLP) tasks. However, pre-training such models can take considerable resources that are almost only available to high-resource languages. On the contrary, static word embeddings are easier to train in terms of computing resources and the amount of data required. In this paper, we introduce MoSECroT (Model Stitching with Static Word Embeddings for Crosslingual Zero-shot Transfer), a novel and challenging task that is especially relevant to low-resource languages for which static word embeddings are available. To tackle the task, we present the first framework that leverages relative representations to construct a common space for the embeddings of a source language PLM and the static word embeddings of a target language. In this way, we can train the PLM on source-language training data and perform zero-shot transfer to the target language by simply swapping the embedding layer. However, through extensive experiments on two classification datasets, we show that although our proposed framework is competitive with weak baselines when addressing MoSECroT, it fails to achieve competitive results compared with some strong baselines. In this paper, we attempt to explain this negative result and provide several thoughts on possible improvement.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
420,547
2412.15687
GraphDOP: Towards skilful data-driven medium-range weather forecasts learnt and initialised directly from observations
We introduce GraphDOP, a new data-driven, end-to-end forecast system developed at the European Centre for Medium-Range Weather Forecasts (ECMWF) that is trained and initialised exclusively from Earth System observations, with no physics-based (re)analysis inputs or feedbacks. GraphDOP learns the correlations between observed quantities - such as brightness temperatures from polar orbiters and geostationary satellites - and geophysical quantities of interest (that are measured by conventional observations), to form a coherent latent representation of Earth System state dynamics and physical processes, and is capable of producing skilful predictions of relevant weather parameters up to five days into the future.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
519,235
2102.06838
Learning Variable Impedance Control via Inverse Reinforcement Learning for Force-Related Tasks
Many manipulation tasks require robots to interact with unknown environments. In such applications, the ability to adapt the impedance according to different task phases and environment constraints is crucial for safety and performance. Although many approaches based on deep reinforcement learning (RL) and learning from demonstration (LfD) have been proposed to obtain variable impedance skills on contact-rich manipulation tasks, these skills are typically task-specific and could be sensitive to changes in task settings. This paper proposes an inverse reinforcement learning (IRL) based approach to recover both the variable impedance policy and reward function from expert demonstrations. We explore different action spaces of the reward functions to achieve a more general representation of expert variable impedance skills. Experiments on two variable impedance tasks (Peg-in-Hole and Cup-on-Plate) were conducted in both simulations and on a real FANUC LR Mate 200iD/7L industrial robot. The comparison results with behavior cloning and force-based IRL proved that the learned reward function in the gain action space has better transferability than in the force space. Experiment videos are available at https://msc.berkeley.edu/research/impedance-irl.html.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
219,894
1809.06488
In-Session Personalization for Talent Search
Previous efforts in recommendation of candidates for talent search followed the general pattern of receiving an initial search criteria and generating a set of candidates utilizing a pre-trained model. Traditionally, the generated recommendations are final, that is, the list of potential candidates is not modified unless the user explicitly changes his/her search criteria. In this paper, we are proposing a candidate recommendation model which takes into account the immediate feedback of the user, and updates the candidate recommendations at each step. This setting also allows for very uninformative initial search queries, since we pinpoint the user's intent due to the feedback during the search session. To achieve our goal, we employ an intent clustering method based on topic modeling which separates the candidate space into meaningful, possibly overlapping, subsets (which we call intent clusters) for each position. On top of the candidate segments, we apply a multi-armed bandit approach to choose which intent cluster is more appropriate for the current session. We also present an online learning scheme which updates the intent clusters within the session, due to user feedback, to achieve further personalization. Our offline experiments as well as the results from the online deployment of our solution demonstrate the benefits of our proposed methodology.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
108,073
2011.13614
Multi-task MR Imaging with Iterative Teacher Forcing and Re-weighted Deep Learning
Noises, artifacts, and loss of information caused by the magnetic resonance (MR) reconstruction may compromise the final performance of the downstream applications. In this paper, we develop a re-weighted multi-task deep learning method to learn prior knowledge from the existing big dataset and then utilize them to assist simultaneous MR reconstruction and segmentation from the under-sampled k-space data. The multi-task deep learning framework is equipped with two network sub-modules, which are integrated and trained by our designed iterative teacher forcing scheme (ITFS) under the dynamic re-weighted loss constraint (DRLC). The ITFS is designed to avoid error accumulation by injecting the fully-sampled data into the training process. The DRLC is proposed to dynamically balance the contributions from the reconstruction and segmentation sub-modules so as to co-prompt the multi-task accuracy. The proposed method has been evaluated on two open datasets and one in vivo in-house dataset and compared to six state-of-the-art methods. Results show that the proposed method possesses encouraging capabilities for simultaneous and accurate MR reconstruction and segmentation.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
208,542
1712.08230
Block-Diagonal and LT Codes for Distributed Computing With Straggling Servers
We propose two coded schemes for the distributed computing problem of multiplying a matrix by a set of vectors. The first scheme is based on partitioning the matrix into submatrices and applying maximum distance separable (MDS) codes to each submatrix. For this scheme, we prove that up to a given number of partitions the communication load and the computational delay (not including the encoding and decoding delay) are identical to those of the scheme recently proposed by Li et al., based on a single, long MDS code. However, due to the use of shorter MDS codes, our scheme yields a significantly lower overall computational delay when the delay incurred by encoding and decoding is also considered. We further propose a second coded scheme based on Luby Transform (LT) codes under inactivation decoding. Interestingly, LT codes may reduce the delay over the partitioned scheme at the expense of an increased communication load. We also consider distributed computing under a deadline and show numerically that the proposed schemes outperform other schemes in the literature, with the LT code-based scheme yielding the best performance for the scenarios considered.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
87,154
1711.08362
RGB-D-based Human Motion Recognition with Deep Learning: A Survey
Human motion recognition is one of the most important branches of human-centered research activities. In recent years, motion recognition based on RGB-D data has attracted much attention. Along with the development in artificial intelligence, deep learning techniques have gained remarkable success in computer vision. In particular, convolutional neural networks (CNN) have achieved great success for image-based tasks, and recurrent neural networks (RNN) are renowned for sequence-based problems. Specifically, deep learning methods based on the CNN and RNN architectures have been adopted for motion recognition using RGB-D data. In this paper, a detailed overview of recent advances in RGB-D-based motion recognition is presented. The reviewed methods are broadly categorized into four groups, depending on the modality adopted for recognition: RGB-based, depth-based, skeleton-based and RGB+D-based. As a survey focused on the application of deep learning to RGB-D-based motion recognition, we explicitly discuss the advantages and limitations of existing techniques. Particularly, we highlight the methods of encoding spatial-temporal-structural information inherent in video sequences, and discuss potential directions for future research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
85,192
2011.08019
On the Effectiveness of Vision Transformers for Zero-shot Face Anti-Spoofing
The vulnerability of face recognition systems to presentation attacks has limited their application in security-critical scenarios. Automatic methods of detecting such malicious attempts are essential for the safe use of facial recognition technology. Although various methods have been suggested for detecting such attacks, most of them over-fit the training set and fail in generalizing to unseen attacks and environments. In this work, we use transfer learning from the vision transformer model for the zero-shot anti-spoofing task. The effectiveness of the proposed approach is demonstrated through experiments in publicly available datasets. The proposed approach outperforms the state-of-the-art methods in the zero-shot protocols in the HQ-WMCA and SiW-M datasets by a large margin. Besides, the model achieves a significant boost in cross-database performance as well.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
206,749
2502.10410
Auto-Evaluation: A Critical Measure in Driving Improvements in Quality and Safety of AI-Generated Lesson Resources
As a publicly funded body in the UK, Oak National Academy is in a unique position to innovate within this field as we have a comprehensive curriculum of approximately 13,000 open education resources (OER) for all National Curriculum subjects, designed and quality-assured by expert, human teachers. This has provided the corpus of content needed for building a high-quality AI-powered lesson planning tool, Aila, that is free to use and, therefore, accessible to all teachers across the country. Furthermore, using our evidence-informed curriculum principles, we have codified and exemplified each component of lesson design. To assess the quality of lessons produced by Aila at scale, we have developed an AI-powered auto-evaluation agent, facilitating informed improvements to enhance output quality. Through comparisons between human and auto-evaluations, we have begun to refine this agent further to increase its accuracy, measured by its alignment with an expert human evaluator. In this paper we present this iterative evaluation process through an illustrative case study focused on one quality benchmark - the level of challenge within multiple-choice quizzes. We also explore the contribution that this may make to similar projects and the wider sector.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
533,857
2111.03030
Exact Representation of Sparse Networks with Symmetric Nonnegative Embeddings
Many models for undirected graphs are based on factorizing the graph's adjacency matrix; these models find a vector representation of each node such that the predicted probability of a link between two nodes increases with the similarity (dot product) of their associated vectors. Recent work has shown that these models are unable to capture key structures in real-world graphs, particularly heterophilous structures, wherein links occur between dissimilar nodes. In contrast, a factorization with two vectors per node, based on logistic principal components analysis (LPCA), has been proven not only to represent such structures, but also to provide exact low-rank factorization of any graph with bounded max degree. However, this bound has limited applicability to real-world networks, which often have power law degree distributions with high max degree. Further, the LPCA model lacks interpretability since its asymmetric factorization does not reflect the undirectedness of the graph. We address these issues in two ways. First, we prove a new bound for the LPCA model in terms of arboricity rather than max degree; this greatly increases the bound's applicability to many sparse real-world networks. Second, we propose an alternative graph model whose factorization is symmetric and nonnegative, which allows for link predictions to be interpreted in terms of node clusters. We show that the bounds for exact representation in the LPCA model extend to our new model. On the empirical side, our model is optimized effectively on real-world graphs with gradient descent on a cross-entropy loss. We demonstrate its effectiveness on a variety of foundational tasks, such as community detection and link prediction.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
265,033
2007.00642
All in the Exponential Family: Bregman Duality in Thermodynamic Variational Inference
The recently proposed Thermodynamic Variational Objective (TVO) leverages thermodynamic integration to provide a family of variational inference objectives, which both tighten and generalize the ubiquitous Evidence Lower Bound (ELBO). However, the tightness of TVO bounds was not previously known, an expensive grid search was used to choose a "schedule" of intermediate distributions, and model learning suffered with ostensibly tighter bounds. In this work, we propose an exponential family interpretation of the geometric mixture curve underlying the TVO and various path sampling methods, which allows us to characterize the gap in TVO likelihood bounds as a sum of KL divergences. We propose to choose intermediate distributions using equal spacing in the moment parameters of our exponential family, which matches grid search performance and allows the schedule to adaptively update over the course of training. Finally, we derive a doubly reparameterized gradient estimator which improves model learning and allows the TVO to benefit from more refined bounds. To further contextualize our contributions, we provide a unified framework for understanding thermodynamic integration and the TVO using Taylor series remainders.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
185,169
1211.3233
New algorithm for footstep localization using seismic sensors in an indoor environment
In this study, we consider the use of seismic sensors for footstep localization in indoor environments. A popular strategy of localization is to use the measured differences in arrival times of source signals at multiple pairs of receivers. In the literature, most algorithms that are based on time differences of arrival (TDOA) assume that the propagation velocity is constant as a function of the source position, which is valid for air propagation or even for narrowband signals. However, a bounded medium such as a concrete slab (encountered in indoor environments) is usually dispersive and damped. In this study, we demonstrate that under such conditions the concrete slab can be assimilated to a thin plate; considering a Kelvin-Voigt damping model, we introduce the notion of {\em perceived propagation velocity}, which decreases as the source-sensor distance increases. This peculiar behaviour precludes any possibility of relying on existing localization methods in indoor environments. Therefore, a new localization algorithm adapted to a damped and dispersive medium is proposed, using only the sign of the measured TDOA (SO-TDOA). A simulation and some experimental results are included to assess the performance of this SO-TDOA algorithm.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
19,731
1802.05312
Learning Deep Disentangled Embeddings with the F-Statistic Loss
Deep-embedding methods aim to discover representations of a domain that make explicit the domain's class structure and thereby support few-shot learning. Disentangling methods aim to make explicit compositional or factorial structure. We combine these two active but independent lines of research and propose a new paradigm suitable for both goals. We propose and evaluate a novel loss function based on the $F$ statistic, which describes the separation of two or more distributions. By ensuring that distinct classes are well separated on a subset of embedding dimensions, we obtain embeddings that are useful for few-shot learning. By not requiring separation on all dimensions, we encourage the discovery of disentangled representations. Our embedding method matches or beats state-of-the-art, as evaluated by performance on recall@$k$ and few-shot learning tasks. Our method also obtains performance superior to a variety of alternatives on disentangling, as evaluated by two key properties of a disentangled representation: modularity and explicitness. The goal of our work is to obtain more interpretable, manipulable, and generalizable deep representations of concepts and categories.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
90,410
1912.04321
Learning to Code: Coded Caching via Deep Reinforcement Learning
We consider a system comprising a file library and a network with a server and multiple users equipped with cache memories. The system operates in two phases: a prefetching phase, where users load their caches with parts of contents from the library, and a delivery phase, where users request files from the library and the server needs to send the uncached parts of the requested files to the users. For the case where the users' caches are arbitrarily loaded, we propose an algorithm based on deep reinforcement learning to minimize the delay of delivering requested contents to the users in the delivery phase. Simulation results demonstrate that our proposed deep reinforcement learning agent learns a coded delivery strategy for sending the requests to the users, which slightly outperforms the state-of-the-art performance in terms of delivery delay, while drastically reducing the computational complexity.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
156,810
1910.10958
Malware Classification using Deep Learning based Feature Extraction and Wrapper based Feature Selection Technique
In the case of malware analysis, categorization of malicious files is an essential part after malware detection. Numerous static and dynamic techniques have been reported so far for categorizing malware. This research presents a deep learning-based malware detection (DLMD) technique based on static methods for classifying different malware families. The proposed DLMD technique uses both the byte and ASM files for feature engineering, thus classifying malware families. First, features are extracted from byte files using two different Deep Convolutional Neural Networks (CNN). After that, essential and discriminative opcode features are selected using a wrapper-based mechanism, where Support Vector Machine (SVM) is used as a classifier. The idea is to construct a hybrid feature space by combining the different feature spaces to overcome the shortcomings of any particular feature space and thus reduce the chances of missing malware. Finally, the hybrid feature space is used to train a Multilayer Perceptron, which classifies all nine different malware families. Experimental results show that the proposed DLMD technique achieves a log-loss of 0.09 for ten independent runs. Moreover, the proposed DLMD technique's performance is compared against different classifiers and shows its effectiveness in categorizing malware. The relevant code and database can be found at https://github.com/cyberhunters/Malware-Detection-Using-Machine-Learning.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
150,641
2005.08035
Single-participant structural connectivity matrices lead to greater accuracy in classification of participants than function in autism in MRI
In this work, we introduce a technique of deriving symmetric connectivity matrices from regional histograms of grey-matter volume estimated from T1-weighted MRIs. We then validated the technique by inputting the connectivity matrices into a convolutional neural network (CNN) to classify between participants with autism and age-, motion-, and intracranial-volume-matched controls from six different databases (29,288 total connectomes, mean age = 30.72, range 0.42-78.00, including 1555 subjects with autism). We compared this method to similar classifications of the same participants using fMRI connectivity matrices as well as univariate estimates of grey-matter volumes. We further applied graph-theoretical metrics on output class activation maps to identify areas of the matrices that the CNN preferentially used to make the classification, focusing particularly on hubs. Our results gave AUROCs of 0.7298 (69.71% accuracy) when classifying by only structural connectivity, 0.6964 (67.72% accuracy) when classifying by only functional connectivity, and 0.7037 (66.43% accuracy) when classifying by univariate grey matter volumes. Combining structural and functional connectivities gave an AUROC of 0.7354 (69.40% accuracy). Graph analysis of class activation maps revealed no distinguishable network patterns for functional inputs, but did reveal localized differences between groups in bilateral Heschl's gyrus and upper vermis for structural connectivity. This work provides a simple means of feature extraction for inputting large numbers of structural MRIs into machine learning models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,476
2110.09101
Vega: A 10-Core SoC for IoT End-Nodes with DNN Acceleration and Cognitive Wake-Up From MRAM-Based State-Retentive Sleep Mode
The Internet-of-Things requires end-nodes with ultra-low-power always-on capability for a long battery lifetime, as well as high performance, energy efficiency, and extreme flexibility to deal with complex and fast-evolving near-sensor analytics algorithms (NSAAs). We present Vega, an IoT end-node SoC capable of scaling from a 1.7 $\mathrm{\mu}$W fully retentive cognitive sleep mode up to 32.2 GOPS (@ 49.4 mW) peak performance on NSAAs, including mobile DNN inference, exploiting 1.6 MB of state-retentive SRAM, and 4 MB of non-volatile MRAM. To meet the performance and flexibility requirements of NSAAs, the SoC features 10 RISC-V cores: one core for SoC and IO management and a 9-cores cluster supporting multi-precision SIMD integer and floating-point computation. Vega achieves SoA-leading efficiency of 615 GOPS/W on 8-bit INT computation (boosted to 1.3TOPS/W for 8-bit DNN inference with hardware acceleration). On floating-point (FP) computation, it achieves SoA-leading efficiency of 79 and 129 GFLOPS/W on 32- and 16-bit FP, respectively. Two programmable machine-learning (ML) accelerators boost energy efficiency in cognitive sleep and active states, respectively.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
261,677
2402.09136
DolphCoder: Echo-Locating Code Large Language Models with Diverse and Multi-Objective Instruction Tuning
Code Large Language Models (Code LLMs) have demonstrated outstanding performance in code-related tasks. Several instruction tuning approaches have been proposed to boost the code generation performance of pre-trained Code LLMs. In this paper, we introduce a diverse instruction model (DolphCoder) with self-evaluation for code generation. It learns diverse instruction targets and combines a code evaluation objective to enhance its code generation ability. Our model achieves superior performance on the HumanEval and MBPP benchmarks, demonstrating new insights for future code instruction tuning work. Our key findings are: (1) Augmenting more diverse responses with distinct reasoning paths increases the code capability of LLMs. (2) Improving a model's ability to evaluate the correctness of code solutions also enhances its ability to create them.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
429,385
2201.10449
An adaptive closed-loop ECoG decoder for long-term and stable bimanual control of an exoskeleton by a tetraplegic
Brain-computer interfaces (BCIs) still face many challenges to step out of laboratories to be used in real-life applications. A key one lies in the high-performance control of diverse effectors for complex tasks, using chronic and safe recorders. This control must be robust over time and of high decoding performance without continuous recalibration of the decoders. In this article, asynchronous control of an exoskeleton by a tetraplegic patient using a chronically implanted epidural electrocorticography (EpiCoG) implant is demonstrated. For this purpose, an adaptive online tensor-based decoder, the Recursive Exponentially Weighted Markov-Switching multi-Linear Model (REW-MSLM), was developed. We demonstrated over a period of 6 months the stability of the 8-dimensional alternative bimanual control of the exoskeleton and its virtual avatar using REW-MSLM without recalibration of the decoder.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
276,999
2102.04680
Tr\"aumerAI: Dreaming Music with StyleGAN
The goal of this paper is to generate a visually appealing video that responds to music with a neural network so that each frame of the video reflects the musical characteristics of the corresponding audio clip. To achieve the goal, we propose a neural music visualizer directly mapping deep music embeddings to style embeddings of StyleGAN, named Tr\"aumerAI, which consists of a music auto-tagging model using short-chunk CNN and StyleGAN2 pre-trained on the WikiArt dataset. Rather than establishing an objective metric between musical and visual semantics, we manually labeled the pairs in a subjective manner. An annotator listened to 100 music clips, each 10 seconds long, and selected an image that suits the music among the 200 StyleGAN-generated examples. Based on the collected data, we trained a simple transfer function that converts an audio embedding to a style embedding. The generated examples show that the mapping between audio and video makes a certain level of intra-segment similarity and inter-segment dissimilarity.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,188
2001.10330
Parameter Calibration in Crowd Simulation Models using Approximate Bayesian Computation
Simulation models for pedestrian crowds are a ubiquitous tool in research and industry. It is crucial that the parameters of these models are calibrated carefully and ultimately it will be of interest to compare competing models to decide which model is best suited for a particular purpose. In this contribution, I demonstrate how Approximate Bayesian Computation (ABC), which is already a popular tool in other areas of science, can be used for model fitting and model selection in a pedestrian dynamics context. I fit two different models for pedestrian dynamics to data on a crowd passing in one direction through a bottleneck. One model describes movement in continuous-space, the other model is a cellular automaton and thus describes movement in discrete-space. In addition, I compare models to data using two metrics. The first is based on egress times and the second on the velocity of pedestrians in front of the bottleneck. My results show that while model fitting is successful, a substantial degree of uncertainty about the value of some model parameters remains after model fitting. Importantly, the choice of metric in model fitting can influence parameter estimates. Model selection is inconclusive for the egress time metric but supports the continuous-space model for the velocity-based metric. These findings show that ABC is a flexible approach and highlight the difficulties associated with model fitting and model selection for pedestrian dynamics. ABC requires many simulation runs and choosing appropriate metrics for comparing data to simulations requires careful attention. Despite this, I suggest ABC is a promising tool, because it is versatile and easily implemented for the growing number of openly available crowd simulators and data sets.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
161,792
2204.04013
Mel-spectrogram features for acoustic vehicle detection and speed estimation
The paper addresses acoustic vehicle detection and speed estimation from single sensor measurements. We predict the vehicle's pass-by instant by minimizing clipped vehicle-to-microphone distance, which is predicted from the mel-spectrogram of input audio, in a supervised learning approach. In addition, mel-spectrogram-based features are used directly for vehicle speed estimation, without introducing any intermediate features. The results show that the proposed features can be used for accurate vehicle detection and speed estimation, with an average error of 7.87 km/h. If we formulate speed estimation as a classification problem, with a 10 km/h discretization interval, the proposed method attains the average accuracy of 48.7% for correct class prediction and 91.0% when an offset of one class is allowed. The proposed method is evaluated on a dataset of 304 urban-environment on-field recordings of ten different vehicles.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
290,507
1803.00439
Synchronization and Aggregation of Nonlinear Power Systems with Consideration of Bus Network Structures
We study nonlinear power systems consisting of generators, generator buses, and non-generator buses. First, looking at a generator and its bus' variables jointly, we introduce a synchronization concept for a pair of such joint generators and buses. We show that this concept is related to graph symmetry. Next, we extend, in two ways, the synchronization from a pair to a partition of all generators in the networks and show that they are related to either graph symmetry or equitable partitions. Finally, we show how an exact reduced model can be obtained by aggregating the generators and associated buses in the network when the original system is synchronized with respect to a partition, provided that the initial condition respects the partition. Additionally, the aggregation-based reduced model is again a power system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
91,679
2407.16166
Robust Privacy Amidst Innovation with Large Language Models Through a Critical Assessment of the Risks
This study examines integrating EHRs and NLP with large language models (LLMs) to improve healthcare data management and patient care. It focuses on using advanced models to create secure, HIPAA-compliant synthetic patient notes for biomedical research. The study used de-identified and re-identified MIMIC III datasets with GPT-3.5, GPT-4, and Mistral 7B to generate synthetic notes. Text generation employed templates and keyword extraction for contextually relevant notes, with one-shot generation for comparison. Privacy assessment checked PHI occurrence, while text utility was tested using an ICD-9 coding task. Text quality was evaluated with ROUGE and cosine similarity metrics to measure semantic similarity with source notes. Analysis of PHI occurrence and text utility via the ICD-9 coding task showed that the keyword-based method had low risk and good performance. One-shot generation showed the highest PHI exposure and PHI co-occurrence, especially in geographic location and date categories. The Normalized One-shot method achieved the highest classification accuracy. Privacy analysis revealed a critical balance between data utility and privacy protection, influencing future data use and sharing. Re-identified data consistently outperformed de-identified data. This study demonstrates the effectiveness of keyword-based methods in generating privacy-protecting synthetic clinical notes that retain data usability, potentially transforming clinical data-sharing practices. The superior performance of re-identified over de-identified data suggests a shift towards methods that enhance utility and privacy by using dummy PHIs to perplex privacy attacks.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
475,485
2205.11374
Looking for a Handsome Carpenter! Debiasing GPT-3 Job Advertisements
The growing capability and availability of generative language models has enabled a wide range of new downstream tasks. Academic research has identified, quantified and mitigated biases present in language models but is rarely tailored to downstream tasks where wider impact on individuals and society can be felt. In this work, we leverage one popular generative language model, GPT-3, with the goal of writing unbiased and realistic job advertisements. We first assess the bias and realism of zero-shot generated advertisements and compare them to real-world advertisements. We then evaluate prompt-engineering and fine-tuning as debiasing methods. We find that prompt-engineering with diversity-encouraging prompts gives no significant improvement to bias, nor realism. Conversely, fine-tuning, especially on unbiased real advertisements, can improve realism and reduce bias.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
298,113
0710.3621
Numerical removal of water-vapor effects from THz-TDS measurements
One source of disturbance in a pulsed T-ray signal is attributed to ambient water vapor. Water molecules in the gas phase selectively absorb T-rays at discrete frequencies corresponding to their molecular rotational transitions. This results in prominent resonances spread over the T-ray spectrum, and in the time domain the T-ray signal is observed as fluctuations after the main pulse. These effects are generally undesired, since they may mask critical spectroscopic data. So, ambient water vapor is commonly removed from the T-ray path by using a closed chamber during the measurement. Yet, in some applications a closed chamber is not applicable. This situation, therefore, motivates the need for another method to reduce these unwanted artifacts. This paper presents a study on a computational means to address the problem. Initially, a complex frequency response of water vapor is modeled from a spectroscopic catalog. Using a deconvolution technique, together with fine tuning of the strength of each resonance, parts of the water-vapor response are removed from a measured T-ray signal, with minimal signal distortion.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
801
2109.12985
Synerise at RecSys 2021: Twitter user engagement prediction with a fast neural model
In this paper we present our 2nd place solution to ACM RecSys 2021 Challenge organized by Twitter. The challenge aims to predict user engagement for a set of tweets, offering an exceptionally large data set of 1 billion data points sampled from over four weeks of real Twitter interactions. Each data point contains multiple sources of information, such as tweet text along with engagement features, user features, and tweet features. The challenge brings the problem close to a real production environment by introducing strict latency constraints in the model evaluation phase: the average inference time for single tweet engagement prediction is limited to 6ms on a single CPU core with 64GB memory. Our proposed model relies on extensive feature engineering performed with methods such as the Efficient Manifold Density Estimator (EMDE) - our previously introduced algorithm based on Locality Sensitive Hashing method, and novel Fourier Feature Encoding, among others. In total, we create numerous features describing a user's Twitter account status and the content of a tweet. In order to adhere to the strict latency constraints, the underlying model is a simple residual feed-forward neural network. The system is a variation of our previous methods which proved successful in KDD Cup 2021, WSDM Challenge 2021, and SIGIR eCom Challenge 2020. We release the source code at: https://github.com/Synerise/recsys-challenge-2021
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
257,481
2404.07593
Diffusion posterior sampling for simulation-based inference in tall data settings
Determining which parameters of a non-linear model best describe a set of experimental data is a fundamental problem in science, and it has gained much traction lately with the rise of complex large-scale simulators. The likelihood of such models is typically intractable, which is why classical MCMC methods cannot be used. Simulation-based inference (SBI) stands out in this context by only requiring a dataset of simulations to train deep generative models capable of approximating the posterior distribution that relates input parameters to a given observation. In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model. The proposed method is built upon recent developments from the flourishing score-based diffusion literature and makes it possible to estimate the tall data posterior distribution while simply using information from a score network trained for a single context observation. We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
445,899
2010.16185
Optimization Algorithm-Based Approach for Modelling Large Deflection of Cantilever Beam Subjected to Tip Load
Beam mechanism and beam theory have attracted substantial attention from researchers, as they have been widely used in many fields such as compliant mechanisms and soft robots. The modeling of beam mechanisms becomes complicated due to the geometric nonlinearity that is proved to be significant with large deflection. A new method, called optimization algorithm-based approach (OABA), is proposed to predict the large deflection of cantilever beams, in which an optimization algorithm is exploited to find the locus of the beam tip. With the derived locus of the beam tip, the deflection curve of the cantilever beam can be calculated. The optimization algorithm in this paper is embodied in a particle swarm optimization (PSO) algorithm. Experimental results show that the proposed method can precisely predict the deflection of the uniform and non-uniform cantilever beams. The maximum error is limited to 4.35% when the normalized maximum transverse deflection reaches 0.75. Given that, the proposed OABA would be a universal approach to model the large deflection of cantilever beams subjected to tip loads.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
203,990
2403.03359
RACE-SM: Reinforcement Learning Based Autonomous Control for Social On-Ramp Merging
Autonomous parallel-style on-ramp merging in human controlled traffic continues to be an existing issue for autonomous vehicle control. Existing non-learning based solutions for vehicle control rely primarily on rules and optimization. These methods have been seen to present significant challenges. Recent advancements in Deep Reinforcement Learning have shown promise and have received significant academic interest; however, the available learning based approaches show inadequate attention to other highway vehicles and often rely on inaccurate road traffic assumptions. In addition, the parallel-style case is rarely considered. A novel learning based model for acceleration and lane change decision making that explicitly considers the utility to both the ego vehicle and its surrounding vehicles, which may be cooperative or uncooperative, to produce behaviour that is socially acceptable is proposed. The novel reward function makes use of Social Value Orientation to weight the vehicle's level of social cooperation and is divided into ego vehicle and surrounding vehicle utility, which are weighted according to the model's designated Social Value Orientation. A two-lane highway with an on-ramp divided into a taper-style and parallel-style section is considered. Simulation results indicated the importance of considering surrounding vehicles in reward function design and show that the proposed model matches or surpasses those in the literature in terms of collisions while also introducing socially courteous behaviour, avoiding near misses and anti-social behaviour through direct consideration of the effect of merging on surrounding vehicles.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
435,157
2204.11397
Tensorial tomographic differential phase-contrast microscopy
We report Tensorial Tomographic Differential Phase-Contrast microscopy (T2DPC), a quantitative label-free tomographic imaging method for simultaneous measurement of phase and anisotropy. T2DPC extends differential phase-contrast microscopy, a quantitative phase imaging technique, to highlight the vectorial nature of light. The method solves for the permittivity tensor of anisotropic samples from intensity measurements acquired with a standard microscope equipped with an LED matrix, a circular polarizer, and a polarization-sensitive camera. We demonstrate accurate volumetric reconstructions of refractive index, birefringence, and orientation for various validation samples, and show that the reconstructed polarization structures of a biological specimen are predictive of pathology.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
293,128
2206.03799
Dyna-DM: Dynamic Object-aware Self-supervised Monocular Depth Maps
Self-supervised monocular depth estimation has been a subject of intense study in recent years, because of its applications in robotics and autonomous driving. Much of the recent work focuses on improving depth estimation by increasing architecture complexity. This paper shows that state-of-the-art performance can also be achieved by improving the learning process rather than increasing model complexity. More specifically, we propose (i) disregarding small potentially dynamic objects when training, and (ii) employing an appearance-based approach to separately estimate object pose for truly dynamic objects. We demonstrate that these simplifications reduce GPU memory usage by 29% and result in qualitatively and quantitatively improved depth maps. The code is available at https://github.com/kieran514/Dyna-DM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
301,411
1811.10728
Optimization of Information-Seeking Dialogue Strategy for Argumentation-Based Dialogue System
Argumentation-based dialogue systems, which can handle and exchange arguments through dialogue, have been widely researched. It is required that these systems have sufficient supporting information to argue their claims rationally; however, the systems often do not have enough of such information in realistic situations. One way to fill in the gap is acquiring such missing information from dialogue partners (information-seeking dialogue). Existing information-seeking dialogue systems are based on handcrafted dialogue strategies that exhaustively examine missing information. However, such strategies are not specialized in collecting information for constructing rational arguments. Moreover, the number of the system's inquiry candidates grows in accordance with the size of the argument set that the system deals with. In this paper, we formalize the process of information-seeking dialogue as Markov decision processes (MDPs) and apply deep reinforcement learning (DRL) for automatically optimizing a dialogue strategy. By utilizing DRL, our dialogue strategy can successfully minimize the objective function, namely the number of turns it takes for our system to collect necessary information in a dialogue. We conducted dialogue experiments using two datasets from different domains of argumentative dialogue. Experimental results show that the proposed formalization based on MDP works well, and the policy optimized by DRL outperformed existing heuristic dialogue strategies.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
114,560
1803.09353
Stochastic bandits robust to adversarial corruptions
We introduce a new model of stochastic bandits with adversarial corruptions which aims to capture settings where most of the input follows a stochastic pattern but some fraction of it can be adversarially changed to trick the algorithm, e.g., click fraud, fake reviews and email spam. The goal of this model is to encourage the design of bandit algorithms that (i) work well in mixed adversarial and stochastic models, and (ii) whose performance deteriorates gracefully as we move from fully stochastic to fully adversarial models. In our model, the rewards for all arms are initially drawn from a distribution and are then altered by an adaptive adversary. We provide a simple algorithm whose performance gracefully degrades with the total corruption the adversary injected in the data, measured by the sum across rounds of the biggest alteration the adversary made in the data in that round; this total corruption is denoted by $C$. Our algorithm provides a guarantee that retains the optimal guarantee (up to a logarithmic term) if the input is stochastic and whose performance degrades linearly to the amount of corruption $C$, while crucially being agnostic to it. We also provide a lower bound showing that this linear degradation is necessary if the algorithm achieves optimal performance in the stochastic setting (the lower bound works even for a known amount of corruption, a special case in which our algorithm achieves optimal performance without the extra logarithm).
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
93,476
2311.15582
Lightly Weighted Automatic Audio Parameter Extraction for the Quality Assessment of Consensus Auditory-Perceptual Evaluation of Voice
The Consensus Auditory-Perceptual Evaluation of Voice is a widely employed tool in clinical voice quality assessment that is significant for streamlining communication among clinical professionals and benchmarking for the determination of further treatment. Currently, because the assessment relies on experienced clinicians, it tends to be inconsistent and thus difficult to standardize. To address this problem, we propose to leverage lightly weighted automatic audio parameter extraction to increase the clinical relevance, reduce the complexity, and enhance the interpretability of voice quality assessment. The proposed method utilizes age, sex, and five audio parameters: jitter, absolute jitter, shimmer, harmonic-to-noise ratio (HNR), and zero crossing. A classical machine learning approach is employed. The result reveals that our approach performs similarly to state-of-the-art (SOTA) methods and outperforms the latent representation obtained by using popular audio pre-trained models. This approach provides insights into the feasibility of different feature extraction approaches for voice evaluation. Audio parameters such as jitter and the HNR are proven to be suitable for characterizing voice quality attributes, such as roughness and strain. Conversely, pre-trained models exhibit limitations in effectively addressing noise-related scorings. This study contributes toward more comprehensive and precise voice quality evaluations, achieved by comprehensively exploring diverse assessment methodologies.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
410,568
1804.00968
In-depth Question classification using Convolutional Neural Networks
Convolutional neural networks for computer vision are fairly intuitive. In a typical CNN used in image classification, the first layers learn edges, and the following layers learn some filters that can identify an object. But CNNs for Natural Language Processing are not used often and are not completely intuitive. We have a good idea about what the convolution filters learn for the task of text classification, and to that end, we propose a neural network structure that will be able to give good results in less time. We will be using convolutional neural networks to predict the primary or broader topic of a question, and then use separate networks for each of these predicted topics to accurately classify their sub-topics.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
94,154
2205.00763
Data-driven emotional body language generation for social robotics
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration, since humans attribute, and perhaps subconsciously anticipate, such traces to perceive an agent as engaging, trustworthy, and socially present. Robotic emotional body language needs to be believable, nuanced and relevant to the context. We implemented a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions and can generate numerous new ones of similar believability and lifelikeness. The framework uses the Conditional Variational Autoencoder model and a sampling approach based on the geometric properties of the model's latent space to condition the generative process on targeted levels of valence and arousal. The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones, and the emotional conditioning was adequately differentiable between most levels except the pairs of neutral-positive valence and low-medium arousal. Furthermore, an exploratory analysis of the results reveals a possible impact of the conditioning on the perceived dominance of the robot, as well as on the participants' attention.
true
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
294,366
2206.07162
Category-Agnostic 6D Pose Estimation with Conditional Neural Processes
We present a novel meta-learning approach for 6D pose estimation on unknown objects. In contrast to "instance-level" and "category-level" pose estimation methods, our algorithm learns object representation in a category-agnostic way, which endows it with strong generalization capabilities across object categories. Specifically, we employ a neural process-based meta-learning approach to train an encoder to capture texture and geometry of an object in a latent representation, based on very few RGB-D images and ground-truth keypoints. The latent representation is then used by a simultaneously meta-trained decoder to predict the 6D pose of the object in new images. Furthermore, we propose a novel geometry-aware decoder for the keypoint prediction using a Graph Neural Network (GNN), which explicitly takes geometric constraints specific to each object into consideration. To evaluate our algorithm, extensive experiments are conducted on the LineMOD dataset, and on our new fully-annotated synthetic datasets generated from Multiple Categories in Multiple Scenes (MCMS). Experimental results demonstrate that our model performs well on unseen objects with very different shapes and appearances. Remarkably, our model also shows robust performance on occluded scenes although trained fully on data without occlusion. To our knowledge, this is the first work exploring cross-category-level 6D pose estimation.
false
false
false
false
true
false
true
true
false
false
false
true
false
false
false
false
false
false
302,621
2205.10355
Deep Quality Estimation: Creating Surrogate Models for Human Quality Ratings
Human ratings are abstract representations of segmentation quality. To approximate human quality ratings on scarce expert data, we train surrogate quality estimation models. We evaluate on a complex multi-class segmentation problem, specifically glioma segmentation, following the BraTS annotation protocol. The training data features quality ratings from 15 expert neuroradiologists on a scale ranging from 1 to 6 stars for various computer-generated and manual 3D annotations. Even though the networks operate on 2D images and with scarce training data, we can approximate segmentation quality within a margin of error comparable to human intra-rater reliability. Segmentation quality prediction has broad applications. While an understanding of segmentation quality is imperative for successful clinical translation of automatic segmentation quality algorithms, it can play an essential role in training new segmentation models. Due to the split-second inference times, it can be directly applied within a loss function or as a fully-automatic dataset curation mechanism in a federated learning setting.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
297,668
2111.14447
ZeroCap: Zero-Shot Image-to-Text Generation for Visual-Semantic Arithmetic
Recent text-to-image matching models apply contrastive learning to large corpora of uncurated pairs of images and sentences. While such models can provide a powerful score for matching and subsequent zero-shot tasks, they are not capable of generating a caption given an image. In this work, we repurpose such models to generate a descriptive text given an image at inference time, without any further training or tuning steps. This is done by combining the visual-semantic model with a large language model, benefiting from the knowledge in both web-scale models. The resulting captions are much less restrictive than those obtained by supervised captioning methods. Moreover, as a zero-shot learning method, it is extremely flexible and we demonstrate its ability to perform image arithmetic in which the inputs can be either images or text, and the output is a sentence. This enables novel high-level vision capabilities such as comparing two images or solving visual analogy tests. Our code is available at: https://github.com/YoadTew/zero-shot-image-to-text.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
false
268,602
2012.07462
Learned Video Codec with Enriched Reconstruction for CLIC P-frame Coding
This paper proposes a learning-based video codec, specifically used for the Challenge on Learned Image Compression (CLIC, CVPR Workshop) 2020 P-frame coding. More specifically, we designed a compressor network with Refine-Net for coding residual signals and motion vectors. Also, for motion estimation, we introduced a hierarchical, attention-based ME-Net. To verify our design, we conducted an extensive ablation study on our modules and different input formats. Our video codec demonstrates its performance by using the perfect reference frame at the decoder side specified by the CLIC P-frame Challenge. The experimental result shows that our proposed codec is very competitive with the Challenge top performers in terms of quality metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
211,463