id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2106.00839 | Algorithmic Insurance | As machine learning algorithms start to get integrated into the decision-making process of companies and organizations, insurance products are being developed to protect their owners from liability risk. Algorithmic liability differs from human liability since it is based on a single model compared to multiple heterogeneous decision-makers and its performance is known a priori for a given set of data. Traditional actuarial tools for human liability do not take these properties into consideration, primarily focusing on the distribution of historical claims. We propose, for the first time, a quantitative framework to estimate the risk exposure of insurance contracts for machine-driven liability, introducing the concept of algorithmic insurance. Specifically, we present an optimization formulation to estimate the risk exposure of a binary classification model given a pre-defined range of premiums. We adjust the formulation to account for uncertainty in the resulting losses using robust optimization. Our approach outlines how properties of the model, such as accuracy, interpretability, and generalizability, can influence the insurance contract evaluation. To showcase a practical implementation of the proposed framework, we present a case study of medical malpractice in the context of breast cancer detection. Our analysis focuses on measuring the effect of the model parameters on the expected financial loss and identifying the aspects of algorithmic performance that predominantly affect the risk of the contract. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 238,271 |
1812.09338 | Position Bias Estimation for Unbiased Learning-to-Rank in eCommerce Search | The Unbiased Learning-to-Rank framework has been recently proposed as a general approach to systematically remove biases, such as position bias, from learning-to-rank models. The method takes two steps - estimating click propensities and using them to train unbiased models. Most common methods proposed in the literature for estimating propensities involve some degree of intervention in the live search engine. An alternative approach proposed recently uses an Expectation Maximization (EM) algorithm to estimate propensities by using ranking features for estimating relevances. In this work we propose a novel method to directly estimate propensities which does not use any intervention in live search or rely on modeling relevance. Rather, we take advantage of the fact that the same query-document pair may naturally change ranks over time. This typically occurs for eCommerce search because of change of popularity of items over time, existence of time dependent ranking features, or addition or removal of items to the index (an item getting sold or a new item being listed). However, our method is general and can be applied to any search engine for which the rank of the same document may naturally change over time for the same query. We derive a simple likelihood function that depends on propensities only, and by maximizing the likelihood we are able to get estimates of the propensities. We apply this method to eBay search data to estimate click propensities for web and mobile search and compare these with estimates using the EM method. We also use simulated data to show that the method gives reliable estimates of the "true" simulated propensities. Finally, we train an unbiased learning-to-rank model for eBay search using the estimated propensities and show that it outperforms both baselines - one without position bias correction and one with position bias correction using the EM method. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 117,133 |
2307.10314 | Mood Classification of Bangla Songs Based on Lyrics | Music can evoke various emotions, and with the advancement of technology, it has become more accessible to people. Bangla music, which portrays different human emotions, lacks sufficient research. The authors of this article aim to analyze Bangla songs and classify their moods based on the lyrics. To achieve this, this research compiled a dataset of 4000 Bangla song lyrics and genres, and used Natural Language Processing and the BERT algorithm to analyze the data. Among the 4000 songs, 1513 represent the sad mood, 1362 the romantic mood, 886 happiness, and the remaining 239 are classified as relaxation. By embedding the lyrics of the songs, the authors have classified the songs into four moods: Happy, Sad, Romantic, and Relaxed. This research is crucial as it enables a multi-class classification of songs' moods, making the music more relatable to people's emotions. The article presents the automated result of the four moods accurately derived from the song lyrics. | false | false | true | false | false | true | true | false | true | false | false | false | false | false | false | false | false | false | 380,514 |
2408.04723 | Survey: Transformer-based Models in Data Modality Conversion | Transformers have made significant strides across various artificial intelligence domains, including natural language processing, computer vision, and audio processing. This success has naturally garnered considerable interest from both academic and industry researchers. Consequently, numerous Transformer variants (often referred to as X-formers) have been developed for these fields. However, a thorough and systematic review of these modality-specific conversions remains lacking. Modality Conversion involves the transformation of data from one form of representation to another, mimicking the way humans integrate and interpret sensory information. This paper provides a comprehensive review of transformer-based models applied to the primary modalities of text, vision, and speech, discussing their architectures, conversion methodologies, and applications. By synthesizing the literature on modality conversion, this survey aims to underline the versatility and scalability of transformers in advancing AI-driven content generation and understanding. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 479,503 |
1611.00538 | An application of incomplete pairwise comparison matrices for ranking top tennis players | Pairwise comparison is an important tool in multi-attribute decision making. Pairwise comparison matrices (PCM) have been applied for ranking criteria and for scoring alternatives according to a given criterion. Our paper presents a special application of incomplete PCMs: ranking of professional tennis players based on their results against each other. The selected 25 players have been on the top of the ATP rankings for a shorter or longer period in the last 40 years. Some of them have never met on the court. One of the aims of the paper is to provide a ranking of the selected players; however, the analysis of incomplete pairwise comparison matrices is also a focus. The eigenvector method and the logarithmic least squares method were used to calculate weights from incomplete PCMs. In our results the top three players of four decades were Nadal, Federer and Sampras. Some questions on the properties of incomplete PCMs have been raised and remain open for further investigation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 63,248 |
2009.05133 | Finite-Alphabet Wiener Filter Precoding for mmWave Massive MU-MIMO Systems | Power consumption of multi-user (MU) precoding is a major concern in all-digital massive MU multiple-input multiple-output (MIMO) base-stations with hundreds of antenna elements operating at millimeter-wave (mmWave) frequencies. We propose to replace part of the linear Wiener filter (WF) precoding matrix by a finite-alphabet WF precoding (FAWP) matrix, which enables the use of low-precision hardware that consumes low power and area. To minimize the performance loss of our approach, we present methods that efficiently compute FAWP matrices that best mimic the WF precoder. Our results show that FAWP matrices approach infinite-precision error-rate and error-vector magnitude performance with only 3-bit precoding weights, even when operating in realistic mmWave channels. Hence, FAWP is a promising approach to substantially reduce power consumption and silicon area in all-digital mmWave massive MU-MIMO systems. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 195,228 |
2305.07982 | Zero-shot Faithful Factual Error Correction | Faithfully correcting factual errors is critical for maintaining the integrity of textual knowledge bases and preventing hallucinations in sequence-to-sequence models. Drawing on humans' ability to identify and correct factual errors, we present a zero-shot framework that formulates questions about input claims, looks for correct answers in the given evidence, and assesses the faithfulness of each correction based on its consistency with the evidence. Our zero-shot framework outperforms fully-supervised approaches, as demonstrated by experiments on the FEVER and SciFact datasets, where our outputs are shown to be more faithful. More importantly, the decomposable nature of our framework inherently provides interpretability. Additionally, to reveal the most suitable metrics for evaluating factual error corrections, we analyze the correlation between commonly used metrics with human judgments in terms of three different dimensions regarding intelligibility and faithfulness. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 364,109 |
1903.07050 | DSPG: Decentralized Simultaneous Perturbations Gradient Descent Scheme | Distributed descent-based methods are an essential toolset to solving optimization problems in multi-agent system scenarios. Here the agents seek to optimize a global objective function through mutual cooperation. Oftentimes, cooperation is achieved over a wireless communication network that is prone to delays and errors. There are many scenarios wherein the objective function is either non-differentiable or merely observable. In this paper, we present a cross-entropy based distributed stochastic approximation algorithm (SA) that finds a minimum of the objective, using only samples. We call this algorithm Decentralized Simultaneous Perturbation Stochastic Gradient, with Constant Sensitivity Parameters (DSPG). This algorithm is a twofold improvement over the classic Simultaneous Perturbation Stochastic Approximations (SPSA) algorithm. Specifically, DSPG allows for (i) the use of old information from other agents and (ii) easy implementation through the use of simple hyper-parameter choices. We analyze the biases and variances that arise due to these two allowances. We show that the biases due to communication delays can be countered by a careful choice of algorithm hyper-parameters. The variance of the gradient estimator and its effect on the rate of convergence is studied. We present numerical results supporting our theory. Finally, we discuss an application to the stochastic consensus problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,524 |
2402.10050 | On-Demand Myoelectric Control Using Wake Gestures to Eliminate False Activations During Activities of Daily Living | While myoelectric control has recently become a focus of increased research as a possible flexible hands-free input modality, current control approaches are prone to inadvertent false activations in real-world conditions. In this work, a novel myoelectric control paradigm -- on-demand myoelectric control -- is proposed, designed, and evaluated, to reduce the number of unrelated muscle movements that are incorrectly interpreted as input gestures. By leveraging the concept of wake gestures, users were able to switch between a dedicated control mode and a sleep mode, effectively eliminating inadvertent activations during activities of daily living (ADLs). The feasibility of wake gestures was demonstrated in this work through two online ubiquitous EMG control tasks with varying difficulty levels: dismissing an alarm and controlling a robot. The proposed control scheme was able to appropriately ignore almost all non-targeted muscular inputs during ADLs (>99.9%) while maintaining sufficient sensitivity for reliable mode switching during intentional wake gesture elicitation. These results highlight the potential of wake gestures as a critical step towards enabling ubiquitous myoelectric control-based on-demand input for a wide range of applications. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 429,784 |
1804.09394 | Design-Oriented Transient Stability Analysis of Grid-Connected Converters with Power Synchronization Control | The power synchronization control (PSC) has been increasingly used with voltage-source converters (VSCs) connected to the weak ac grid. This paper presents an in-depth analysis on the transient stability of the PSC-VSC by means of the phase portrait. It is revealed that the PSC-VSC will maintain synchronization with the grid as long as there are equilibrium points after the transient disturbance. In contrast, during grid faults without any equilibrium points, the critical clearing angle (CCA) for the PSC-VSC is identified, which is found equal to the power angle at the unstable equilibrium point of the post-fault operation. This fixed CCA facilitates the design of power system protection. Moreover, it is also found that the PSC-VSC can still re-synchronize with the grid after around one cycle of oscillation, even if the fault-clearing angle is beyond the CCA. This feature reduces the risk of system collapse caused by the delayed fault clearance. These findings are corroborated by simulations and experimental tests. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 95,962 |
2402.05296 | Classifying spam emails using agglomerative hierarchical clustering and a topic-based approach | Spam emails are unsolicited, annoying and sometimes harmful messages which may contain malware, phishing or hoaxes. Unlike most studies that address the design of efficient anti-spam filters, we approach the spam email problem from a different and novel perspective. Focusing on the needs of cybersecurity units, we follow a topic-based approach for addressing the classification of spam email into multiple categories. We propose SPEMC-15K-E and SPEMC-15K-S, two novel datasets with approximately 15K emails each in English and Spanish, respectively, and we label them using agglomerative hierarchical clustering into 11 classes. We evaluate 16 pipelines, combining four text representation techniques -Term Frequency-Inverse Document Frequency (TF-IDF), Bag of Words, Word2Vec and BERT- and four classifiers: Support Vector Machine, Naïve Bayes, Random Forest and Logistic Regression. Experimental results show that the highest performance is achieved with TF-IDF and LR for the English dataset, with an F1 score of 0.953 and an accuracy of 94.6%, while for the Spanish dataset, TF-IDF with NB yields an F1 score of 0.945 and 98.5% accuracy. Regarding the processing time, TF-IDF with LR leads to the fastest classification, processing an English and Spanish spam email in and on average, respectively. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 427,807 |
2107.04372 | A Robust Deep Ensemble Classifier for Figurative Language Detection | Recognition and classification of Figurative Language (FL) is an open problem of Sentiment Analysis in the broader field of Natural Language Processing (NLP) due to the contradictory meaning contained in phrases with metaphorical content. The problem itself contains three interrelated FL recognition tasks: sarcasm, irony and metaphor which, in the present paper, are dealt with using advanced Deep Learning (DL) techniques. First, we introduce a data preprocessing framework towards efficient data representation formats so as to optimize the respective inputs to the DL models. In addition, special features are extracted in order to characterize the syntactic, expressive, emotional and temper content reflected in the respective social media text references. These features aim to capture aspects of the social network user's writing method. Finally, features are fed to a robust, Deep Ensemble Soft Classifier (DESC) which is based on the combination of different DL techniques. Using three different benchmark datasets (one of them containing various FL forms) we conclude that the DESC model achieves a very good performance, worthy of comparison with relevant methodologies and state-of-the-art technologies in the challenging field of FL recognition. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 245,440 |
2212.08228 | SADM: Sequence-Aware Diffusion Model for Longitudinal Medical Image Generation | Human organs constantly undergo anatomical changes due to a complex mix of short-term (e.g., heartbeat) and long-term (e.g., aging) factors. Evidently, prior knowledge of these factors will be beneficial when modeling their future state, i.e., via image generation. However, most of the medical image generation tasks only rely on the input from a single image, thus ignoring the sequential dependency even when longitudinal data is available. Sequence-aware deep generative models, where model input is a sequence of ordered and timestamped images, are still underexplored in the medical imaging domain that is featured by several unique challenges: 1) Sequences with various lengths; 2) Missing data or frames; and 3) High dimensionality. To this end, we propose a sequence-aware diffusion model (SADM) for the generation of longitudinal medical images. Recently, diffusion models have shown promising results in high-fidelity image generation. Our method extends this new technique by introducing a sequence-aware transformer as the conditional module in a diffusion model. The novel design enables learning longitudinal dependency even with missing data during training and allows autoregressive generation of a sequence of images during inference. Our extensive experiments on 3D longitudinal medical images demonstrate the effectiveness of SADM compared with baselines and alternative methods. The code is available at https://github.com/ubc-tea/SADM-Longitudinal-Medical-Image-Generation. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 336,670 |
cs/9810017 | General Theory of Image Normalization | We give a systematic, abstract formulation of the image normalization method as applied to a general group of image transformations, and then illustrate the abstract analysis by applying it to the hierarchy of viewing transformations of a planar object. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 540,426 |
2411.03064 | Exploiting the Segment Anything Model (SAM) for Lung Segmentation in Chest X-ray Images | Segment Anything Model (SAM), a new AI model from Meta AI released in April 2023, is an ambitious tool designed to identify and separate individual objects within a given image through semantic interpretation. The advanced capabilities of SAM are the result of its training with millions of images and masks, and a few days after its release, several researchers began testing the model on medical images to evaluate its performance in this domain. With this perspective in focus -- i.e., optimizing work in the healthcare field -- this work proposes the use of this new technology to evaluate and study chest X-ray images. The approach adopted for this work, with the aim of improving the model's performance for lung segmentation, involved a transfer learning process, specifically the fine-tuning technique. After applying this adjustment, a substantial improvement was observed in the evaluation metrics used to assess SAM's performance compared to the masks provided by the datasets. The results obtained by the model after the adjustments were satisfactory and similar to cutting-edge neural networks, such as U-Net. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 505,782 |
2206.08084 | An Improved Normed-Deformable Convolution for Crowd Counting | In recent years, crowd counting has become an important issue in computer vision. In most methods, the density maps are generated by convolving with a Gaussian kernel from the ground-truth dot maps which are marked around the center of human heads. Due to the fixed geometric structures in CNNs and indistinct head-scale information, the head features are obtained incompletely. Deformable convolution is proposed to exploit the scale-adaptive capabilities for CNN features in the heads. By learning the coordinate offsets of the sampling points, it is tractable to improve the ability to adjust the receptive field. However, the heads are not uniformly covered by the sampling points in the deformable convolution, resulting in loss of head information. To handle the non-uniformed sampling, an improved Normed-Deformable Convolution (i.e., NDConv) implemented by Normed-Deformable loss (i.e., NDloss) is proposed in this paper. The offsets of the sampling points which are constrained by NDloss tend to be more even. Then, the features in the heads are obtained more completely, leading to better performance. Especially, the proposed NDConv is a light-weight module which shares similar computation burden with Deformable Convolution. In the extensive experiments, our method outperforms state-of-the-art methods on the ShanghaiTech A, ShanghaiTech B, UCF_QNRF, and UCF_CC_50 datasets, achieving 61.4, 7.8, 91.2, and 167.2 MAE, respectively. The code is available at https://github.com/bingshuangzhuzi/NDConv | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 303,000 |
2212.02758 | Tackling Data Heterogeneity in Federated Learning with Class Prototypes | Data heterogeneity across clients in federated learning (FL) settings is a widely acknowledged challenge. In response, personalized federated learning (PFL) emerged as a framework to curate local models for clients' tasks. In PFL, a common strategy is to develop local and global models jointly - the global model (for generalization) informs the local models, and the local models (for personalization) are aggregated to update the global model. A key observation is that if we can improve the generalization ability of local models, then we can improve the generalization of global models, which in turn builds better personalized models. In this work, we consider class imbalance, an overlooked type of data heterogeneity, in the classification setting. We propose FedNH, a novel method that improves the local models' performance for both personalization and generalization by combining the uniformity and semantics of class prototypes. FedNH initially distributes class prototypes uniformly in the latent space and smoothly infuses the class semantics into class prototypes. We show that imposing uniformity helps to combat prototype collapse while infusing class semantics improves local models. Extensive experiments were conducted on popular classification datasets under the cross-device setting. Our results demonstrate the effectiveness and stability of our method over recent works. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,876 |
2403.03573 | Time-optimal Point-to-point Motion Planning: A Two-stage Approach | This paper proposes a two-stage approach to formulate the time-optimal point-to-point motion planning problem, involving a first stage with a fixed time grid and a second stage with a variable time grid. The proposed approach brings benefits through its straightforward optimal control problem formulation with a fixed and low number of control steps for manageable computational complexity and the avoidance of interpolation errors associated with time scaling, especially when aiming to reach a distant goal. Additionally, an asynchronous nonlinear model predictive control (NMPC) update scheme is integrated with this two-stage approach to address delayed and fluctuating computation times, facilitating online replanning. The effectiveness of the proposed two-stage approach and NMPC implementation is demonstrated through numerical examples centered on autonomous navigation with collision avoidance. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 435,251 |
2002.08536 | Debiased Off-Policy Evaluation for Recommendation Systems | Efficient methods to evaluate new algorithms are critical for improving interactive bandit and reinforcement learning systems such as recommendation systems. A/B tests are reliable, but are time- and money-consuming, and entail a risk of failure. In this paper, we develop an alternative method, which predicts the performance of algorithms given historical data that may have been generated by a different algorithm. Our estimator has the property that its prediction converges in probability to the true performance of a counterfactual algorithm at a rate of $\sqrt{N}$, as the sample size $N$ increases. We also show a correct way to estimate the variance of our prediction, thus allowing the analyst to quantify the uncertainty in the prediction. These properties hold even when the analyst does not know which among a large number of potentially important state variables are actually important. We validate our method by a simulation experiment about reinforcement learning. We finally apply it to improve advertisement design by a major advertisement company. We find that our method produces smaller mean squared errors than state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 164,783 |
2304.14299 | A Probabilistic Attention Model with Occlusion-aware Texture Regression for 3D Hand Reconstruction from a Single RGB Image | Recently, deep learning based approaches have shown promising results in 3D hand reconstruction from a single RGB image. These approaches can be roughly divided into model-based approaches, which are heavily dependent on the model's parameter space, and model-free approaches, which require large numbers of 3D ground truths to reduce depth ambiguity and struggle in weakly-supervised scenarios. To overcome these issues, we propose a novel probabilistic model to achieve the robustness of model-based approaches and reduced dependence on the model's parameter space of model-free approaches. The proposed probabilistic model incorporates a model-based network as a prior-net to estimate the prior probability distribution of joints and vertices. An Attention-based Mesh Vertices Uncertainty Regression (AMVUR) model is proposed to capture dependencies among vertices and the correlation between joints and mesh vertices to improve their feature representation. We further propose a learning based occlusion-aware Hand Texture Regression model to achieve high-fidelity texture reconstruction. We demonstrate the flexibility of the proposed probabilistic model to be trained in both supervised and weakly-supervised scenarios. The experimental results demonstrate our probabilistic model's state-of-the-art accuracy in 3D hand and texture reconstruction from a single image in both training schemes, including in the presence of severe occlusions. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 360,889 |
1901.10401 | Spatial and Temporal Analysis of Direct Communications from Static Devices to Mobile Vehicles | This paper proposes a framework to analyze an emerging wireless architecture where vehicles collect data from devices. Using stochastic geometry, the devices are modeled by a planar Poisson point process. Independently, roads and vehicles are modeled by a Poisson line process and a Cox point process, respectively. For any given time, a vehicle is assumed to communicate with a roadside device in a disk of radius $\nu$ centered at the vehicle, which is referred to as the coverage disk. We study the proposed network by analyzing its short-term and long-term behaviors based on its space and time performance metrics, respectively. As short-term analysis, we explicitly derive the signal-to-interference ratio distribution of the typical vehicle and the area spectral efficiency of the proposed network. As long-term analysis, we derive the area fraction of the coverage disks and then compute the latency of the network by deriving the distribution of the minimum waiting time of a typical device to be covered by a disk. Leveraging these properties, we analyze various trade-off relationships and optimize the network utility. We further investigate these trade-offs using comparison with existing cellular networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 120,006 |
2501.00911 | Aligning LLMs with Domain Invariant Reward Models | Aligning large language models (LLMs) to human preferences is challenging in domains where preference data is unavailable. We address the problem of learning reward models for such target domains by leveraging feedback collected from simpler source domains, where human preferences are easier to obtain. Our key insight is that, while domains may differ significantly, human preferences convey *domain-agnostic* concepts that can be effectively captured by a reward model. We propose a framework that trains domain-invariant reward models by optimizing a dual loss: a domain loss that minimizes the divergence between source and target distribution, and a source loss that optimizes preferences on the source domain. We show that our framework is a general approach that we evaluate and analyze across 4 distinct settings: (1) Cross-lingual transfer (accuracy: $0.621 \rightarrow 0.661$), (2) Clean-to-noisy (accuracy: $0.671 \rightarrow 0.703$), (3) Few-shot-to-full transfer (accuracy: $0.845 \rightarrow 0.920$), and (4) Simple-to-complex tasks transfer (correlation: $0.508 \rightarrow 0.556$). Our code, models and data are available at https://github.com/portal-cornell/dial. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 521,852 |
1304.0102 | Entanglement Zoo II: Examples in Physics and Cognition | We have recently presented a general scheme enabling quantum modeling of different types of situations that violate Bell's inequalities. In this paper, we specify this scheme for a combination of two concepts. We work out a quantum Hilbert space model where 'entangled measurements' occur in addition to the expected 'entanglement between the component concepts', or 'state entanglement'. We extend this result to a macroscopic physical entity, the 'connected vessels of water', which maximally violates Bell's inequalities. We enlighten the structural and conceptual analogies between the cognitive and physical situations which are both examples of a nonlocal non-marginal box modeling in our classification. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,368 |
2501.02476 | Noise-Tolerant Hybrid Prototypical Learning with Noisy Web Data | We focus on the challenging problem of learning an unbiased classifier from a large number of potentially relevant but noisily labeled web images given only a few clean labeled images. This problem is particularly practical because it reduces the expensive annotation costs by utilizing freely accessible web images with noisy labels. Typically, prototypes are representative images or features used to classify or identify other images. However, in the few clean and many noisy scenarios, the class prototype can be severely biased due to the presence of irrelevant noisy images. The resulting prototypes are less compact and discriminative, as previous methods do not take into account the diverse range of images in the noisy web image collections. On the other hand, the relation modeling between noisy and clean images is not learned for the class prototype generation in an end-to-end manner, which results in a suboptimal class prototype. In this article, we introduce a similarity maximization loss named SimNoiPro. Our SimNoiPro first generates noise-tolerant hybrid prototypes composed of clean and noise-tolerant prototypes and then pulls them closer to each other. Our approach considers the diversity of noisy images by explicit division and overcomes the optimization discrepancy issue. This enables better relation modeling between clean and noisy images and helps extract judicious information from the noisy image set. The evaluation results on two extended few-shot classification benchmarks confirm that our SimNoiPro outperforms prior methods in measuring image relations and cleaning noisy data. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 522,504 |
2411.02447 | qGDP: Quantum Legalization and Detailed Placement for Superconducting Quantum Computers | Noisy Intermediate-Scale Quantum (NISQ) computers are currently limited by their qubit numbers, which hampers progress towards fault-tolerant quantum computing. A major challenge in scaling these systems is crosstalk, which arises from unwanted interactions among neighboring components such as qubits and resonators. An innovative placement strategy tailored for superconducting quantum computers can systematically address crosstalk within the constraints of limited substrate areas. Legalization is a crucial stage in the placement process, refining post-global-placement configurations to satisfy design constraints and enhance layout quality. However, existing legalizers do not support quantum placements. We aim to address this gap with qGDP, developed to meticulously legalize quantum components by adhering to quantum spatial constraints and reducing resonator crossings to alleviate various crosstalk effects. Our results indicate that qGDP effectively legalizes and fine-tunes the layout, addressing the quantum-specific spatial constraints inherent in various device topologies. Evaluated on diverse NISQ benchmarks, qGDP consistently outperforms state-of-the-art legalization engines, delivering substantial improvements in fidelity and reducing spatial violations, with average gains of 34.4x and 16.9x, respectively. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 505,504 |
2004.08123 | Batch Clustering for Multilingual News Streaming | Nowadays, digital news articles are widely available, published by various editors and often written in different languages. This large volume of diverse and unorganized information makes human reading very difficult or almost impossible. This leads to a need for algorithms able to arrange a high volume of multilingual news into stories. To this purpose, we extend previous works on Topic Detection and Tracking, and propose a new system inspired by newsLens. We process articles per batch, looking for monolingual local topics which are then linked across time and languages. Here, we introduce a novel "replaying" strategy to link monolingual local topics into stories. Besides, we propose new fine-tuned multilingual embeddings using SBERT to create crosslingual stories. Our system gives monolingual state-of-the-art results on a dataset of Spanish and German news and crosslingual state-of-the-art results on English, Spanish and German news. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 172,977 |
2406.01297 | When Can LLMs Actually Correct Their Own Mistakes? A Critical Survey of Self-Correction of LLMs | Self-correction is an approach to improving responses from large language models (LLMs) by refining the responses using LLMs during inference. Prior work has proposed various self-correction frameworks using different sources of feedback, including self-evaluation and external feedback. However, there is still no consensus on the question of when LLMs can correct their own mistakes, as recent studies also report negative results. In this work, we critically survey broad papers and discuss the conditions required for successful self-correction. We first find that prior studies often do not define their research questions in detail and involve impractical frameworks or unfair evaluations that over-evaluate self-correction. To tackle these issues, we categorize research questions in self-correction research and provide a checklist for designing appropriate experiments. Our critical survey based on the newly categorized research questions shows that (1) no prior work demonstrates successful self-correction with feedback from prompted LLMs, except for studies in tasks that are exceptionally suited for self-correction, (2) self-correction works well in tasks that can use reliable external feedback, and (3) large-scale fine-tuning enables self-correction. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 460,254 |
2309.09102 | CppFlow: Generative Inverse Kinematics for Efficient and Robust Cartesian Path Planning | In this work we present CppFlow - a novel and performant planner for the Cartesian Path Planning problem, which finds valid trajectories up to 129x faster than current methods, while also succeeding on more difficult problems where others fail. At the core of the proposed algorithm is the use of a learned, generative Inverse Kinematics solver, which is able to efficiently produce promising entire candidate solution trajectories on the GPU. Precise, valid solutions are then found through classical approaches such as differentiable programming, global search, and optimization. In combining approaches from these two paradigms we get the best of both worlds - efficient approximate solutions from generative AI which are made exact using the guarantees of traditional planning and optimization. We evaluate our system against other state of the art methods on a set of established baselines as well as new ones introduced in this work and find that our method significantly outperforms others in terms of the time to find a valid solution and planning success rate, and performs comparably in terms of trajectory length over time. The work is made open source and available for use upon acceptance. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 392,470 |
2011.14878 | Explaining by Removing: A Unified Framework for Model Explanation | Researchers have proposed a wide variety of model explanation approaches, but it remains unclear how most methods are related or when one method is preferable to another. We describe a new unified class of methods, removal-based explanations, that are based on the principle of simulating feature removal to quantify each feature's influence. These methods vary in several respects, so we develop a framework that characterizes each method along three dimensions: 1) how the method removes features, 2) what model behavior the method explains, and 3) how the method summarizes each feature's influence. Our framework unifies 26 existing methods, including several of the most widely used approaches: SHAP, LIME, Meaningful Perturbations, and permutation tests. This newly understood class of explanation methods has rich connections that we examine using tools that have been largely overlooked by the explainability literature. To anchor removal-based explanations in cognitive psychology, we show that feature removal is a simple application of subtractive counterfactual reasoning. Ideas from cooperative game theory shed light on the relationships and trade-offs among different methods, and we derive conditions under which all removal-based explanations have information-theoretic interpretations. Through this analysis, we develop a unified framework that helps practitioners better understand model explanation tools, and that offers a strong theoretical foundation upon which future explainability research can build. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 208,917 |
1910.08711 | Correlation Maximized Structural Similarity Loss for Semantic Segmentation | Most semantic segmentation models treat semantic segmentation as a pixel-wise classification task and use a pixel-wise classification error as their optimization criterion. However, the pixel-wise error ignores the strong dependencies among the pixels in an image, which limits the performance of the model. Several ways to incorporate the structure information of the objects have been investigated, e.g., conditional random fields (CRF), image structure priors based methods, and generative adversarial network (GAN). Nevertheless, these methods usually require extra model branches or additional memories, and some of them show limited improvements. In contrast, we propose a simple yet effective structural similarity loss (SSL) to encode the structure information of the objects, which only requires a few additional computational resources in the training phase. Inspired by the widely-used structural similarity (SSIM) index in image quality assessment, we use the linear correlation between two images to quantify their structural similarity. And the goal of the proposed SSL is to pay more attention to the positions, whose associated predictions lead to a low degree of linear correlation between two corresponding regions in the ground truth map and the predicted map. Thus the model can achieve a strong structural similarity between the two maps through minimizing the SSL over the whole map. The experimental results demonstrate that our method can achieve substantial and consistent improvements in performance on the PASCAL VOC 2012 and Cityscapes datasets. The code will be released soon. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 149,948 |
2001.00459 | Joint Robust Voicing Detection and Pitch Estimation Based on Residual Harmonics | This paper focuses on the problem of pitch tracking in noisy conditions. A method using harmonic information in the residual signal is presented. The proposed criterion is used both for pitch estimation, as well as for determining the voicing segments of speech. In the experiments, the method is compared to six state-of-the-art pitch trackers on the Keele and CSTR databases. The proposed technique is shown to be particularly robust to additive noise, leading to a significant improvement in adverse conditions. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 159,218 |
2207.03198 | Dynamic Complementarity Conditions and Whole-Body Trajectory Optimization for Humanoid Robot Locomotion | The paper presents a planner to generate walking trajectories by using the centroidal dynamics and the full kinematics of a humanoid robot. The interaction between the robot and the walking surface is modeled explicitly via new conditions, the *Dynamical Complementarity Constraints*. The approach does not require a predefined contact sequence and generates the footsteps automatically. We characterize the robot control objective via a set of tasks, and we address it by solving an optimal control problem. We show that it is possible to achieve walking motions automatically by specifying a minimal set of references, such as a constant desired center of mass velocity and a reference point on the ground. Furthermore, we analyze how the contact modelling choices affect the computational time. We validate the approach by generating and testing walking trajectories for the humanoid robot iCub. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 306,761 |
2305.04536 | LMPT: Prompt Tuning with Class-Specific Embedding Loss for Long-tailed Multi-Label Visual Recognition | Long-tailed multi-label visual recognition (LTML) task is a highly challenging task due to the label co-occurrence and imbalanced data distribution. In this work, we propose a unified framework for LTML, namely prompt tuning with class-specific embedding loss (LMPT), capturing the semantic feature interactions between categories by combining text and image modality data and improving the performance synchronously on both head and tail classes. Specifically, LMPT introduces the embedding loss function with class-aware soft margin and re-weighting to learn class-specific contexts with the benefit of textual descriptions (captions), which could help establish semantic relationships between classes, especially between the head and tail classes. Furthermore, taking into account the class imbalance, the distribution-balanced loss is adopted as the classification loss function to further improve the performance on the tail classes without compromising head classes. Extensive experiments are conducted on VOC-LT and COCO-LT datasets, which demonstrates that our method significantly surpasses the previous state-of-the-art methods and zero-shot CLIP in LTML. Our codes are fully public at https://github.com/richard-peng-xia/LMPT. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 362,810 |
2203.02080 | An Efficient Subpopulation-based Membership Inference Attack | Membership inference attacks allow a malicious entity to predict whether a sample is used during training of a victim model or not. State-of-the-art membership inference attacks have been shown to achieve good accuracy, which poses a great privacy threat. However, the majority of SOTA attacks require training dozens to hundreds of shadow models to accurately infer membership. This huge computation cost raises questions about the practicality of these attacks on deep models. In this paper, we introduce a fundamentally different MI attack approach which obviates the need to train hundreds of shadow models. Simply put, we compare the victim model output on the target sample versus the samples from the same subpopulation (i.e., semantically similar samples), instead of comparing it with the output of hundreds of shadow models. The intuition is that the model response should not be significantly different between the target sample and its subpopulation if it was not a training sample. In cases where subpopulation samples are not available to the attacker, we show that training only a single generative model can fulfill the requirement. Hence, we achieve the state-of-the-art membership inference accuracy while significantly reducing the training computation cost. | false | false | false | false | true | false | true | false | false | false | false | true | true | false | false | false | false | false | 283,620 |
2203.07478 | Synergistic Scheduling of Learning and Allocation of Tasks in Human-Robot Teams | We consider the problem of completing a set of $n$ tasks with a human-robot team using minimum effort. In many domains, teaching a robot to be fully autonomous can be counterproductive if there are finitely many tasks to be done. Rather, the optimal strategy is to weigh the cost of teaching a robot and its benefit -- how many new tasks it allows the robot to solve autonomously. We formulate this as a planning problem where the goal is to decide what tasks the robot should do autonomously (act), what tasks should be delegated to a human (delegate) and what tasks the robot should be taught (learn) so as to complete all the given tasks with minimum effort. This planning problem results in a search tree that grows exponentially with $n$ -- making standard graph search algorithms intractable. We address this by converting the problem into a mixed integer program that can be solved efficiently using off-the-shelf solvers with bounds on solution quality. To predict the benefit of learning, we propose a precondition prediction classifier. Given two tasks, this classifier predicts whether a skill trained on one will transfer to the other. Finally, we evaluate our approach on peg insertion and Lego stacking tasks, both in simulation and real-world, showing substantial savings in human effort. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 285,436 |
2309.09739 | Improving Neural Indoor Surface Reconstruction with Mask-Guided Adaptive Consistency Constraints | 3D scene reconstruction from 2D images has been a long-standing task. Instead of estimating per-frame depth maps and fusing them in 3D, recent research leverages the neural implicit surface as a unified representation for 3D reconstruction. Equipped with data-driven pre-trained geometric cues, these methods have demonstrated promising performance. However, inaccurate prior estimation, which is usually inevitable, can lead to suboptimal reconstruction quality, particularly in some geometrically complex regions. In this paper, we propose a two-stage training process, decouple view-dependent and view-independent colors, and leverage two novel consistency constraints to enhance detail reconstruction performance without requiring extra priors. Additionally, we introduce an essential mask scheme to adaptively influence the selection of supervision constraints, thereby improving performance in a self-supervised paradigm. Experiments on synthetic and real-world datasets show the capability of reducing the interference from prior estimation errors and achieving high-quality scene reconstruction with rich geometric details. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 392,728 |
1910.08612 | Trajectory Design for Energy Minimization in UAV-enabled Wireless Communications with Latency Constraints | This paper studies energy-efficient unmanned aerial vehicle (UAV)-enabled wireless communications, where the UAV acts as a flying base station (BS) to serve the ground users (GUs) within some predetermined latency constraints, e.g., requested timeout (RT). Our goal is to design the UAV trajectory to minimize the total energy consumption while satisfying the RT requirement and energy budget, which is accomplished via jointly optimizing the trajectory and UAV's velocities along subsequent hops. The corresponding optimization problem is difficult to solve due to its non-convexity and combinatorial nature. To overcome this difficulty, we solve the original problem via two consecutive steps. Firstly, we propose two algorithms, namely heuristic search, and dynamic programming (DP) to obtain a feasible set of trajectories without violating the GU's RT requirements based on the traveling salesman problem with time window (TSPTW). Then, they are compared with exhaustive search and traveling salesman problem (TSP) used as reference methods. While the exhaustive algorithm achieves the best performance at a high computation cost, the heuristic algorithm exhibits poorer performance with low complexity. As a result, the DP is proposed as a practical trade-off between the exhaustive and heuristic algorithms. Specifically, the DP algorithm results in near-optimal performance at a much lower complexity. Secondly, for given feasible trajectories, we propose an energy minimization problem via a joint optimization of the UAV's velocities along subsequent hops. Finally, numerical results are presented to demonstrate the effectiveness of our proposed algorithms. ... | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 149,912 |
1903.11040 | Adversarially Learned Abnormal Trajectory Classifier | We address the problem of abnormal event detection from trajectory data. In this paper, a new adversarial approach is proposed for building a deep neural network binary classifier, trained in an unsupervised fashion, that can distinguish normal from abnormal trajectory-based events without the need to set a manual detection threshold. Inspired by the generative adversarial network (GAN) framework, our GAN version is a discriminative one in which the discriminator is trained to distinguish normal and abnormal trajectory reconstruction errors given by a deep autoencoder. With urban traffic videos and their associated trajectories, our proposed method gives the best accuracy for abnormal trajectory detection. In addition, our model can easily be generalized for abnormal trajectory-based event detection and can still yield the best behavioural detection results as demonstrated on the CAVIAR dataset. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 125,420 |
1811.06442 | Energy Efficient Precoder in Multi-User MIMO Systems with Imperfect Channel State Information | This article is on the energy efficient precoder design in multi-user multiple-input-multiple-output (MU-MIMO) systems which is also robust with respect to the imperfect channel state information (CSI) at the transmitters. In other words, we design the precoder matrix associated with each transmitter to maximize the general energy efficiency of the network. We investigate the problem in two conventional cases. The first case considers the statistical characterization for the channel estimation error that leads to a quadratically constrained quadratic program (QCQP) with a semi-closed-form solution. Then, we turn our attention to the case which considers only the uncertainty region for the channel estimation error; this case eventually results in a semi-definite program (SDP). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 113,527 |
2405.20044 | A Point-Neighborhood Learning Framework for Nasal Endoscope Image Segmentation | Lesion segmentation on nasal endoscopic images is challenging due to its complex lesion features. Fully-supervised deep learning methods achieve promising performance with pixel-level annotations but impose a significant annotation burden on experts. Although weakly supervised or semi-supervised methods can reduce the labelling burden, their performance is still limited. Some weakly semi-supervised methods employ a novel annotation strategy that labels weak single-point annotations for the entire training set while providing pixel-level annotations for a small subset of the data. However, the relevant weakly semi-supervised methods only mine the limited information of the point itself, while ignoring its label property and surrounding reliable information. This paper proposes a simple yet efficient weakly semi-supervised method called the Point-Neighborhood Learning (PNL) framework. PNL incorporates the surrounding area of the point, referred to as the point-neighborhood, into the learning process. In PNL, we propose a point-neighborhood supervision loss and a pseudo-label scoring mechanism to explicitly guide the model's training. Meanwhile, we propose a more reliable data augmentation scheme. The proposed method significantly improves performance without increasing the parameters of the segmentation neural network. Extensive experiments on the NPC-LES dataset demonstrate that PNL outperforms existing methods by a significant margin. Additional validation on colonoscopic polyp segmentation datasets confirms the generalizability of the proposed PNL. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 459,170 |
1812.06625 | Semi-supervised mp-MRI Data Synthesis with StitchLayer and Auxiliary
Distance Maximization | In this paper, we address the problem of synthesizing multi-parameter magnetic resonance imaging (mp-MRI) data, i.e. Apparent Diffusion Coefficient (ADC) and T2-weighted (T2w) images, containing clinically significant (CS) prostate cancer (PCa), via semi-supervised adversarial learning. Specifically, our synthesizer generates mp-MRI data in a sequential manner: first generating ADC maps from 128-d latent vectors, followed by translating them to the T2w images. The synthesizer is trained in a semi-supervised manner. In the supervised training process, a limited amount of paired ADC-T2w images and the corresponding ADC encodings are provided and the synthesizer learns the paired relationship by explicitly minimizing the reconstruction losses between synthetic and real images. To avoid overfitting the limited ADC encodings, an unlimited amount of random latent vectors and unpaired ADC-T2w images are utilized in the unsupervised training process for learning the marginal image distributions of real images. To improve the robustness of synthesis, we decompose the difficult task of generating full-size images into several simpler tasks which generate sub-images only. A StitchLayer is then employed to fuse sub-images together in an interlaced manner into a full-size image. To enforce the synthetic images to indeed contain distinguishable CS PCa lesions, we propose to also maximize an auxiliary distance of Jensen-Shannon divergence (JSD) between CS and nonCS images. Experimental results show that our method can effectively synthesize a large variety of mp-MRI images which contain meaningful CS PCa lesions, display good visual quality and have the correct paired relationship. Compared to the state-of-the-art synthesis methods, our method achieves a significant improvement in terms of both visual and quantitative evaluation metrics. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 116,658 |
2103.00123 | GRAD-MATCH: Gradient Matching based Data Subset Selection for Efficient
Deep Model Training | The great success of modern machine learning models on large datasets is contingent on extensive computational resources with high financial and environmental costs. One way to address this is by extracting subsets that generalize on par with the full data. In this work, we propose a general framework, GRAD-MATCH, which finds subsets that closely match the gradient of the training or validation set. We find such subsets effectively using an orthogonal matching pursuit algorithm. We provide rigorous theoretical and convergence guarantees for the proposed algorithm and, through extensive experiments on real-world datasets, demonstrate the effectiveness of the proposed framework. We show that GRAD-MATCH significantly and consistently outperforms several recent data-selection algorithms and achieves the best accuracy-efficiency trade-off. GRAD-MATCH is available as a part of the CORDS toolkit: \url{https://github.com/decile-team/cords}. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 222,154 |
2411.05740 | Bias correction and instrumental variables for direct data-driven
model-reference control | Managing noisy data is a central challenge in direct data-driven control design. We propose an approach for synthesizing model-reference controllers for linear time-invariant (LTI) systems using noisy state-input data, employing novel noise mitigation techniques. Specifically, we demonstrate that using data-based covariance parameterization of the controller enables bias-correction and instrumental variable techniques within the data-driven optimization, thus reducing measurement noise effects as data volume increases. The number of decision variables remains independent of dataset size, making this method scalable to large datasets. The approach's effectiveness is demonstrated with a numerical example. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 506,769 |
2406.12311 | Mixture of Scales: Memory-Efficient Token-Adaptive Binarization for
Large Language Models | Binarization, which converts weight parameters to binary values, has emerged as an effective strategy to reduce the size of large language models (LLMs). However, typical binarization techniques significantly diminish the linguistic effectiveness of LLMs. To address this issue, we introduce a novel binarization technique called Mixture of Scales (BinaryMoS). Unlike conventional methods, BinaryMoS employs multiple scaling experts for binary weights, dynamically merging these experts for each token to adaptively generate scaling factors. This token-adaptive approach boosts the representational power of binarized LLMs by enabling contextual adjustments to the values of binary weights. Moreover, because this adaptive process only involves the scaling factors rather than the entire weight matrix, BinaryMoS maintains compression efficiency similar to traditional static binarization methods. Our experimental results reveal that BinaryMoS surpasses conventional binarization techniques in various natural language processing tasks and even outperforms 2-bit quantization methods, all while maintaining a model size similar to static binarization techniques. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 465,337 |
2410.07209 | Behavior Cloning for Mini Autonomous Car Path Following | This article presents the implementation and evaluation of a behavior cloning approach for route following with autonomous cars. Behavior cloning is a machine-learning technique in which a neural network is trained to mimic the driving behavior of a human operator. Using camera data that captures the environment and the vehicle's movement, the neural network learns to predict the control actions necessary to follow a predetermined route. Mini autonomous cars, which provide a convenient benchmark, are employed as the testing platform. This approach simplifies the control system by directly mapping the driver's movements to the control outputs, avoiding the need for complex algorithms. We evaluated our vehicle on a 13-meter-long route. The results show that behavior cloning allows for smooth and precise route following, suggesting that the approach can scale to full-sized vehicles and enable an effective transition from small-scale experiments to real-world implementations. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 496,540 |
1406.5679 | Deep Fragment Embeddings for Bidirectional Image Sentence Mapping | We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 34,052 |
2003.00145 | Generalization of trace codes to places of higher degree | In this note, we give a construction of codes on an algebraic function field $F/\mathbb{F}_{q}$ using places of $F$ (not necessarily of degree one) and trace functions from various extensions of $\mathbb{F}_{q}$. This generalizes the trace codes of geometric Goppa codes to places of higher degree. We compute a bound on the dimension of this code. Furthermore, we give a condition under which we obtain the exact dimension of the code. We also determine a bound on the minimum distance of this code in terms of $B_{r}(F)$ (the number of places of degree $r$ in $F$), $1 \leq r < \infty$. A few quasi-cyclic codes over $\mathbb{F}_{p}$ are also obtained as examples of these codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 166,203 |
2207.04295 | Explainable AI (XAI) in Biomedical Signal and Image Processing: Promises
and Challenges | Artificial intelligence has become pervasive across disciplines and fields, and biomedical image and signal processing is no exception. Growing and widespread interest in the topic has triggered vast research activity, reflected in an exponentially growing body of work. Through the study of massive and diverse biomedical data, machine and deep learning models have revolutionized various tasks such as modeling, segmentation, registration, classification and synthesis, outperforming traditional techniques. However, the difficulty in translating the results into biologically/clinically interpretable information is preventing their full exploitation in the field. Explainable AI (XAI) attempts to fill this translational gap by providing means to make the models interpretable and providing explanations. Different solutions have been proposed so far and are gaining increasing interest from the community. This paper aims at providing an overview of XAI in biomedical data processing and points to an upcoming Special Issue on Deep Learning in Biomedical Image and Signal Processing of the IEEE Signal Processing Magazine that is going to appear in March 2022. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 307,148 |
2303.03090 | Parallel Optimization with Hard Safety Constraints for Cooperative
Planning of Connected Autonomous Vehicles | The development of connected autonomous vehicles (CAVs) facilitates the enhancement of traffic efficiency in complicated scenarios. In unsignalized roundabout scenarios, difficulties remain in developing an effective and efficient coordination strategy for CAVs. In this paper, we formulate the cooperative autonomous driving problem of CAVs in the roundabout scenario as a constrained optimal control problem, and propose a computationally efficient parallel optimization framework to generate strategies for CAVs such that travel efficiency is improved with hard safety guarantees. All constraints involved in the roundabout scenario are addressed appropriately with convex approximation, such that the convexity property of the reformulated optimization problem is exhibited. Then, a parallel optimization algorithm is presented to solve the reformulated optimization problem, in which an embedded iterative nearest-neighbor search strategy determines the optimal passing sequence in the roundabout scenario. Notably, travel efficiency in the roundabout scenario is enhanced and the computational burden is considerably alleviated by these innovations. We also examine the proposed method in the CARLA simulator and perform thorough comparisons with a rule-based baseline and the commonly used IPOPT optimization solver to demonstrate the effectiveness and efficiency of the proposed approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | 349,598 |
2111.12862 | Coded Illumination for Improved Lensless Imaging | Mask-based lensless cameras can be flat, thin, and light-weight, which makes them suitable for novel designs of computational imaging systems with large surface areas and arbitrary shapes. Despite recent progress in lensless cameras, the quality of images recovered from the lensless cameras is often poor due to the ill-conditioning of the underlying measurement system. In this paper, we propose to use coded illumination to improve the quality of images reconstructed with lensless cameras. In our imaging model, the scene/object is illuminated by multiple coded illumination patterns as the lensless camera records sensor measurements. We designed and tested a number of illumination patterns and observed that shifting dots (and related orthogonal) patterns provide the best overall performance. We propose a fast and low-complexity recovery algorithm that exploits the separability and block-diagonal structure in our system. We present simulation results and hardware experiment results to demonstrate that our proposed method can significantly improve the reconstruction quality. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 268,099 |
2111.04107 | Structure-aware generation of drug-like molecules | Structure-based drug design involves finding ligand molecules that exhibit structural and chemical complementarity to protein pockets. Deep generative methods have shown promise in proposing novel molecules from scratch (de-novo design), avoiding exhaustive virtual screening of chemical space. Most generative de-novo models fail to incorporate detailed ligand-protein interactions and 3D pocket structures. We propose a novel supervised model that generates molecular graphs jointly with 3D pose in a discretised molecular space. Molecules are built atom-by-atom inside pockets, guided by structural information from crystallographic data. We evaluate our model using a docking benchmark and find that guided generation improves predicted binding affinities by 8% and drug-likeness scores by 10% over the baseline. Furthermore, our model proposes molecules with binding scores exceeding some known ligands, which could be useful in future wet-lab studies. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 265,389 |
2005.11257 | Epidemiologically and Socio-economically Optimal Policies via Bayesian
Optimization | Mass public quarantining, colloquially known as a lock-down, is a non-pharmaceutical intervention to check the spread of disease. This paper presents ESOP (Epidemiologically and Socio-economically Optimal Policies), a novel application of active machine learning techniques using Bayesian optimization that interacts with an epidemiological model to arrive at lock-down schedules that optimally balance public health benefits and socio-economic downsides of reduced economic activity during lock-down periods. The utility of ESOP is demonstrated using case studies with VIPER (Virus-Individual-Policy-EnviRonment), a stochastic agent-based simulator that this paper also proposes. However, ESOP is flexible enough to interact with arbitrary epidemiological simulators in a black-box manner, and produce schedules that involve multiple phases of lock-downs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 178,424 |
2311.09118 | WildlifeDatasets: An open-source toolkit for animal re-identification | In this paper, we present WildlifeDatasets (https://github.com/WildlifeDatasets/wildlife-datasets) - an open-source toolkit intended primarily for ecologists and computer-vision / machine-learning researchers. The WildlifeDatasets toolkit is written in Python, allows straightforward access to publicly available wildlife datasets, and provides a wide variety of methods for dataset pre-processing, performance analysis, and model fine-tuning. We showcase the toolkit in various scenarios and baseline experiments, including, to the best of our knowledge, the most comprehensive experimental comparison of datasets and methods for wildlife re-identification, covering both local descriptors and deep learning approaches. Furthermore, we provide the first-ever foundation model for individual re-identification within a wide range of species - MegaDescriptor - that provides state-of-the-art performance on animal re-identification datasets and outperforms other pre-trained models such as CLIP and DINOv2 by a significant margin. To make the model available to the general public and to allow easy integration with any existing wildlife monitoring applications, we provide multiple MegaDescriptor flavors (i.e., Small, Medium, and Large) through the HuggingFace hub (https://huggingface.co/BVRA). | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 408,000 |
2105.07140 | NeuroGen: activation optimized image synthesis for discovery
neuroscience | Functional MRI (fMRI) is a powerful technique that has allowed us to characterize visual cortex responses to stimuli, yet such experiments are by nature constructed based on a priori hypotheses, are limited to the set of images presented to the individual while they are in the scanner, are subject to noise in the observed brain responses, and may vary widely across individuals. In this work, we propose a novel computational strategy, which we call NeuroGen, to overcome these limitations and develop a powerful tool for human vision neuroscience discovery. NeuroGen combines an fMRI-trained neural encoding model of human vision with a deep generative network to synthesize images predicted to achieve a target pattern of macro-scale brain activation. We demonstrate that the reduction of noise that the encoding model provides, coupled with the generative network's ability to produce images of high fidelity, results in a robust discovery architecture for visual neuroscience. By using only a small number of synthetic images created by NeuroGen, we demonstrate that we can detect and amplify differences in regional and individual human brain response patterns to visual stimuli. We then verify that these discoveries are reflected in the several thousand observed image responses measured with fMRI. We further demonstrate that NeuroGen can create synthetic images predicted to achieve regional response patterns not achievable by the best-matching natural images. The NeuroGen framework extends the utility of brain encoding models and opens up a new avenue for exploring, and possibly precisely controlling, the human visual system. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 235,331 |
2408.12613 | Deceptive uses of Artificial Intelligence in elections strengthen
support for AI ban | All over the world, political parties, politicians, and campaigns explore how Artificial Intelligence (AI) can help them win elections. However, the effects of these activities are unknown. We propose a framework for assessing AI's impact on elections by considering its application in various campaigning tasks. The electoral uses of AI vary widely, carrying different levels of concern and need for regulatory oversight. To account for this diversity, we group AI-enabled campaigning uses into three categories -- campaign operations, voter outreach, and deception. Using this framework, we provide the first systematic evidence from a preregistered representative survey and two preregistered experiments (n=7,635) on how Americans think about AI in elections and the effects of specific campaigning choices. We provide three significant findings: 1) the public distinguishes between different AI uses in elections, seeing them as predominantly negative but objecting most strongly to deceptive uses; 2) deceptive AI practices can have adverse effects on relevant attitudes and strengthen public support for stopping AI development; 3) although deceptive electoral uses of AI are intensely disliked, they do not result in substantial favorability penalties for the parties involved. There is thus a misalignment between the incentives for deceptive practices and their externalities. We cannot count on public opinion to provide strong enough incentives for parties to forgo tactical advantages from AI-enabled deception. There is a need for regulatory oversight and systematic outside monitoring of electoral uses of AI. Still, regulators should account for the diversity of AI uses and not completely disincentivize their electoral use. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 482,810 |
1601.01566 | Automatic Calibration of a Robot Manipulator and Multi 3D Camera System | With 3D sensing becoming cheaper, environment-aware and visually-guided robot arms capable of safely working in collaboration with humans will become common. However, reliable calibration is needed, both internal camera calibration and eye-to-hand calibration, to make sure the whole system functions correctly. We present a framework, using a novel combination of well-proven methods, allowing quick automatic calibration for the integration of systems consisting of the robot and a varying number of 3D cameras by using a standard checkerboard calibration grid. Our approach allows a quick camera-to-robot recalibration after any changes to the setup, for example when cameras or the robot have been repositioned. The modular design of the system ensures flexibility regarding the number of sensors used as well as different hardware choices. The framework has been validated by practical experiments analyzing the quality of the calibration versus the number of checkerboard positions used in each of the calibration procedures. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 50,760 |
1112.4167 | Iterative Deterministic Equivalents for the Performance Analysis of
Communication Systems | In this article, we introduce iterative deterministic equivalents as a novel technique for the performance analysis of communication systems whose channels are modeled by complex combinations of independent random matrices. This technique extends the deterministic equivalent approach for the study of functionals of large random matrices to a broader class of random matrix models which naturally arise as channel models in wireless communications. We present two specific applications: First, we consider a multi-hop amplify-and-forward (AF) MIMO relay channel with noise at each stage and derive deterministic approximations of the mutual information after the Kth hop. Second, we study a MIMO multiple access channel (MAC) where the channel between each transmitter and the receiver is represented by the double-scattering channel model. We provide deterministic approximations of the mutual information, the signal-to-interference-plus-noise ratio (SINR) and sum-rate with minimum-mean-square-error (MMSE) detection and derive the asymptotically optimal precoding matrices. In both scenarios, the approximations can be computed by simple and provably converging fixed-point algorithms and are shown to be almost surely tight in the limit when the number of antennas at each node grows infinitely large. Simulations suggest that the approximations are accurate for realistic system dimensions. The technique of iterative deterministic equivalents can be easily extended to other channel models of interest and is, therefore, also a new contribution to the field of random matrix theory. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 13,511 |
2407.17468 | WildHallucinations: Evaluating Long-form Factuality in LLMs with
Real-World Entity Queries | While hallucinations of large language models (LLMs) prevail as a major challenge, existing evaluation benchmarks on factuality do not cover the diverse domains of knowledge that the real-world users of LLMs seek information about. To bridge this gap, we introduce WildHallucinations, a benchmark that evaluates factuality. It does so by prompting LLMs to generate information about entities mined from user-chatbot conversations in the wild. These generations are then automatically fact-checked against a systematically curated knowledge source collected from web search. Notably, half of these real-world entities do not have associated Wikipedia pages. We evaluate 118,785 generations from 15 LLMs on 7,919 entities. We find that LLMs consistently hallucinate more on entities without Wikipedia pages and exhibit varying hallucination rates across different domains. Finally, given the same base models, adding a retrieval component only slightly reduces hallucinations but does not eliminate them. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 475,987 |
1605.03096 | On the Capacity Region of Multiple Access Channel | The capacity region of a multiple access channel is discussed. It was found that orthogonal multiple access and non-orthogonal multiple access have the same capacity region under the constraint of the same sum power. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,712 |
1901.09957 | OpenHowNet: An Open Sememe-based Lexical Knowledge Base | In this paper, we present an open sememe-based lexical knowledge base, OpenHowNet. Based on the well-known HowNet, OpenHowNet comprises three components: core data, which is composed of more than 100 thousand senses annotated with sememes; OpenHowNet Web, which gives a brief introduction to OpenHowNet and provides an online exhibition of OpenHowNet information; and OpenHowNet API, which includes several useful APIs for tasks such as accessing OpenHowNet core data and drawing sememe tree structures of senses. In the main text, we first give some background, including the definition of sememes and details of HowNet. We then introduce previous HowNet- and sememe-based research. Finally, we detail the constituents of OpenHowNet and their basic features and functionalities, and briefly summarize and list some future work. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 119,885 |
1708.04571 | A Machine Learning Based Intrusion Detection System for Software Defined
5G Network | As an inevitable trend of future 5G networks, Software Defined architecture has many advantages in providing centralized control and flexible resource management. But it is also confronted with various security challenges and potential threats from emerging services and technologies. As the focus of network security, Intrusion Detection Systems (IDS) are usually deployed separately without collaboration. With their limited intelligence, they are also unable to detect novel attacks, and are therefore hard-pressed to meet the needs of software-defined 5G. In this paper, we propose an intelligent intrusion detection system that takes advantage of software-defined technology and artificial intelligence, based on the Software Defined 5G architecture. It flexibly combines security function modules, which are adaptively invoked under centralized management and control with a global view. It can also deal with unknown intrusions by using machine learning algorithms. Evaluation results show that the intelligent intrusion detection system achieves better performance. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | true | 78,972 |
1511.02369 | On a class of $(\delta+\alpha u^2)$-constacyclic codes over
$\mathbb{F}_{q}[u]/\langle u^4\rangle$ | Let $\mathbb{F}_{q}$ be a finite field of cardinality $q$, $R=\mathbb{F}_{q}[u]/\langle u^4\rangle=\mathbb{F}_{q}+u\mathbb{F}_{q}+u^2\mathbb{F}_{q}+u^3\mathbb{F}_{q}$ $(u^4=0)$ which is a finite chain ring, and $n$ be a positive integer satisfying ${\rm gcd}(q,n)=1$. For any $\delta,\alpha\in \mathbb{F}_{q}^{\times}$, an explicit representation for all distinct $(\delta+\alpha u^2)$-constacyclic codes over $R$ of length $n$ is given, and the dual code for each of these codes is determined. For the case of $q=2^m$ and $\delta=1$, all self-dual $(1+\alpha u^2)$-constacyclic codes over $R$ of odd length $n$ are provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 48,623 |
1311.0351 | Rough matroids based on coverings | The introduction of covering-based rough sets has made a substantial contribution to classical rough set theory. However, many vital problems in rough sets, including attribute reduction, are NP-hard, and therefore the algorithms for solving them are usually greedy. The matroid, as a generalization of linear independence in vector spaces, has a variety of applications in many fields such as algorithm design and combinatorial optimization. An excellent introduction to the topic of rough matroids is due to Zhu and Wang. On the basis of their work, we study the rough matroids based on coverings in this paper. First, we investigate some properties of the definable sets with respect to a covering. Specifically, it is interesting that the set of all definable sets with respect to a covering, equipped with the binary relation of inclusion $\subseteq$, forms a lattice. Second, we propose the rough matroids based on coverings, which are a generalization of the rough matroids based on relations. Finally, some properties of rough matroids based on coverings are explored. Moreover, an equivalent formulation of rough matroids based on coverings is presented. These interesting and important results exhibit many potential connections between rough sets and matroids. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 28,146 |
1411.0482 | Reduction of CRB in Arbitrary Pre-designed Arrays Using Alter an Element
Position | Simultaneous estimation of range and angle of close emitters usually requires a multidimensional search. This paper offers an algorithm to improve the position of an element of any array designed on the basis of certain or random rules. In the proposed method, one element moves along its original direction, i.e., keeping the vertical distance to each source, to reach a constellation with a lower CRB. The performance of this method is demonstrated through simulation, and a comparison of the CRB with the determinant of the received-signal covariance matrix is made before and after the use of this method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 37,258 |
2210.05338 | FusionDeepMF: A Dual Embedding based Deep Fusion Model for
Recommendation | Traditional Collaborative Filtering (CF) based methods are applied to understand the personal preferences of users/customers for items or products from the rating matrix. Usually, the rating matrix is sparse in nature, so some improved variants of the CF method apply an increasing amount of side information to handle the sparsity problem. Most of the available recommendation-related work applies either only linear kernels or only non-linear kernels to learn user-item latent feature embeddings from data, but neither alone is sufficient to learn complex user-item features from the side information of users. Recently, some researchers have focused on hybrid models that learn some features with non-linear kernels and other features with linear kernels. But it is very difficult to understand which features can be learned accurately with linear kernels or with non-linear kernels. To overcome this problem, we propose a novel deep fusion model named FusionDeepMF; the novel attempts of this model are i) learning the user-item rating matrix and side information through linear and non-linear kernels simultaneously, and ii) the application of a tuning parameter determining the trade-off between the dual embeddings generated from the linear and non-linear kernels. Extensive experiments on online review datasets establish that FusionDeepMF markedly outperforms other baseline approaches. Empirical evidence also shows that FusionDeepMF achieves better performance compared to the linear kernels of Matrix Factorization (MF) and the non-linear kernels of Multi-layer Perceptron (MLP). | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 322,813 |
1603.04586 | Optimal Sensing via Multi-armed Bandit Relaxations in Mixed
Observability Domains | Sequential decision making under uncertainty is studied in a mixed observability domain. The goal is to maximize the amount of information obtained on a partially observable stochastic process under constraints imposed by a fully observable internal state. An upper bound for the optimal value function is derived by relaxing constraints. We identify conditions under which the relaxed problem is a multi-armed bandit whose optimal policy is easily computable. The upper bound is applied to prune the search space in the original problem, and the effect on solution quality is assessed via simulation experiments. Empirical results show effective pruning of the search space in a target monitoring domain. | false | false | false | false | true | false | false | true | false | false | true | false | false | false | false | false | false | false | 53,263 |
2209.11870 | Leveraging Self-Supervised Training for Unintentional Action Recognition | Unintentional actions are rare occurrences that are difficult to define precisely and that are highly dependent on the temporal context of the action. In this work, we explore such actions and seek to identify the points in videos where the actions transition from intentional to unintentional. We propose a multi-stage framework that exploits inherent biases such as motion speed, motion direction, and order to recognize unintentional actions. To enhance representations via self-supervised training for the task of unintentional action recognition, we propose temporal transformations, called Temporal Transformations of Inherent Biases of Unintentional Actions (T2IBUA). The multi-stage approach models the temporal information on both the level of individual frames and full clips. These enhanced representations show strong performance for unintentional action recognition tasks. We provide an extensive ablation study of our framework and report results that significantly improve over the state-of-the-art. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 319,324 |
2501.18377 | Using Read Promotion and Mixed Isolation Levels for Performant Yet
Serializable Execution of Transaction Programs | We propose a theory that can determine the lowest isolation level that can be allocated to each transaction program in an application in a mixed-isolation-level setting, to guarantee that all executions will be serializable and thus preserve all integrity constraints, even those that are not explicitly declared. This extends prior work applied to completely known transactions, to deal with the realistic situation where transactions are generated by running programs with parameters that are not known in advance. Using our theory, we propose an optimization method that allows for high throughput while ensuring that all executions are serializable. Our method is based on searching for application code modifications that are semantics-preserving while improving the isolation level allocation. We illustrate our approach on the SmallBank benchmark. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 528,680 |
2212.14117 | Improving a sequence-to-sequence nlp model using a reinforcement
learning policy algorithm | Current neural network models of dialogue generation (chatbots) show great promise for generating answers for conversational agents. But they are short-sighted in that they predict utterances one at a time while disregarding their impact on future outcomes. Modelling a dialogue's future direction is critical for generating coherent, interesting dialogues, a need that has led traditional NLP dialogue models to rely on reinforcement learning. In this article, we explain how to combine these objectives by using deep reinforcement learning to predict future rewards in chatbot dialogue. The model simulates conversations between two virtual agents, with policy gradient methods used to reward sequences that exhibit three useful conversational characteristics: informativity, coherence, and simplicity of response (related to the forward-looking function). We assess our model based on the diversity, length, and complexity of its responses relative to humans. In dialogue simulation, evaluations demonstrated that the proposed model generates more interactive responses and encourages a more sustained, successful conversation. This work marks a preliminary step toward developing a neural conversational model based on the long-term success of dialogues. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 338,508 |
2108.00034 | A Training-Based Mutual Information Lower Bound for Large-Scale Systems | We provide a mutual information lower bound that can be used to analyze the effect of training in models with unknown parameters. For large-scale systems, we show that this bound can be calculated using the difference between two derivatives of a conditional entropy function. The bound does not require explicit estimation of the unknown parameters. We provide a step-by-step process for computing the bound, and provide an example application. A comparison with known classical mutual information bounds is provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 248,574 |
1310.4734 | On Robustness Analysis of Stochastic Biochemical Systems by
Probabilistic Model Checking | This report proposes a novel framework for the rigorous robustness analysis of stochastic biochemical systems. The technique is based on probabilistic model checking. We adapt the general definition of robustness introduced by Kitano to the class of stochastic systems modelled as continuous-time Markov chains in order to extensively analyse and compare the robustness of biological models with uncertain parameters. The framework utilises novel computational methods that enable effective evaluation of the robustness of models with respect to quantitative temporal properties and parameters such as reaction rate constants and initial conditions. The framework is applied to gene regulation as an example of a central biological mechanism where intrinsic and extrinsic stochasticity plays a crucial role due to low numbers of DNA and RNA molecules. Using our methods, we have obtained a comprehensive and precise analysis of stochastic dynamics under parameter uncertainty. Furthermore, we apply our framework to compare several variants of two-component signalling networks from the perspective of robustness with respect to intrinsic noise caused by low populations of signalling components. We extend previous studies performed on deterministic (ODE) models and show that stochasticity may significantly affect the obtained predictions. Our case studies demonstrate that the framework can provide deeper insight into the role of key parameters in maintaining system functionality, and thus it significantly contributes to formal methods in computational systems biology. | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 27,835 |
1811.08201 | CGNet: A Light-weight Context Guided Network for Semantic Segmentation | The demand for applying semantic segmentation models on mobile devices has been increasing rapidly. Current state-of-the-art networks have an enormous number of parameters and are hence unsuitable for mobile devices, while other small-memory-footprint models follow the spirit of classification networks and ignore the inherent characteristics of semantic segmentation. To tackle this problem, we propose a novel Context Guided Network (CGNet), which is a light-weight and efficient network for semantic segmentation. We first propose the Context Guided (CG) block, which learns the joint feature of both local feature and surrounding context, and further improves the joint feature with the global context. Based on the CG block, we develop CGNet, which captures contextual information in all stages of the network and is specially tailored for increasing segmentation accuracy. CGNet is also elaborately designed to reduce the number of parameters and save memory footprint. Under an equivalent number of parameters, the proposed CGNet significantly outperforms existing segmentation networks. Extensive experiments on the Cityscapes and CamVid datasets verify the effectiveness of the proposed approach. Specifically, without any post-processing and multi-scale testing, the proposed CGNet achieves 64.8% mean IoU on Cityscapes with less than 0.5 M parameters. The source code for the complete system can be found at https://github.com/wutianyiRosun/CGNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 113,983 |
2002.02519 | Data-Driven False Data Injection Attacks Against Power Grids: A Random
Matrix Approach | We address the problem of constructing false data injection (FDI) attacks that can bypass the bad data detector (BDD) of a power grid. The attacker is assumed to have access to only power flow measurement data traces (collected over a limited period of time) and no other prior knowledge about the grid. Existing related algorithms are formulated under the assumption that the attacker has access to measurements collected over a long (asymptotically infinite) time period, which may not be realistic. We show that these approaches do not perform well when the attacker has a limited number of data samples only. We design an enhanced algorithm to construct FDI attack vectors in the face of limited measurements that can nevertheless bypass the BDD with high probability. The algorithm design is guided by results from random matrix theory. Furthermore, we characterize an important trade-off between the attack's BDD-bypass probability and its sparsity, which affects the spatial extent of the attack that must be achieved. Extensive simulations using data traces collected from the MATPOWER simulator and benchmark IEEE bus systems validate our findings. | false | false | false | false | false | false | false | false | false | true | true | false | true | false | false | false | false | false | 162,939 |
2110.14317 | Ask "Who", Not "What": Bitcoin Volatility Forecasting with Twitter Data | Understanding the variations in trading price (volatility), and its response to exogenous information, is a well-researched topic in finance. In this study, we focus on finding stable and accurate volatility predictors for a relatively new asset class of cryptocurrencies, in particular Bitcoin, using deep learning representations of public social media data obtained from Twitter. For our experiments, we extracted semantic information and user statistics from over 30 million Bitcoin-related tweets, in conjunction with 15-minute frequency price data over a horizon of 144 days. Using this data, we built several deep learning architectures that utilized different combinations of the gathered information. For each model, we conducted ablation studies to assess the influence of different components and feature sets on the prediction accuracy. We found statistical evidence for the hypotheses that: (i) temporal convolutional networks perform significantly better than both classical autoregressive models and other deep learning-based architectures in the literature, and (ii) tweet author meta-information, even detached from the tweet itself, is a better predictor of volatility than the semantic content and tweet volume statistics. We demonstrate how different information sets gathered from social media can be utilized in different architectures and how they affect the prediction results. As an additional contribution, we make our dataset public for future research. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 263,497 |
1612.05734 | Web-based Semantic Similarity for Emotion Recognition in Web Objects | In this project we propose a new approach for emotion recognition using web-based similarity (e.g. confidence, PMI and PMING). We aim to extract basic emotions from short sentences with emotional content (e.g. news titles, tweets, captions), performing a web-based quantitative evaluation of semantic proximity between each word of the analyzed sentence and each emotion of a psychological model (e.g. Plutchik, Ekman, Lovheim). The phases of the extraction include: text preprocessing (tokenization, stop words, filtering), automated search engine queries, HTML parsing of results (i.e. scraping), estimation of semantic proximity, and ranking of emotions according to proximity measures. The main idea is that, since it is possible to generalize semantic similarity under the assumption that similar concepts co-occur in documents indexed in search engines, emotions can also be generalized in the same way, through tags or terms that express them in a particular language, allowing emotions to be ranked. Training results are compared to human evaluation; then additional comparative tests on results are performed, both for the global ranking correlation (e.g. Kendall, Spearman, Pearson) and for the evaluation of the emotion linked to each single word. Different from sentiment analysis, our approach works at a deeper level of abstraction, aiming at recognizing specific emotions and not only the positive/negative sentiment, in order to predict emotions as semantic data. | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 65,720 |
2410.09802 | EBDM: Exemplar-guided Image Translation with Brownian-bridge Diffusion
Models | Exemplar-guided image translation, synthesizing photo-realistic images that conform to both structural control and style exemplars, is attracting attention due to its ability to enhance user control over style manipulation. Previous methodologies have predominantly depended on establishing dense correspondences across cross-domain inputs. Despite these efforts, they incur quadratic memory and computational costs for establishing dense correspondence, resulting in limited versatility and performance degradation. In this paper, we propose a novel approach termed Exemplar-guided Image Translation with Brownian-Bridge Diffusion Models (EBDM). Our method formulates the task as a stochastic Brownian bridge process, a diffusion process with a fixed initial point as structure control, and translates it into the corresponding photo-realistic image while being conditioned solely on the given exemplar image. To efficiently guide the diffusion process toward the style of the exemplar, we delineate three pivotal components: the Global Encoder, the Exemplar Network, and the Exemplar Attention Module, to incorporate global and detailed texture information from exemplar images. Leveraging Bridge diffusion, the network can translate images from structure control while exclusively conditioned on the exemplar style, leading to more robust training and inference processes. We illustrate the superiority of our method over competing approaches through comprehensive benchmark evaluations and visual results. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 497,778 |
2410.05514 | Toward General Object-level Mapping from Sparse Views with 3D Diffusion
Priors | Object-level mapping builds a 3D map of objects in a scene with detailed shapes and poses from multi-view sensor observations. Conventional methods struggle to build complete shapes and estimate accurate poses due to partial occlusions and sensor noise. They require dense observations to cover all objects, which is challenging to achieve in robotics trajectories. Recent work introduces generative shape priors for object-level mapping from sparse views, but is limited to single-category objects. In this work, we propose a General Object-level Mapping system, GOM, which leverages a 3D diffusion model as shape prior with multi-category support and outputs Neural Radiance Fields (NeRFs) for both texture and geometry for all objects in a scene. GOM includes an effective formulation to guide a pre-trained diffusion model with extra nonlinear constraints from sensor measurements without finetuning. We also develop a probabilistic optimization formulation to fuse multi-view sensor observations and diffusion priors for joint 3D object pose and shape estimation. Our GOM system demonstrates superior multi-category mapping performance from sparse views, and achieves more accurate mapping results compared to state-of-the-art methods on the real-world benchmarks. We will release our code: https://github.com/TRAILab/GeneralObjectMapping. | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | false | false | false | 495,772 |
2411.15626 | Aligning Generalisation Between Humans and Machines | Recent advances in AI -- including generative approaches -- have resulted in technology that can support humans in scientific discovery and decision support but may also disrupt democracies and target individuals. The responsible use of AI increasingly shows the need for human-AI teaming, necessitating effective interaction between humans and machines. A crucial yet often overlooked aspect of these interactions is the different ways in which humans and machines generalise. In cognitive science, human generalisation commonly involves abstraction and concept learning. In contrast, AI generalisation encompasses out-of-domain generalisation in machine learning, rule-based reasoning in symbolic AI, and abstraction in neuro-symbolic AI. In this perspective paper, we combine insights from AI and cognitive science to identify key commonalities and differences across three dimensions: notions of generalisation, methods for generalisation, and evaluation of generalisation. We map the different conceptualisations of generalisation in AI and cognitive science along these three dimensions and consider their role in human-AI teaming. This results in interdisciplinary challenges across AI and cognitive science that must be tackled to provide a foundation for effective and cognitively supported alignment in human-AI teaming scenarios. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 510,694 |
1610.05854 | Mixed context networks for semantic segmentation | Semantic segmentation is challenging as it requires both object-level information and pixel-level accuracy. Recently, FCN-based systems have gained great improvement in this area. Unlike in classification networks, combining features of different layers plays an important role in these dense prediction models, as these features contain information of different levels. A number of models have been proposed to show how to use these features. However, the best architecture for making use of features from different layers remains an open question. In this paper, we propose a module, called the mixed context network, and show that our presented system outperforms most existing semantic segmentation systems by making use of this module. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 62,572 |
2405.06277 | Learning A Spiking Neural Network for Efficient Image Deraining | Recently, spiking neural networks (SNNs) have demonstrated substantial potential in computer vision tasks. In this paper, we present an Efficient Spiking Deraining Network, called ESDNet. Our work is motivated by the observation that rain pixel values lead to a more pronounced intensity of spike signals in SNNs. However, directly applying deep SNNs to the image deraining task still remains a significant challenge. This is attributed to the information loss and training difficulties that arise from discrete binary activation and complex spatio-temporal dynamics. To this end, we develop a spiking residual block to convert the input into spike signals, then adaptively optimize the membrane potential by introducing attention weights to adjust spike responses in a data-driven manner, alleviating information loss caused by discrete binary activation. In this way, our ESDNet can effectively detect and analyze the characteristics of rain streaks by learning their fluctuations. This also enables better guidance for the deraining process and facilitates high-quality image reconstruction. Instead of relying on the ANN-SNN conversion strategy, we introduce a gradient proxy strategy to directly train the model, overcoming the challenge of training. Experimental results show that our approach gains comparable performance against ANN-based methods while reducing energy consumption by 54%. The source code is available at https://github.com/MingTian99/ESDNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,238 |
1711.09822 | Scalable Object Detection for Stylized Objects | Following recent breakthroughs in convolutional neural networks and monolithic model architectures, state-of-the-art object detection models can reliably and accurately scale into the realm of up to thousands of classes. Things quickly break down, however, when scaling into the tens of thousands, or, eventually, to millions or billions of unique objects. Further, bounding box-trained end-to-end models require extensive training data. Even though - with some tricks using hierarchies - one can sometimes scale up to thousands of classes, the labor requirements for clean image annotations quickly get out of control. In this paper, we present a two-layer object detection method for brand logos and other stylized objects for which prototypical images exist. It can scale to large numbers of unique classes. Our first layer is a CNN from the Single Shot Multibox Detector family of models that learns to propose regions where some stylized object is likely to appear. The contents of a proposed bounding box are then run against an image index that is targeted for the retrieval task at hand. The proposed architecture scales to a large number of object classes, allows new classes to be added continuously without retraining, and exhibits state-of-the-art quality on a stylized object detection task such as logo recognition. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 85,475 |
1701.03458 | An Asynchronous Parallel Approach to Sparse Recovery | Asynchronous parallel computing and sparse recovery are two areas that have received recent interest. Asynchronous algorithms are often studied to solve optimization problems where the cost function takes the form $\sum_{i=1}^M f_i(x)$, with a common assumption that each $f_i$ is sparse; that is, each $f_i$ acts only on a small number of components of $x\in\mathbb{R}^n$. Sparse recovery problems, such as compressed sensing, can be formulated as optimization problems, however, the cost functions $f_i$ are dense with respect to the components of $x$, and instead the signal $x$ is assumed to be sparse, meaning that it has only $s$ non-zeros where $s\ll n$. Here we address how one may use an asynchronous parallel architecture when the cost functions $f_i$ are not sparse in $x$, but rather the signal $x$ is sparse. We propose an asynchronous parallel approach to sparse recovery via a stochastic greedy algorithm, where multiple processors asynchronously update a vector in shared memory containing information on the estimated signal support. We include numerical simulations that illustrate the potential benefits of our proposed asynchronous method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 66,708 |
2302.09655 | PAPRAS: Plug-And-Play Robotic Arm System | This paper presents a novel robotic arm system, named PAPRAS (Plug-And-Play Robotic Arm System). PAPRAS consists of a portable robotic arm(s), docking mount(s), and software architecture including a control system. By analyzing the target task spaces at home, the dimensions and configuration of PAPRAS are determined. PAPRAS's arm is light (less than 6kg) with an optimized 3D-printed structure, and it has a high payload (3kg) as a human-arm-sized manipulator. A locking mechanism is embedded in the structure for better portability and the 3D-printed docking mount can be installed easily. PAPRAS's software architecture is developed on an open-source framework and optimized for low-latency multiagent-based distributed manipulator control. A process to create new demonstrations is presented to show PAPRAS's ease of use and efficiency. In the paper, simulations and hardware experiments are presented in various demonstrations, including sink-to-dishwasher manipulation, coffee making, mobile manipulation on a quadruped, and suit-up demo to validate the hardware and software design. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 346,512 |
2402.02441 | TopoX: A Suite of Python Packages for Machine Learning on Topological Domains | We introduce TopoX, a Python software suite that provides reliable and user-friendly building blocks for computing and machine learning on topological domains that extend graphs: hypergraphs, simplicial, cellular, path and combinatorial complexes. TopoX consists of three packages: TopoNetX facilitates constructing and computing on these domains, including working with nodes, edges and higher-order cells; TopoEmbedX provides methods to embed topological domains into vector spaces, akin to popular graph-based embedding algorithms such as node2vec; TopoModelX is built on top of PyTorch and offers a comprehensive toolbox of higher-order message passing functions for neural networks on topological domains. The extensively documented and unit-tested source code of TopoX is available under MIT license at https://pyt-team.github.io/. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 426,563
2302.12388 | TrafFormer: A Transformer Model for Predicting Long-term Traffic | Traffic prediction is a flourishing research field due to its importance in human mobility in the urban space. Despite this, existing studies only focus on short-term prediction of up to a few hours in advance, with most being up to one hour only. Long-term traffic prediction can enable more comprehensive, informed, and proactive measures against traffic congestion and is therefore an important task to explore. In this paper, we explore the task of long-term traffic prediction, where we predict traffic up to 24 hours in advance. We note the weaknesses of existing models, which are based on recurrent structures, for long-term traffic prediction and propose a modified Transformer model "TrafFormer". Experiments comparing our model with existing hybrid neural network models show the superiority of our model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 347,544
1301.0575 | CFW: A Collaborative Filtering System Using Posteriors Over Weights Of Evidence | We describe CFW, a computationally efficient algorithm for collaborative filtering that uses posteriors over weights of evidence. In experiments on real data, we show that this method predicts as well or better than other methods in situations where the size of the user query is small. The new approach works particularly well when the user's query contains low-frequency (unpopular) items. The approach complements that of dependency networks, which perform well when the size of the query is large. Also in this paper, we argue that the use of posteriors over weights of evidence is a natural way to recommend similar items in the collaborative-filtering task. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 20,757
2303.06050 | Optimal foraging strategies can be learned | The foraging behavior of animals is a paradigm of target search in nature. Understanding which foraging strategies are optimal and how animals learn them are central challenges in modeling animal foraging. While the question of optimality has wide-ranging implications across fields such as economy, physics, and ecology, the question of learnability is a topic of ongoing debate in evolutionary biology. Recognizing the interconnected nature of these challenges, this work addresses them simultaneously by exploring optimal foraging strategies through a reinforcement learning framework. To this end, we model foragers as learning agents. We first prove theoretically that maximizing rewards in our reinforcement learning model is equivalent to optimizing foraging efficiency. We then show with numerical experiments that, in the paradigmatic model of non-destructive search, our agents learn foraging strategies which outperform the efficiency of some of the best known strategies such as Lévy walks. These findings highlight the potential of reinforcement learning as a versatile framework not only for optimizing search strategies but also to model the learning process, thus shedding light on the role of learning in natural optimization processes. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 350,680
1602.04431 | Distributed Query Processing Plans generation using Teacher Learner Based Optimization | With their growing popularity, the number of data sources and the amount of data have grown very fast in recent years. The distribution of operational data across disperse data sources imposes a challenge on processing user queries. In such database systems, the relations required to answer a query may be stored at multiple sites. This leads to an exponential increase in the number of possible equivalent plans for a user query. Since it is not computationally reasonable to explore all possible query plans exhaustively in a large search space, a strategy is required to produce optimal query plans in distributed database systems. The most cost-effective query plan must be generated for a given query. This paper attempts to generate such optimal query plans using a parameter-less optimization technique, Teaching-Learning-Based Optimization (TLBO). The TLBO algorithm has been observed to outperform other optimization algorithms on multi-objective unconstrained and constrained benchmark problems. Experimental comparison of TLBO-based optimal plan generation with a multi-objective genetic algorithm based distributed query plan generation algorithm shows that, for higher numbers of relations, the TLBO-based algorithm generates comparatively better-quality top-K query plans. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 52,131
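Editor's note: TLBO itself is simple enough to sketch: a teacher phase pulls the population toward the best solution, and a learner phase lets solutions learn from random peers, with greedy acceptance after each move. The sketch below assumes query plans are encoded as real vectors scored by a cost function f; it is a generic TLBO step, not the paper's implementation.

```python
import numpy as np

def tlbo_step(pop, f, rng):
    """One Teaching-Learning-Based Optimization iteration (minimization).
    pop: (n, d) array of candidate solutions, e.g. real-vector plan encodings;
    f: cost function mapping a d-vector to a scalar."""
    n, d = pop.shape
    fit = np.array([f(x) for x in pop])
    teacher = pop[fit.argmin()].copy()

    # Teacher phase: pull each learner toward the teacher, away from the mean.
    tf = rng.integers(1, 3)                       # teaching factor in {1, 2}
    cand = pop + rng.random((n, d)) * (teacher - tf * pop.mean(axis=0))
    for i in range(n):                            # greedy acceptance
        c = f(cand[i])
        if c < fit[i]:
            pop[i], fit[i] = cand[i], c

    # Learner phase: each learner moves toward a randomly chosen better peer.
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
        c_vec = pop[i] + rng.random(d) * step
        c = f(c_vec)
        if c < fit[i]:
            pop[i], fit[i] = c_vec, c
    return pop
```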
2306.10880 | Explaining the Model and Feature Dependencies by Decomposition of the Shapley Value | Shapley values have become one of the go-to methods to explain complex models to end-users. They provide a model agnostic post-hoc explanation with foundations in game theory: what is the worth of a player (in machine learning, a feature value) in the objective function (the output of the complex machine learning model). One downside is that they always require outputs of the model when some features are missing. These are usually computed by taking the expectation over the missing features. This however introduces a non-trivial choice: do we condition on the unknown features or not? In this paper we examine this question and claim that they represent two different explanations which are valid for different end-users: one that explains the model and one that explains the model combined with the feature dependencies in the data. We propose a new algorithmic approach to combine both explanations, removing the burden of choice and enhancing the explanatory power of Shapley values, and show that it achieves intuitive results on simple problems. We apply our method to two real-world datasets and discuss the explanations. Finally, we demonstrate how our method is either equivalent or superior to state-of-the-art Shapley value implementations while simultaneously allowing for increased insight into the model-data structure. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 374,400
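Editor's note: the choice the row above highlights is which expectation fills in the missing features when computing a coalition's value v(S). Marginalizing with a fixed background distribution explains the model alone; conditioning on the observed features folds the data's dependencies into the explanation. A minimal sketch of the two value functions follows; the Gaussian kernel weighting is an assumed stand-in for a proper conditional estimator.

```python
import numpy as np

def v_interventional(model, x, S, background):
    """v(S) with features outside S drawn from background samples;
    this marginal expectation explains the model in isolation."""
    X = background.copy()
    X[:, S] = x[S]                           # pin the coalition S to the instance
    return model(X).mean()

def v_conditional(model, x, S, data, bandwidth=0.5):
    """v(S) approximating E[f(X) | X_S = x_S] by kernel-weighting the data;
    this folds feature dependencies into the explanation."""
    d2 = ((data[:, S] - x[S]) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2)) + 1e-12
    return np.average(model(data), weights=w)
```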
2201.11110 | Spin Wave Electromagnetic Nano-Antenna Enabled by Tripartite Phonon-Magnon-Photon Coupling | We investigate tripartite coupling between phonons, magnons and photons in a periodic array of elliptical magnetostrictive nanomagnets delineated on a piezoelectric substrate to form a two-dimensional two-phase multiferroic crystal. A surface acoustic wave (phonons) of 5 - 35 GHz frequency launched into the substrate causes the magnetizations of the nanomagnets to precess at the frequency of the wave, giving rise to spin waves (magnons). The spin waves, in turn, radiate electromagnetic waves (photons) into the surrounding space at the surface acoustic wave frequency. Here, the phonons couple into magnons, which then couple into photons. This tripartite phonon-magnon-photon coupling is exploited to implement an extreme sub-wavelength electromagnetic antenna whose measured radiation efficiency and antenna gain exceed the theoretical limits for traditional antennas by more than two orders of magnitude at some frequencies. Micro-magnetic simulations are in excellent agreement with experimental observations and provide insight into the spin wave modes that couple into radiating electromagnetic modes to implement the antenna. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 277,187
1711.08241 | 3D Point Cloud Classification and Segmentation using 3D Modified Fisher Vector Representation for Convolutional Neural Networks | The point cloud is gaining prominence as a method for representing 3D shapes, but its irregular format poses a challenge for deep learning methods. The common solution of transforming the data into a 3D voxel grid introduces its own challenges, mainly large memory size. In this paper we propose a novel 3D point cloud representation called 3D Modified Fisher Vectors (3DmFV). Our representation is hybrid as it combines the discrete structure of a grid with continuous generalization of Fisher vectors, in a compact and computationally efficient way. Using the grid enables us to design a new CNN architecture for point cloud classification and part segmentation. In a series of experiments we demonstrate performance that is competitive with, or better than, the state of the art on challenging benchmark datasets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,168
1711.09874 | Divide-and-Conquer Reinforcement Learning | Standard model-free deep reinforcement learning (RL) algorithms sample a new initial state for each trial, allowing them to optimize policies that can perform well even in highly stochastic environments. However, problems that exhibit considerable initial state variation typically produce high-variance gradient estimates for model-free RL, making direct policy or value function optimization challenging. In this paper, we develop a novel algorithm that instead partitions the initial state space into "slices", and optimizes an ensemble of policies, each on a different slice. The ensemble is gradually unified into a single policy that can succeed on the whole state space. This approach, which we term divide-and-conquer RL, is able to solve complex tasks where conventional deep RL methods are ineffective. Our results show that divide-and-conquer RL greatly outperforms conventional policy gradient methods on challenging grasping, manipulation, and locomotion tasks, and exceeds the performance of a variety of prior methods. Videos of policies learned by our algorithm can be viewed at http://bit.ly/dnc-rl | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 85,485 |
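Editor's note: the unification step in the row above admits a compact sketch: each slice policy acts as a frozen teacher on states from its own slice, and the central policy is trained to match it via a KL term. This is a generic distillation loss under assumed discrete actions, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def unification_loss(pairs):
    """Distill an ensemble of per-slice policies into one central policy.
    pairs: list of (central_logits, slice_logits), each evaluated on a batch
    of states sampled from that slice of the initial state space."""
    losses = []
    for central_logits, slice_logits in pairs:
        target = F.softmax(slice_logits, dim=-1).detach()   # slice expert, frozen
        log_q = F.log_softmax(central_logits, dim=-1)
        losses.append(F.kl_div(log_q, target, reduction="batchmean"))
    return torch.stack(losses).mean()
```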
2204.03270 | Multi-scale Context-aware Network with Transformer for Gait Recognition | Although gait recognition has drawn increasing research attention recently, since the silhouette differences are quite subtle in the spatial domain, temporal feature representation is crucial for gait recognition. Inspired by the observation that humans can distinguish gaits of different subjects by adaptively focusing on clips of varying time scales, we propose a multi-scale context-aware network with transformer (MCAT) for gait recognition. MCAT generates temporal features across three scales, and adaptively aggregates them using contextual information from both local and global perspectives. Specifically, MCAT contains an adaptive temporal aggregation (ATA) module that performs local relation modeling followed by global relation modeling to fuse the multi-scale features. Besides, in order to remedy the spatial feature corruption resulting from temporal operations, MCAT incorporates a salient spatial feature learning (SSFL) module to select groups of discriminative spatial features. Extensive experiments conducted on three datasets demonstrate the state-of-the-art performance. Concretely, we achieve rank-1 accuracies of 98.7%, 96.2% and 88.7% under normal walking, bag-carrying and coat-wearing conditions on CASIA-B, 97.5% on OU-MVLP and 50.6% on GREW. The source code will be available at https://github.com/zhuduowang/MCAT.git. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 290,243
2005.05487 | Exploring TTS without T Using Biologically/Psychologically Motivated Neural Network Modules (ZeroSpeech 2020) | In this study, we report our exploration of Text-To-Speech without Text (TTS without T) in the Zero Resource Speech Challenge 2020, in which participants proposed an end-to-end, unsupervised system that learned speech recognition and TTS together. We addressed the challenge using biologically/psychologically motivated modules of Artificial Neural Networks (ANN), with a particular interest in unsupervised learning of human language as a biological/psychological problem. The system first processes Mel Frequency Cepstral Coefficient (MFCC) frames with an Echo-State Network (ESN), and simulates computations in cortical microcircuits. The outcome is discretized by our original Variational Autoencoder (VAE) that implements the Dirichlet-based Bayesian clustering widely accepted in computational linguistics and cognitive science. The discretized signal is then reverted into sound waveform via a neural-network implementation of the source-filter model for speech production. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 176,741
2303.07099 | Beyond Fish and Bicycles: Exploring the Varieties of Online Women's Ideological Spaces | The Internet has been instrumental in connecting under-represented and vulnerable groups of people. Platforms built to foster social interaction and engagement have enabled historically disenfranchised groups to have a voice. One such vulnerable group is women. In this paper, we explore the diversity in online women's ideological spaces using a multi-dimensional approach. We perform a large-scale, data-driven analysis of over 6M Reddit comments and submissions from 14 subreddits. We elicit a diverse taxonomy of online women's ideological spaces, ranging from counterparts to the so-called Manosphere to Gender-Critical Feminism. We then perform content analysis, finding meaningful differences across topics and communities. Finally, we shed light on two platforms, ovarit.com and thepinkpill.co, where two toxic communities of online women's ideological spaces (Gender-Critical Feminism and Femcels) migrated after their ban on Reddit. | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | false | false | false | 351,113
2106.07803 | SynthASR: Unlocking Synthetic Data for Speech Recognition | End-to-end (E2E) automatic speech recognition (ASR) models have recently demonstrated superior performance over the traditional hybrid ASR models. Training an E2E ASR model requires a large amount of data which is not only expensive but may also raise dependency on production data. At the same time, synthetic speech generated by the state-of-the-art text-to-speech (TTS) engines has advanced to near-human naturalness. In this work, we propose to utilize synthetic speech for ASR training (SynthASR) in applications where data is sparse or hard to get for ASR model training. In addition, we apply continual learning with a novel multi-stage training strategy to address catastrophic forgetting, achieved by a mix of weighted multi-style training, data augmentation, encoder freezing, and parameter regularization. In our experiments conducted on in-house datasets for a new application of recognizing medication names, training ASR RNN-T models with synthetic audio via the proposed multi-stage training improved the recognition performance on the new application by more than 65% relative, without degradation on existing general applications. Our observations show that SynthASR holds great promise in training the state-of-the-art large-scale E2E ASR models for new applications while reducing the costs and dependency on production data. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 241,051
2004.10240 | Deep Learning for Time Series Forecasting: Tutorial and Literature Survey | Deep learning based forecasting methods have become the methods of choice in many applications of time series prediction or forecasting, often outperforming other approaches. Consequently, over the last few years, these methods have become ubiquitous in large-scale industrial forecasting applications and have consistently ranked among the best entries in forecasting competitions (e.g., M4 and M5). This practical success has further increased the academic interest to understand and improve deep forecasting methods. In this article we provide an introduction and overview of the field: We present important building blocks for deep forecasting in some depth; using these building blocks, we then survey the breadth of the recent deep forecasting literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 173,577
2009.08453 | MEAL V2: Boosting Vanilla ResNet-50 to 80%+ Top-1 Accuracy on ImageNet without Tricks | We introduce a simple yet effective distillation framework that is able to boost the vanilla ResNet-50 to 80%+ Top-1 accuracy on ImageNet without tricks. We construct such a framework through analyzing the problems in the existing classification system and simplify the base method, ensemble knowledge distillation via discriminators, by: (1) adopting the similarity loss and discriminator only on the final outputs and (2) using the average of softmax probabilities from all teacher ensembles as the stronger supervision. Intriguingly, three novel perspectives are presented for distillation: (1) weight decay can be weakened or even completely removed since the soft label also has a regularization effect; (2) using a good initialization for students is critical; and (3) one-hot/hard label is not necessary in the distillation process if the weights are well initialized. We show that such a straightforward framework can achieve state-of-the-art results without involving any commonly-used techniques, such as architecture modification; outside training data beyond ImageNet; autoaug/randaug; cosine learning rate; mixup/cutmix training; label smoothing; etc. Our method obtains 80.67% top-1 accuracy on ImageNet using a single crop-size of 224x224 with vanilla ResNet-50, outperforming the previous state of the art by a significant margin under the same network structure. Our result can be regarded as a strong baseline using knowledge distillation, and to the best of our knowledge, this is also the first method that is able to boost vanilla ResNet-50 to surpass 80% on ImageNet without architecture modification or additional training data. On smaller ResNet-18, our distillation framework consistently improves from 69.76% to 73.19%, which shows tremendous practical values in real-world applications. Our code and models are available at: https://github.com/szq0214/MEAL-V2. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 196,247
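Editor's note: the supervision signal described in the row above, the averaged softmax over the teacher ensemble with no hard label, reduces to a few lines. A hedged PyTorch sketch follows; the discriminator-based similarity term from the full method is deliberately left out.

```python
import torch
import torch.nn.functional as F

def ensemble_soft_label_loss(student_logits, teacher_logits_list):
    """Distill from an ensemble: supervise the student with the averaged
    softmax of all teachers, with no one-hot/hard-label term."""
    with torch.no_grad():
        soft_label = torch.stack(
            [F.softmax(t, dim=1) for t in teacher_logits_list]).mean(dim=0)
    return F.kl_div(F.log_softmax(student_logits, dim=1),
                    soft_label, reduction="batchmean")
```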
2211.14485 | FastHuman: Reconstructing High-Quality Clothed Human in Minutes | We propose an approach for optimizing high-quality clothed human body shapes in minutes, using multi-view posed images. While traditional neural rendering methods struggle to disentangle geometry and appearance using only rendering loss, and are computationally intensive, our method uses a mesh-based patch warping technique to ensure multi-view photometric consistency, and sphere harmonics (SH) illumination to refine geometric details efficiently. We employ an oriented point cloud shape representation and SH shading, which significantly reduces optimization and rendering times compared to implicit methods. Our approach has demonstrated promising results on both synthetic and real-world datasets, making it an effective solution for rapidly generating high-quality human body shapes. Project page: https://l1346792580123.github.io/nccsfs/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 332,856
2203.15932 | Self-Contrastive Learning based Semi-Supervised Radio Modulation Classification | This paper presents a novel semi-supervised learning framework designed for automatic modulation classification (AMC). By carefully utilizing unlabeled signal data with a self-supervised contrastive-learning pre-training step, our framework achieves higher performance given smaller amounts of labeled data, thereby largely reducing the labeling burden of deep learning. We evaluate the performance of our semi-supervised framework on a public dataset. The evaluation results demonstrate that our semi-supervised approach significantly outperforms supervised frameworks, thereby substantially enhancing our ability to train deep neural networks for automatic modulation classification in a manner that leverages unlabeled data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 288,585