| id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2010.13624 | Wind Power Transmission System Integration -- a Case Study of China Wind Power Base | Due to a series of supporting policies in recent years, China wind power has developed rapidly through a large-scale and centralized mode. This paper analyzes the two major concerns faced by wind power development in China: wind generation reliability and wind energy balancing. More specifically, wind farm tripping-off-grid incidents and wind power curtailment issues, which caused huge economic losses, are investigated in detail. Based on operation experience of large wind power bases, technical recommendations and economic incentives are proposed to improve wind power integration and power grid reliability. As a summary and outlook of wind power development in China, this paper provides a reference on future wind power development for other countries. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 203,195 |
2403.10438 | Data Ethics Emergency Drill: A Toolbox for Discussing Responsible AI for Industry Teams | Researchers urge technology practitioners such as data scientists to consider the impacts and ethical implications of algorithmic decisions. However, unlike programming, statistics, and data management, discussion of ethical implications is rarely included in standard data science training. To begin to address this gap, we designed and tested a toolbox called the data ethics emergency drill (DEED) to help data science teams discuss and reflect on the ethical implications of their work. The DEED is a roleplay of a fictional ethical emergency scenario that is contextually situated in the team's specific workplace and applications. This paper outlines the DEED toolbox and describes three studies carried out with two different data science teams that iteratively shaped its design. Our findings show that practitioners can apply lessons learnt from the roleplay to real-life situations, and how the DEED opened up conversations around ethics and values. | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 438,201 |
2009.13863 | Distributed ADMM with Synergetic Communication and Computation | In this paper, we propose a novel distributed alternating direction method of multipliers (ADMM) algorithm with synergetic communication and computation, called SCCD-ADMM, to reduce the total communication and computation cost of the system. Explicitly, in the proposed algorithm, each node interacts with only part of its neighboring nodes, the number of which is progressively determined according to a heuristic searching procedure, which takes into account both the predicted convergence rate and the communication and computation costs at each iteration, resulting in a trade-off between communication and computation. Then the node chooses its neighboring nodes according to an importance sampling distribution derived theoretically to minimize the variance with the latest information it locally stores. Finally, the node updates its local information with a new update rule which adapts to the number of communication nodes. We prove the convergence of the proposed algorithm and provide an upper bound on the convergence variance brought by randomness. Extensive simulations validate the excellent performance of the proposed algorithm in terms of convergence rate and variance, overall communication and computation cost, impact of network topology, and evaluation time, in comparison with traditional counterparts. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 197,872 |
2210.10947 | Does Learning from Decentralized Non-IID Unlabeled Data Benefit from Self Supervision? | Decentralized learning has been advocated and widely deployed to make efficient use of distributed datasets, with an extensive focus on supervised learning (SL) problems. Unfortunately, the majority of real-world data are unlabeled and can be highly heterogeneous across sources. In this work, we carefully study decentralized learning with unlabeled data through the lens of self-supervised learning (SSL), specifically contrastive visual representation learning. We study the effectiveness of a range of contrastive learning algorithms under decentralized learning settings, on relatively large-scale datasets including ImageNet-100, MS-COCO, and a new real-world robotic warehouse dataset. Our experiments show that the decentralized SSL (Dec-SSL) approach is robust to the heterogeneity of decentralized datasets, and learns useful representation for object classification, detection, and segmentation tasks. This robustness makes it possible to significantly reduce communication and reduce the participation ratio of data sources with only minimal drops in performance. Interestingly, using the same amount of data, the representation learned by Dec-SSL can not only perform on par with that learned by centralized SSL which requires communication and excessive data storage costs, but also sometimes outperform representations extracted from decentralized SL which requires extra knowledge about the data labels. Finally, we provide theoretical insights into understanding why data heterogeneity is less of a concern for Dec-SSL objectives, and introduce feature alignment and clustering techniques to develop a new Dec-SSL algorithm that further improves the performance, in the face of highly non-IID data. Our study presents positive evidence to embrace unlabeled data in decentralized learning, and we hope to provide new insights into whether and why decentralized SSL is effective. | false | false | false | false | true | false | true | true | false | false | false | true | false | false | false | false | false | false | 325,112 |
2001.05759 | Smart Data driven Decision Trees Ensemble Methodology for Imbalanced Big Data | Differences in data size per class, also known as imbalanced data distribution, have become a common problem affecting data quality. Big Data scenarios pose a new challenge to traditional imbalanced classification algorithms, since they are not prepared to work with such amounts of data. Split data strategies and lack of data in the minority class due to the use of the MapReduce paradigm have posed new challenges for tackling the imbalance between classes in Big Data scenarios. Ensembles have been shown to successfully address imbalanced data problems. Smart Data refers to data of enough quality to achieve high performance models. The combination of ensembles and Smart Data, achieved through Big Data preprocessing, should be a great synergy. In this paper, we propose a novel Smart Data driven Decision Trees Ensemble methodology for addressing the imbalanced classification problem in Big Data domains, namely the SD_DeTE methodology. This methodology is based on the learning of different decision trees using distributed quality data for the ensemble process. This quality data is achieved by fusing Random Discretization, Principal Components Analysis and clustering-based Random Oversampling for obtaining different Smart Data versions of the original data. Experiments carried out on 21 binary adapted datasets have shown that our methodology outperforms Random Forest. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 160,631 |
2312.04803 | SuperNormal: Neural Surface Reconstruction via Multi-View Normal Integration | We present SuperNormal, a fast, high-fidelity approach to multi-view 3D reconstruction using surface normal maps. Within a few minutes, SuperNormal produces detailed surfaces on par with 3D scanners. We harness volume rendering to optimize a neural signed distance function (SDF) powered by multi-resolution hash encoding. To accelerate training, we propose directional finite difference and patch-based ray marching to approximate the SDF gradients numerically. While not compromising reconstruction quality, this strategy is nearly twice as efficient as analytical gradients and about three times faster than axis-aligned finite difference. Experiments on the benchmark dataset demonstrate the superiority of SuperNormal in efficiency and accuracy compared to existing multi-view photometric stereo methods. On our captured objects, SuperNormal produces more fine-grained geometry than recent neural 3D reconstruction methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 413,831 |
2411.07940 | Automatic dataset shift identification to support root cause analysis of AI performance drift | Shifts in data distribution can substantially harm the performance of clinical AI models. Hence, various methods have been developed to detect the presence of such shifts at deployment time. However, root causes of dataset shifts are varied, and the choice of shift mitigation strategies is highly dependent on the precise type of shift encountered at test time. As such, detecting test-time dataset shift is not sufficient: precisely identifying which type of shift has occurred is critical. In this work, we propose the first unsupervised dataset shift identification framework, effectively distinguishing between prevalence shift (caused by a change in the label distribution), covariate shift (caused by a change in input characteristics) and mixed shifts (simultaneous prevalence and covariate shifts). We discuss the importance of self-supervised encoders for detecting subtle covariate shifts and propose a novel shift detector leveraging both self-supervised encoders and task model outputs for improved shift detection. We report promising results for the proposed shift identification framework across three different imaging modalities (chest radiography, digital mammography, and retinal fundus images) on five types of real-world dataset shifts, using four large publicly available datasets. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 507,726 |
2107.14425 | Enhancing Social Relation Inference with Concise Interaction Graph and Discriminative Scene Representation | There has been a recent surge of research interest in attacking the problem of social relation inference based on images. Existing works classify social relations mainly by creating complicated graphs of human interactions, or learning the foreground and/or background information of persons and objects, but ignore holistic scene context. The holistic scene refers to the functionality of a place in images, such as dining room, playground and office. In this paper, by mimicking human understanding on images, we propose an approach of \textbf{PR}actical \textbf{I}nference in \textbf{S}ocial r\textbf{E}lation (PRISE), which concisely learns interactive features of persons and discriminative features of holistic scenes. Technically, we develop a simple and fast relational graph convolutional network to capture interactive features of all persons in one image. To learn the holistic scene feature, we elaborately design a contrastive learning task based on image scene classification. To further boost the performance in social relation inference, we collect and distribute a new large-scale dataset, which consists of about 240 thousand unlabeled images. The extensive experimental results show that our novel learning framework significantly beats the state-of-the-art methods, e.g., PRISE achieves 6.8$\%$ improvement for domain classification on the PIPA dataset. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 248,465 |
2208.09736 | C$^{2}$IMUFS: Complementary and Consensus Learning-based Incomplete Multi-view Unsupervised Feature Selection | Multi-view unsupervised feature selection (MUFS) has been demonstrated as an effective technique to reduce the dimensionality of multi-view unlabeled data. Existing methods assume that all views are complete. However, multi-view data are usually incomplete, i.e., some instances are present in some views but not in all of them. Besides, learning the complete similarity graph, an important promising technique in existing MUFS methods, cannot be achieved due to the missing views. In this paper, we propose a complementary and consensus learning-based incomplete multi-view unsupervised feature selection method (C$^{2}$IMUFS) to address the aforementioned issues. Concretely, C$^{2}$IMUFS integrates feature selection into an extended weighted non-negative matrix factorization model equipped with adaptive learning of view-weights and a sparse $\ell_{2,p}$-norm, which can offer better adaptability and flexibility. By the sparse linear combinations of multiple similarity matrices derived from different views, a complementary learning-guided similarity matrix reconstruction model is presented to obtain the complete similarity graph in each view. Furthermore, C$^{2}$IMUFS learns a consensus clustering indicator matrix across different views and embeds it into a spectral graph term to preserve the local geometric structure. Comprehensive experimental results on real-world datasets demonstrate the effectiveness of C$^{2}$IMUFS compared with state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 313,817 |
2111.14193 | On data-driven control: informativity of noisy input-output data with cross-covariance bounds | In this paper we develop new data informativity based controller synthesis methods that extend existing frameworks in two relevant directions: a more general noise characterization in terms of cross-covariance bounds and informativity conditions for control based on input-output data. Previous works have derived necessary and sufficient informativity conditions for noisy input-state data with quadratic noise bounds via an S-procedure. Although these bounds do not capture cross-covariance bounds in general, we show that the S-procedure is still applicable for obtaining non-conservative conditions on the data. Informativity conditions for stability, $\mathcal{H}_\infty$ and $\mathcal{H}_2$ control are developed, which are sufficient for input-output data and also necessary for input-state data. Simulation experiments illustrate that cross-covariance bounds can be less conservative for informativity, compared to norm bounds typically employed in the literature. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 268,520 |
2108.10026 | Deep Relational Metric Learning | This paper presents a deep relational metric learning (DRML) framework for image clustering and retrieval. Most existing deep metric learning methods learn an embedding space with a general objective of increasing interclass distances and decreasing intraclass distances. However, the conventional losses of metric learning usually suppress intraclass variations which might be helpful to identify samples of unseen classes. To address this problem, we propose to adaptively learn an ensemble of features that characterizes an image from different aspects to model both interclass and intraclass distributions. We further employ a relational module to capture the correlations among each feature in the ensemble and construct a graph to represent an image. We then perform relational inference on the graph to integrate the ensemble and obtain a relation-aware embedding to measure the similarities. Extensive experiments on the widely-used CUB-200-2011, Cars196, and Stanford Online Products datasets demonstrate that our framework improves existing deep metric learning methods and achieves very competitive results. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 251,775 |
1910.06150 | A generalized intelligent quality-based approach for fusing multi-source information | In this paper, we propose a generalized intelligent quality-based approach for fusing multi-source information. The proposed approach aims to fuse the multi-complex-valued distribution information while maintaining a high quality of the fused result by considering the usage of credible information sources. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 149,273 |
2312.14473 | Coordinated Active-Reactive Power Management of ReP2H Systems with Multiple Electrolyzers | Utility-scale renewable power-to-hydrogen (ReP2H) production typically uses thyristor rectifiers (TRs) to supply power to multiple electrolyzers (ELZs). They exhibit a nonlinear and non-decouplable relation between active and reactive power. The on-off scheduling and load allocation of multiple ELZs simultaneously impact energy conversion efficiency and AC-side active and reactive power flow. Improper scheduling may result in excessive reactive power demand, causing voltage violations and increased network losses, compromising safety and economy. To address these challenges, this paper first explores trade-offs between the efficiency and the reactive load of the electrolyzers. Subsequently, we propose a coordinated approach for scheduling the active and reactive power in the ReP2H system. A mixed-integer second-order cone programming (MISOCP) is established to jointly optimize active and reactive power by coordinating the ELZs, renewable energy sources, energy storage (ES), and var compensations. Case studies demonstrate that the proposed method reduces losses by 3.06% in an off-grid ReP2H system while increasing hydrogen production by 5.27% on average. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 417,653 |
2311.07780 | Parrot-Trained Adversarial Examples: Pushing the Practicality of Black-Box Audio Attacks against Speaker Recognition Models | Audio adversarial examples (AEs) have posed significant security challenges to real-world speaker recognition systems. Most black-box attacks still require certain information from the speaker recognition model to be effective (e.g., keeping probing and requiring the knowledge of similarity scores). This work aims to push the practicality of the black-box attacks by minimizing the attacker's knowledge about a target speaker recognition model. Although it is not feasible for an attacker to succeed with completely zero knowledge, we assume that the attacker only knows a short (or a few seconds) speech sample of a target speaker. Without any probing to gain further knowledge about the target model, we propose a new mechanism, called parrot training, to generate AEs against the target model. Motivated by recent advancements in voice conversion (VC), we propose to use the one short sentence knowledge to generate more synthetic speech samples that sound like the target speaker, called parrot speech. Then, we use these parrot speech samples to train a parrot-trained (PT) surrogate model for the attacker. Under a joint transferability and perception framework, we investigate different ways to generate AEs on the PT model (called PT-AEs) to ensure the PT-AEs can be generated with high transferability to a black-box target model with good human perceptual quality. Real-world experiments show that the resultant PT-AEs achieve the attack success rates of 45.8% - 80.8% against the open-source models in the digital-line scenario and 47.9% - 58.3% against smart devices, including Apple HomePod (Siri), Amazon Echo, and Google Home, in the over-the-air scenario. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 407,464 |
1704.08030 | Airway segmentation from 3D chest CT volumes based on volume of interest using gradient vector flow | Some lung diseases are related to bronchial airway structures and morphology. Although airway segmentation from chest CT volumes is an important task in computer-aided diagnosis and surgery assistance systems for the chest, complete 3-D airway structure segmentation is a quite challenging task due to its complex tree-like structure. In this paper, we propose a new airway segmentation method from 3D chest CT volumes based on volumes of interest (VOI) using gradient vector flow (GVF). This method segments the bronchial regions by applying the cavity enhancement filter (CEF) to trace the bronchial tree structure from the trachea. It uses the CEF in the VOI to segment each branch. A tube-likeness function based on GVF and the GVF magnitude map in each VOI are utilized to assist in predicting the positions and directions of child branches. By calculating the tube-likeness function based on GVF and the GVF magnitude map, the airway-like candidate structures are identified and their centrelines are extracted. Based on the extracted centrelines, we can detect the branch points of the bifurcations and directions of the airway branches in the next level. At the same time, a leakage detection is performed to avoid leakage by analysing the pixel information and the shape information of airway candidate regions extracted in the VOI. Finally, we unify all of the extracted bronchial regions to form an integrated airway tree. Preliminary experiments using four cases of chest CT volumes demonstrated that the proposed method can extract more bronchial branches in comparison with other methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 72,468 |
2103.10803 | Bhattacharyya parameter of monomials codes for the Binary Erasure Channel: from pointwise to average reliability | Monomial codes were recently equipped with partial order relations, which allowed researchers to discover structural properties and efficient algorithms for constructing polar codes. Here, we refine the existing order relations in the particular case of the Binary Erasure Channel. The new order relation takes us closer to the ultimate order relation induced by the pointwise evaluation of the Bhattacharyya parameter of the synthetic channels. The best we can hope for is still a partial order relation. To overcome this issue we appeal to a related technique from network theory. Reliability network theory was recently used in the context of polar coding and more generally in connection with decreasing monomial codes. In this article, we investigate how the concept of average reliability is applied for polar codes designed for the binary erasure channel. Instead of minimizing the error probability of the synthetic channels for a particular value of the erasure parameter p, our codes minimize the average error probability of the synthetic channels. By means of basic network theory results we determine a closed formula for the average reliability of a particular synthetic channel that has recently gained the attention of researchers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 225,571 |
0804.4466 | Free Distance Bounds for Protograph-Based Regular LDPC Convolutional Codes | In this paper asymptotic methods are used to form lower bounds on the free distance to constraint length ratio of several ensembles of regular, asymptotically good, protograph-based LDPC convolutional codes. In particular, we show that the free distance to constraint length ratio of the regular LDPC convolutional codes exceeds that of the minimum distance to block length ratio of the corresponding LDPC block codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 1,658 |
1507.08847 | A novel multivariate performance optimization method based on sparse coding and hyper-predictor learning | In this paper, we investigate the problem of optimizing multivariate performance measures, and propose a novel algorithm for it. Different from traditional machine learning methods which optimize simple loss functions to learn a prediction function, the problem studied in this paper is how to learn an effective hyper-predictor for a tuple of data points, so that a complex loss function corresponding to a multivariate performance measure can be minimized. We propose to represent the tuple of data points as a tuple of sparse codes via a dictionary, and then apply a linear function to compare a sparse code against a given candidate class label. To learn the dictionary, sparse codes, and parameter of the linear function, we propose a joint optimization problem. In this problem, both the reconstruction error and sparsity of the sparse codes, and the upper bound of the complex loss function are minimized. Moreover, the upper bound of the loss function is approximated by the sparse codes and the linear function parameter. To optimize this problem, we develop an iterative algorithm based on gradient descent methods to learn the sparse codes and hyper-predictor parameter alternately. Experiment results on some benchmark data sets show the advantage of the proposed methods over other state-of-the-art algorithms. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 45,606 |
2305.09122 | Power Grid Transient Analysis via Open-Source Circuit Simulator: A Case Study of HVDC | This paper proposes an electronic circuit simulator-based method to accelerate power system transient simulation, focusing on the modeling of a generic HVDC (High Voltage Direct Current) system. The electronic circuit simulation equations and the backward differentiation formula for numerical solving are described. Then, the circuit modeling process for power system components such as slack bus, constant power load, and HVDC are respectively illustrated. Finally, a case study is conducted on a four-bus power system to demonstrate the effectiveness of the proposed modeling and simulation method. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 364,523 |
2401.11420 | Embedded Hyperspectral Band Selection with Adaptive Optimization for Image Semantic Segmentation | The selection of hyperspectral bands plays a pivotal role in remote sensing and image analysis, with the aim of identifying the most informative spectral bands while minimizing computational overhead. This paper introduces a pioneering approach for hyperspectral band selection that offers an embedded solution, making it well-suited for resource-constrained or real-time applications. Our proposed method, embedded hyperspectral band selection (EHBS), excels in selecting the best bands without needing prior processing, seamlessly integrating with the downstream task model. This is achieved through stochastic band gates along with an approximation of the $l0$ norm on the number of selected bands as the regularization term and the integration of a dynamic optimizer, DoG, which removes the need to tune the learning rate. We conduct experiments on two distinct semantic-segmentation hyperspectral benchmark datasets, demonstrating its superiority in terms of accuracy and ease of use compared to many common and state-of-the-art methods. Furthermore, our contributions extend beyond hyperspectral band selection. Our approach's adaptability to other tasks, especially those involving grouped features, opens promising avenues for broader applications within the realm of deep learning, such as feature selection for feature groups. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 422,994 |
1907.13216 | Deep Learning Training on the Edge with Low-Precision Posits | Recently, the posit numerical format has shown promise for DNN data representation and compute with ultra-low precision ([5..8]-bit). However, the majority of studies focus only on DNN inference. In this work, we propose DNN training using posits and compare it with floating-point training. We evaluate on both the MNIST and Fashion MNIST datasets, where 16-bit posits outperform 16-bit floating point for end-to-end DNN training. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 140,313 |
2411.10929 | Wildfire Risk Metric Impact on Public Safety Power Shut-off Cost Savings | Public Safety Power Shutoffs (PSPS) are a proactive strategy to mitigate fire hazards from power system infrastructure failures. System operators employ PSPS to deactivate portions of the electric grid with heightened wildfire risks to prevent wildfire ignition and redispatch generators to minimize load shedding. A measure of vegetation flammability, called the Wildland Fire Potential Index (WFPI), has been widely used to evaluate the risk of nearby wildfires to power system operation. However, the WFPI does not correlate as strongly to historically observed wildfire ignition probabilities (OWIP) as the WFPI-based Large Fire Probability (WLFP). Prior work chose not to incorporate wildfire-driven failure probabilities, such as the WLFP, because constraints with Bernoulli random variables to represent wildfire ignitions could require non-linear or non-convex constraints. This paper uses a deterministic equivalent of an otherwise complicating line de-energization constraint by quantifying the wildfire risk of operating transmission lines as a sum of each energized line's wildfire ignition log probability (log(WIP)) rather than as a sum of each energized line's WFPI. A day-ahead unit commitment and line de-energization PSPS framework is used to assess the cost differences driven by the choice between the WFPI and WLFP risk metrics. Training the optimization on scenarios developed by mapping WLFP to log(WIP) rather than mapping the WFPI to log(WIP) leads to reductions in the total real-time costs. For the IEEE RTS 24-bus test system, mapping transmission line WLFP values to log(WIP) resulted in a 14.8% (on average) decrease in expected real-time costs. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 508,853 |
2202.10236 | Edge Data Based Trailer Inception Probabilistic Matrix Factorization for Context-Aware Movie Recommendation | The rapid growth of edge data generated by mobile devices and applications deployed at the edge of the network has exacerbated the problem of information overload. As an effective way to alleviate information overload, recommender system can improve the quality of various services by adding application data generated by users on edge devices, such as visual and textual information, on the basis of sparse rating data. The visual information in the movie trailer is a significant part of the movie recommender system. However, due to the complexity of visual information extraction, data sparsity cannot be remarkably alleviated by merely using the rough visual features to improve the rating prediction accuracy. Fortunately, the convolutional neural network can be used to extract the visual features precisely. Therefore, the end-to-end neural image caption (NIC) model can be utilized to obtain the textual information describing the visual features of movie trailers. This paper proposes a trailer inception probabilistic matrix factorization model called Ti-PMF, which combines NIC, recurrent convolutional neural network, and probabilistic matrix factorization models as the rating prediction model. We implement the proposed Ti-PMF model with extensive experiments on three real-world datasets to validate its effectiveness. The experimental results illustrate that the proposed Ti-PMF outperforms the existing ones. | false | false | false | false | false | true | true | false | false | false | false | true | false | false | false | false | false | false | 281,466 |
2203.12274 | Pre-training to Match for Unified Low-shot Relation Extraction | Low-shot relation extraction~(RE) aims to recognize novel relations with very few or even no samples, which is critical in real-world applications. Few-shot and zero-shot RE are two representative low-shot RE tasks, which seem to share a similar target but require totally different underlying abilities. In this paper, we propose Multi-Choice Matching Networks to unify low-shot relation extraction. To fill in the gap between zero-shot and few-shot RE, we propose triplet-paraphrase meta-training, which leverages triplet paraphrase to pre-train zero-shot label matching ability and uses a meta-learning paradigm to learn few-shot instance summarizing ability. Experimental results on three different low-shot RE tasks show that the proposed method outperforms strong baselines by a large margin and achieves the best performance on the few-shot RE leaderboard. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 287,207 |
2406.12433 | LLM4Rerank: LLM-based Auto-Reranking Framework for Recommendations | Reranking is a critical component in recommender systems, playing an essential role in refining the output of recommendation algorithms. Traditional reranking models have focused predominantly on accuracy, but modern applications demand consideration of additional criteria such as diversity and fairness. Existing reranking approaches often fail to harmonize these diverse criteria effectively at the model level. Moreover, these models frequently encounter challenges with scalability and personalization due to their complexity and the varying significance of different reranking criteria in diverse scenarios. In response, we introduce a comprehensive reranking framework enhanced by LLM, designed to seamlessly integrate various reranking criteria while maintaining scalability and facilitating personalized recommendations. This framework employs a fully connected graph structure, allowing the LLM to simultaneously consider multiple aspects such as accuracy, diversity, and fairness through a coherent Chain-of-Thought (CoT) process. A customizable input mechanism is also integrated, enabling the tuning of the language model's focus to meet specific reranking needs. We validate our approach using three popular public datasets, where our framework demonstrates superior performance over existing state-of-the-art reranking models in balancing multiple criteria. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 465,406 |
1211.1788 | An Adaptive parameter free data mining approach for healthcare
application | In today's world, healthcare is the most important factor affecting human life. Due to heavy workloads, it is often not possible for people to manage their personal healthcare. The proposed system acts as a preventive measure for determining whether a person is fit or unfit based on the person's historical and real-time data by applying clustering algorithms like K-means and D-stream. The density-based clustering algorithm, i.e., the D-stream algorithm, overcomes the drawbacks of the K-means algorithm. By calculating their performance measures, we determine the effectiveness and efficiency of both algorithms. Both clustering algorithms are applied to the patient's historical bio-medical database. To check the correctness of both algorithms, we apply them to the patient's current bio-medical data. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 19,629 |
1908.04297 | Super-resolution of Omnidirectional Images Using Adversarial Learning | An omnidirectional image (ODI) enables viewers to look in every direction from a fixed point through a head-mounted display providing an immersive experience compared to that of a standard image. Designing immersive virtual reality systems with ODIs is challenging as they require high resolution content. In this paper, we study super-resolution for ODIs and propose an improved generative adversarial network based model which is optimized to handle the artifacts obtained in the spherical observational space. Specifically, we propose to use a fast PatchGAN discriminator, as it needs fewer parameters and improves the super-resolution at a fine scale. We also explore the generative models with adversarial learning by introducing a spherical-content specific loss function, called 360-SS. To train and test the performance of our proposed model we prepare a dataset of 4500 ODIs. Our results demonstrate the efficacy of the proposed method and identify new challenges in ODI super-resolution for future investigations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 141,439 |
2204.13372 | Phase Shift Design in RIS Empowered Wireless Networks: From Optimization
to AI-Based Methods | Reconfigurable intelligent surfaces (RISs) have a revolutionary capability to customize the radio propagation environment for wireless networks. To fully exploit the advantages of RISs in wireless systems, the phases of the reflecting elements must be jointly designed with conventional communication resources, such as beamformers, transmit power, and computation time. However, due to the unique constraints on the phase shift, and massive numbers of reflecting units and users in large-scale networks, the resulting optimization problems are challenging to solve. This paper provides a review of current optimization methods and artificial intelligence-based methods for handling the constraints imposed by RIS and compares them in terms of solution quality and computational complexity. Future challenges in phase shift optimization involving RISs are also described and potential solutions are discussed. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 293,797 |
2205.06331 | Collaborative Multi-agent Stochastic Linear Bandits | We study a collaborative multi-agent stochastic linear bandit setting, where $N$ agents that form a network communicate locally to minimize their overall regret. In this setting, each agent has its own linear bandit problem (its own reward parameter) and the goal is to select the best global action w.r.t. the average of their reward parameters. At each round, each agent proposes an action, and one action is randomly selected and played as the network action. All the agents observe the corresponding rewards of the played actions and use an accelerated consensus procedure to compute an estimate of the average of the rewards obtained by all the agents. We propose a distributed upper confidence bound (UCB) algorithm and prove a high probability bound on its $T$-round regret in which we include a linear growth of regret associated with each communication round. Our regret bound is of order $\mathcal{O}\Big(\sqrt{\frac{T}{N \log(1/|\lambda_2|)}}\cdot (\log T)^2\Big)$, where $\lambda_2$ is the second largest (in absolute value) eigenvalue of the communication matrix. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 296,206 |
1910.03854 | Multimodal representation models for prediction and control from partial
information | Similar to humans, robots benefit from interacting with their environment through a number of different sensor modalities, such as vision, touch, and sound. However, learning from different sensor modalities is difficult, because the learning model must be able to handle diverse types of signals, and learn a coherent representation even when parts of the sensor inputs are missing. In this paper, a multimodal variational autoencoder is proposed to enable an iCub humanoid robot to learn representations of its sensorimotor capabilities from different sensor modalities. The proposed model is able to (1) reconstruct missing sensory modalities, (2) predict the sensorimotor state of self and the visual trajectories of other agents' actions, and (3) control the agent to imitate an observed visual trajectory. Also, the proposed multimodal variational autoencoder can capture the kinematic redundancy of the robot motion through the learned probability distribution. Training multimodal models is not trivial due to the combinatorial complexity given by the possibility of missing modalities. We propose a strategy to train multimodal models, which successfully achieves improved performance of different reconstruction models. Finally, extensive experiments have been carried out using an iCub humanoid robot, showing high performance in multiple reconstruction, prediction and imitation tasks. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 148,608 |
1809.01772 | Multi-view Factorization AutoEncoder with Network Constraints for
Multi-omic Integrative Analysis | Multi-omic data provides multiple views of the same patients. Integrative analysis of multi-omic data is crucial to elucidate the molecular underpinning of disease etiology. However, because multi-omic data has the "big p, small N" problem (the number of features is large, but the number of samples is small), it is challenging to train a complicated machine learning model from the multi-omic data alone and make it generalize well. Here we propose a framework termed Multi-view Factorization AutoEncoder with network constraints to integrate multi-omic data with domain knowledge (biological interaction networks). Our framework employs deep representation learning to learn feature embeddings and patient embeddings simultaneously, enabling us to integrate feature interaction network and patient view similarity network constraints into the training objective. The whole framework is end-to-end differentiable. We applied our approach to the TCGA Pan-cancer dataset and achieved satisfactory results to predict disease progression-free interval (PFI) and patient overall survival (OS) events. Code will be made publicly available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 106,887 |
2411.17608 | Mixed-State Quantum Denoising Diffusion Probabilistic Model | Generative quantum machine learning has gained significant attention for its ability to produce quantum states with desired distributions. Among various quantum generative models, quantum denoising diffusion probabilistic models (QuDDPMs) [Phys. Rev. Lett. 132, 100602 (2024)] provide a promising approach with stepwise learning that resolves the training issues. However, the requirement of high-fidelity scrambling unitaries in QuDDPM poses a challenge in near-term implementation. We propose the \textit{mixed-state quantum denoising diffusion probabilistic model} (MSQuDDPM) to eliminate the need for scrambling unitaries. Our approach focuses on adapting the quantum noise channels to the model architecture, which integrates depolarizing noise channels in the forward diffusion process and parameterized quantum circuits with projective measurements in the backward denoising steps. We also introduce several techniques to improve MSQuDDPM, including a cosine-exponent schedule of noise interpolation, the use of single-qubit random ancilla, and superfidelity-based cost functions to enhance the convergence. We evaluate MSQuDDPM on quantum ensemble generation tasks, demonstrating its successful performance. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 511,517 |
1804.02722 | Lazy Abstraction-Based Controller Synthesis | We present lazy abstraction-based controller synthesis (ABCS) for continuous-time nonlinear dynamical systems against reach-avoid and safety specifications. State-of-the-art multi-layered ABCS pre-computes multiple finite-state abstractions of varying granularity and applies reactive synthesis to the coarsest abstraction whenever feasible, but adaptively considers finer abstractions when necessary. Lazy ABCS improves this technique by constructing abstractions on demand. Our insight is that the abstract transition relation only needs to be locally computed for a small set of frontier states at the precision currently required by the synthesis algorithm. We show that lazy ABCS can significantly outperform previous multi-layered ABCS algorithms: on standard benchmarks, lazy ABCS is more than 4 times faster. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 94,472 |
1308.4506 | A study of retrieval algorithms of sparse messages in networks of neural
cliques | Associative memories are data structures addressed using part of the content rather than an index. They offer good fault reliability and biological plausibility. Among different families of associative memories, sparse ones are known to offer the best efficiency (ratio of the amount of bits stored to that of bits used by the network itself). Their retrieval process performance has been shown to benefit from the use of iterations. However, classical algorithms require prior knowledge about the data to retrieve, such as the number of nonzero symbols. We introduce several families of algorithms to enhance the retrieval process performance in recently proposed sparse associative memories based on binary neural networks. We show that these algorithms provide better performance, along with better plausibility, than existing techniques. We also analyze the required number of iterations and derive corresponding curves. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 26,547 |
2209.14708 | TruEyes: Utilizing Microtasks in Mobile Apps for Crowdsourced Labeling
of Machine Learning Datasets | The growing use of supervised machine learning in research and industry has increased the need for labeled datasets. Crowdsourcing has emerged as a popular method to create data labels. However, working on large batches of tasks leads to worker fatigue, negatively impacting labeling quality. To address this, we present TruEyes, a collaborative crowdsourcing system, enabling the distribution of micro-tasks to mobile app users. TruEyes allows machine learning practitioners to publish labeling tasks, mobile app developers to integrate task ads for monetization, and users to label data instead of watching advertisements. To evaluate the system, we conducted an experiment with N=296 participants. Our results show that the quality of the labeled data is comparable to traditional crowdsourcing approaches and most users prefer task ads over traditional ads. We discuss extensions to the system and address how mobile advertisement space can be used as a productive resource in the future. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 320,330 |
2006.11674 | Langevin Dynamics for Adaptive Inverse Reinforcement Learning of
Stochastic Gradient Algorithms | Inverse reinforcement learning (IRL) aims to estimate the reward function of optimizing agents by observing their response (estimates or actions). This paper considers IRL when noisy estimates of the gradient of a reward function generated by multiple stochastic gradient agents are observed. We present a generalized Langevin dynamics algorithm to estimate the reward function $R(\theta)$; specifically, the resulting Langevin algorithm asymptotically generates samples from the distribution proportional to $\exp(R(\theta))$. The proposed IRL algorithms use kernel-based passive learning schemes. We also construct multi-kernel passive Langevin algorithms for IRL which are suitable for high dimensional data. The performance of the proposed IRL algorithms are illustrated on examples in adaptive Bayesian learning, logistic regression (high dimensional problem) and constrained Markov decision processes. We prove weak convergence of the proposed IRL algorithms using martingale averaging methods. We also analyze the tracking performance of the IRL algorithms in non-stationary environments where the utility function $R(\theta)$ jump changes over time as a slow Markov chain. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 183,328 |
1612.04039 | Construction of Full-Diversity LDPC Lattices for Block-Fading Channels | LDPC lattices were the first family of lattices which have an efficient decoding algorithm in high dimensions over an AWGN channel. Considering Construction D' of lattices with one binary LDPC code as underlying code gives the well known Construction A LDPC lattices or 1-level LDPC lattices. Block-fading channel (BF) is a useful model for various wireless communication channels in both indoor and outdoor environments. Frequency-hopping schemes and orthogonal frequency division multiplexing (OFDM) can conveniently be modelled as block-fading channels. Applying lattices in this type of channel entails dividing a lattice point into multiple blocks such that fading is constant within a block but changes, independently, across blocks. The design of lattices for BF channels offers a challenging problem, which differs greatly from its counterparts like AWGN channels. Recently, the original binary Construction A for lattices, due to Forney, has been generalized to a lattice construction from totally real and complex multiplication fields. This generalized Construction A of lattices provides signal space diversity intrinsically, which is the main requirement for the signal sets designed for fading channels. In this paper we construct full diversity LDPC lattices for block-fading channels using Construction A over totally real number fields. We propose a new iterative decoding method for this family of lattices which has complexity that grows linearly in the dimension of the lattice. In order to implement our decoding algorithm, we propose the definition of a parity check matrix and Tanner graph for full diversity Construction A lattices. We also prove that the constructed LDPC lattices together with the proposed decoding method admit diversity order n-1 over an n-block-fading channel. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 65,470 |
2002.12455 | Is the Meta-Learning Idea Able to Improve the Generalization of Deep
Neural Networks on the Standard Supervised Learning? | Substantial efforts have been made on improving the generalization abilities of deep neural networks (DNNs) in order to obtain better performances without introducing more parameters. On the other hand, meta-learning approaches exhibit powerful generalization on new tasks in few-shot learning. Intuitively, few-shot learning is more challenging than the standard supervised learning as each target class only has a very few or no training samples. The natural question that arises is whether the meta-learning idea can be used for improving the generalization of DNNs on the standard supervised learning. In this paper, we propose a novel meta-learning based training procedure (MLTP) for DNNs and demonstrate that the meta-learning idea can indeed improve the generalization abilities of DNNs. MLTP simulates the meta-training process by considering a batch of training samples as a task. The key idea is that the gradient descent step for improving the current task performance should also improve a new task performance, which is ignored by the current standard procedure for training neural networks. MLTP also benefits from all the existing training techniques such as dropout, weight decay, and batch normalization. We evaluate MLTP by training a variety of small and large neural networks on three benchmark datasets, i.e., CIFAR-10, CIFAR-100, and Tiny ImageNet. The experimental results show a consistently improved generalization performance on all the DNNs with different sizes, which verifies the promise of MLTP and demonstrates that the meta-learning idea is indeed able to improve the generalization of DNNs on the standard supervised learning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 166,033 |
2502.06787 | Visual Agentic AI for Spatial Reasoning with a Dynamic API | Visual reasoning -- the ability to interpret the visual world -- is crucial for embodied agents that operate within three-dimensional scenes. Progress in AI has led to vision and language models capable of answering questions from images. However, their performance declines when tasked with 3D spatial reasoning. To tackle the complexity of such reasoning problems, we introduce an agentic program synthesis approach where LLM agents collaboratively generate a Pythonic API with new functions to solve common subproblems. Our method overcomes limitations of prior approaches that rely on a static, human-defined API, allowing it to handle a wider range of queries. To assess AI capabilities for 3D understanding, we introduce a new benchmark of queries involving multiple steps of grounding and inference. We show that our method outperforms prior zero-shot models for visual reasoning in 3D and empirically validate the effectiveness of our agentic framework for 3D spatial reasoning tasks. Project website: https://glab-caltech.github.io/vadar/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 532,244 |
2406.03233 | Generative Diffusion Models for Fast Simulations of Particle Collisions
at CERN | In High Energy Physics, simulations play a crucial role in unraveling the complexities of particle collision experiments within CERN's Large Hadron Collider. Machine learning simulation methods have garnered attention as promising alternatives to traditional approaches. While existing methods mainly employ Variational Autoencoders (VAEs) or Generative Adversarial Networks (GANs), recent advancements highlight the efficacy of diffusion models as state-of-the-art generative machine learning methods. We present the first simulation for the Zero Degree Calorimeter (ZDC) at the ALICE experiment based on diffusion models, achieving the highest fidelity compared to existing baselines. We perform an analysis of the trade-offs between generation time and simulation quality. The results indicate significant potential of the latent diffusion model due to its rapid generation time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 461,154 |
2410.13597 | Text-Guided Multi-Property Molecular Optimization with a Diffusion
Language Model | Molecular optimization (MO) is a crucial stage in drug discovery in which task-oriented generated molecules are optimized to meet practical industrial requirements. Existing mainstream MO approaches primarily utilize external property predictors to guide iterative property optimization. However, learning all molecular samples in the vast chemical space is unrealistic for predictors. As a result, errors and noise are inevitably introduced during property prediction due to the nature of approximation. This leads to discrepancy accumulation, generalization reduction and suboptimal molecular candidates. In this paper, we propose a text-guided multi-property molecular optimization method utilizing transformer-based diffusion language model (TransDLM). TransDLM leverages standardized chemical nomenclature as semantic representations of molecules and implicitly embeds property requirements into textual descriptions, thereby preventing error propagation during diffusion process. Guided by physically and chemically detailed textual descriptions, TransDLM samples and optimizes encoded source molecules, retaining core scaffolds of source molecules and ensuring structural similarities. Moreover, TransDLM enables simultaneous sampling of multiple molecules, making it ideal for scalable, efficient large-scale optimization through distributed computation on web platforms. Furthermore, our approach surpasses state-of-the-art methods in optimizing molecular structural similarity and enhancing chemical properties on the benchmark dataset. The code is available at: https://anonymous.4open.science/r/TransDLM-A901. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 499,600 |
1112.2816 | Phase transition to two-peaks phase in an information cascade voting
experiment | Observational learning is an important information aggregation mechanism. However, it occasionally leads to a state in which an entire population chooses a sub-optimal option. When it occurs and whether it is a phase transition remain unanswered. To address these questions, we performed a voting experiment in which subjects answered a two-choice quiz sequentially with and without information about the prior subjects' choices. The subjects who could copy others are called herders. We obtained a microscopic rule regarding how herders copy others. Varying the ratio of herders led to qualitative changes in the macroscopic behavior in the experiment of about 50 subjects. If the ratio is small, the sequence of choices rapidly converges to the true one. As the ratio approaches 100%, convergence becomes extremely slow and information aggregation almost terminates. A simulation study of a stochastic model for 10^{6} subjects based on the herder's microscopic rule showed a phase transition to the two-peaks phase, where the convergence completely terminates, as the ratio exceeds some critical value. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,448 |
2312.05028 | Cluster images with AntClust: a clustering algorithm based on the
chemical recognition system of ants | We implement AntClust, a clustering algorithm based on the chemical recognition system of ants, and use it to cluster images of cars. We give a short summary of the main working principles of the algorithm as devised in the original paper [1]. Further, we describe how to define a similarity function for images and how the implementation is used to cluster images of cars from the vehicle re-identification data set. We then test the clustering performance of AntClust against DBSCAN, HDBSCAN and OPTICS. Finally, one of the core parts of AntClust, the rule set, can be easily redefined with our implementation, enabling a way for other bio-inspired algorithms to find rules in an automated process. The implementation can be found on GitLab [9]. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 413,926 |
1904.00172 | EE-AE: An Exclusivity Enhanced Unsupervised Feature Learning Approach | Unsupervised learning has recently become increasingly important. As one of its key components, the autoencoder (AE) aims to learn a latent feature representation of data which is more robust and discriminative. However, most AE-based methods only focus on the reconstruction within the encoder-decoder phase, which ignores the inherent relation of data, i.e., statistical and geometrical dependence, and easily causes overfitting. In order to deal with this issue, we propose an Exclusivity Enhanced (EE) unsupervised feature learning approach to improve the conventional AE. To the best of our knowledge, our research is the first to utilize such an exclusivity concept to cooperate with feature extraction within AE. Moreover, in this paper we also make some improvements to the stacked AE structure, especially for the connection of different layers from decoders; this could be regarded as a weight initialization trial. The experimental results show that our proposed approach can achieve remarkable performance compared with other related methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 125,812 |
1111.0379 | Fast reconstruction of phylogenetic trees using locality-sensitive
hashing | We present the first sub-quadratic time algorithm that with high probability correctly reconstructs phylogenetic trees for short sequences generated by a Markov model of evolution. Due to rapid expansion in sequence databases, such very fast algorithms are becoming necessary. Other fast heuristics have been developed for building trees from very large alignments (Price et al. and Brown et al.), but they lack theoretical performance guarantees. Our new algorithm runs in $O(n^{1+\gamma(g)}\log^2n)$ time, where $\gamma$ is an increasing function of an upper bound on the branch lengths in the phylogeny, the upper bound $g$ must be below $1/2-\sqrt{1/8} \approx 0.15$, and $\gamma(g)<1$ for all $g$. For phylogenies with very short branches, the running time of our algorithm is close to linear. For example, if all branch lengths correspond to a mutation probability of less than 0.02, the running time of our algorithm is roughly $O(n^{1.2}\log^2n)$. Via a prototype and a sequence of large-scale experiments, we show that many large phylogenies can be reconstructed fast, without compromising reconstruction accuracy. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 12,873 |
2104.00163 | DEALIO: Data-Efficient Adversarial Learning for Imitation from
Observation | In imitation learning from observation (IfO), a learning agent seeks to imitate a demonstrating agent using only observations of the demonstrated behavior without access to the control signals generated by the demonstrator. Recent methods based on adversarial imitation learning have led to state-of-the-art performance on IfO problems, but they typically suffer from high sample complexity due to a reliance on data-inefficient, model-free reinforcement learning algorithms. This issue makes them impractical to deploy in real-world settings, where gathering samples can incur high costs in terms of time, energy, and risk. In this work, we hypothesize that we can incorporate ideas from model-based reinforcement learning with adversarial methods for IfO in order to increase the data efficiency of these methods without sacrificing performance. Specifically, we consider time-varying linear Gaussian policies, and propose a method that integrates the linear-quadratic regulator with path integral policy improvement into an existing adversarial IfO framework. The result is a more data-efficient IfO algorithm with better performance, which we show empirically in four simulation domains: using far fewer interactions with the environment, the proposed method exhibits similar or better performance than the existing technique. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 227,894 |
2412.08021 | Can a MISL Fly? Analysis and Ingredients for Mutual Information Skill
Learning | Self-supervised learning has the potential of lifting several of the key challenges in reinforcement learning today, such as exploration, representation learning, and reward design. Recent work (METRA) has effectively argued that moving away from mutual information and instead optimizing a certain Wasserstein distance is important for good performance. In this paper, we argue that the benefits seen in that paper can largely be explained within the existing framework of mutual information skill learning (MISL). Our analysis suggests a new MISL method (contrastive successor features) that retains the excellent performance of METRA with fewer moving parts, and highlights connections between skill learning, contrastive representation learning, and successor features. Finally, through careful ablation studies, we provide further insight into some of the key ingredients for both our method and METRA. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 515,904 |
1305.1112 | json2run: a tool for experiment design & analysis | json2run is a tool to automate the running, storage and analysis of experiments. The main advantage of json2run is that it allows one to describe a set of experiments concisely as a JSON-formatted parameter tree. It also supports parallel execution of experiments, automatic parameter tuning through the F-Race framework, and storage and analysis of experiments with MongoDB and R. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 24,406 |
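The parameter-tree idea above can be sketched in a few lines. The node types (`and` for Cartesian combination, `discrete` for a value list) and the field names below are illustrative assumptions, not json2run's actual schema:

```python
import itertools
import json

# Hypothetical parameter tree: an "and" node combines its descendants by
# Cartesian product; a "discrete" leaf enumerates values for one parameter.
tree = json.loads("""
{
  "type": "and",
  "descendants": [
    {"type": "discrete", "name": "algorithm", "values": ["sa", "ts"]},
    {"type": "discrete", "name": "seed", "values": [1, 2, 3]}
  ]
}
""")

def expand(node):
    """Expand a parameter tree into a list of {name: value} experiment configs."""
    if node["type"] == "discrete":
        return [{node["name"]: v} for v in node["values"]]
    if node["type"] == "and":
        combos = itertools.product(*(expand(d) for d in node["descendants"]))
        return [{k: v for d in combo for k, v in d.items()} for combo in combos]
    raise ValueError(f"unknown node type: {node['type']}")

configs = expand(tree)
print(len(configs))  # 2 algorithms x 3 seeds = 6 experiment configurations
```

Each resulting dictionary is one experiment invocation; a runner would then launch these in parallel and record the results.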
2310.07338 | From Supervised to Generative: A Novel Paradigm for Tabular Deep
Learning with Large Language Models | Tabular data is foundational to predictive modeling in various crucial industries, including healthcare, finance, retail, sustainability, etc. Despite the progress made in specialized models, there is an increasing demand for universal models that can transfer knowledge, generalize from limited data, and follow human instructions. These are challenges that current tabular deep learning approaches have not fully tackled. Here we introduce Generative Tabular Learning (GTL), a novel framework that integrates the advanced functionalities of large language models (LLMs)-such as prompt-based zero-shot generalization and in-context learning-into tabular deep learning. GTL capitalizes on the pre-training of LLMs on diverse tabular data, enhancing their understanding of domain-specific knowledge, numerical sequences, and statistical dependencies critical for accurate predictions. Our empirical study spans 384 public datasets, rigorously analyzing GTL's convergence and scaling behaviors and assessing the impact of varied data templates. The GTL-enhanced LLaMA-2 model demonstrates superior zero-shot and in-context learning capabilities across numerous classification and regression tasks. Notably, it achieves this without fine-tuning, outperforming traditional methods and rivaling state-of-the-art models like GPT-4 in certain cases. Through GTL, we not only foster a deeper integration of LLMs' sophisticated abilities into tabular data comprehension and application but also offer a new training resource and a test bed for LLMs to enhance their ability to comprehend tabular data. To facilitate reproducible research, we release our code, data, and model checkpoints at https://github.com/microsoft/Industrial-Foundation-Models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 398,933 |
2407.14570 | Are handcrafted filters helpful for attributing AI-generated images? | Recently, a vast number of image generation models have been proposed, which raises concerns regarding the misuse of these artificial intelligence (AI) techniques for generating fake images. To attribute the AI-generated images, existing schemes usually design and train deep neural networks (DNNs) to learn the model fingerprints, which usually requires a large amount of data for effective learning. In this paper, we aim to answer the following two questions for AI-generated image attribution: 1) is it possible to design useful handcrafted filters to facilitate fingerprint learning? and 2) how can we reduce the amount of training data once the handcrafted filters are incorporated? We first propose a set of Multi-Directional High-Pass Filters (MHFs), which are capable of extracting subtle fingerprints from various directions. Then, we propose a Directional Enhanced Feature Learning network (DEFL) to take both the MHFs and randomly-initialized filters into consideration. The output of the DEFL is fused with the semantic features to produce a compact fingerprint. To make the compact fingerprint discriminative among different models, we propose a Dual-Margin Contrastive (DMC) loss to tune our DEFL. Finally, we propose a reference-based fingerprint classification scheme for image attribution. Experimental results demonstrate that it is indeed helpful to use our MHFs for attributing the AI-generated images. The performance of our proposed method is significantly better than the state-of-the-art for both closed-set and open-set image attribution, where only a small number of images is required for training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,833 |
2006.03963 | Combinatorial Black-Box Optimization with Expert Advice | We consider the problem of black-box function optimization over the boolean hypercube. Despite the vast literature on black-box function optimization over continuous domains, not much attention has been paid to learning models for optimization over combinatorial domains until recently. However, the computational complexity of the recently devised algorithms is prohibitive even for moderate numbers of variables; drawing one sample using the existing algorithms is more expensive than a function evaluation for many black-box functions of interest. To address this problem, we propose a computationally efficient model learning algorithm based on multilinear polynomials and exponential weight updates. In the proposed algorithm, we alternate between simulated annealing with respect to the current polynomial representation and updating the weights using monomial experts' advice. Numerical experiments on various datasets in both unconstrained and sum-constrained boolean optimization indicate the competitive performance of the proposed algorithm, while improving the computational time by up to several orders of magnitude compared to state-of-the-art algorithms in the literature. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,499 |
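A minimal sketch of the monomial-experts component: a multilinear surrogate over {-1, +1}^n whose monomial weights are updated multiplicatively by an exponential weight rule. The degree cap, learning rate, and toy objective are illustrative assumptions, and the simulated-annealing half of the paper's alternation is omitted:

```python
import itertools, math, random

random.seed(0)

n = 4  # number of boolean variables
# Monomial experts: one per variable subset up to degree 2 (a hypothetical cap).
monomials = [s for r in range(3) for s in itertools.combinations(range(n), r)]
weights = {m: 1.0 for m in monomials}
eta = 0.1  # learning rate of the exponential weight update

def monomial_value(m, x):
    # x is in {-1, +1}^n; a monomial is the product of its variables
    v = 1
    for i in m:
        v *= x[i]
    return v

def surrogate(x):
    # Multilinear surrogate (normalised weighted vote of the experts);
    # this is the model simulated annealing would search over.
    total = sum(weights.values())
    return sum(w * monomial_value(m, x) for m, w in weights.items()) / total

def f(x):  # toy black-box objective: parity of the first two bits
    return x[0] * x[1]

# Exponential weight update: experts matching the observed value keep their
# weight; disagreeing experts are exponentially down-weighted.
for _ in range(200):
    x = [random.choice([-1, 1]) for _ in range(n)]
    y = f(x)
    for m in monomials:
        weights[m] *= math.exp(-eta * (monomial_value(m, x) - y) ** 2)

best = max(weights, key=weights.get)
print(best)  # the monomial (0, 1) matches f exactly and dominates
```

After a few hundred samples the weight mass concentrates on the monomials that explain the observed evaluations.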
2111.02249 | Learned Image Compression for Machine Perception | Recent work has shown that learned image compression strategies can outperform standard hand-crafted compression algorithms that have been developed over decades of intensive research on the rate-distortion trade-off. With growing applications of computer vision, high quality image reconstruction from a compressible representation is often a secondary objective. Compression that ensures high accuracy on computer vision tasks such as image segmentation, classification, and detection therefore has the potential for significant impact across a wide variety of settings. In this work, we develop a framework that produces a compression format suitable for both human perception and machine perception. We show that representations can be learned that simultaneously optimize for compression and performance on core vision tasks. Our approach allows models to be trained directly from compressed representations, and this approach yields increased performance on new tasks and in low-shot learning settings. We present results that improve upon segmentation and detection performance compared to standard high-quality JPEGs, but with representations that are four to ten times smaller in terms of bits per pixel. Further, unlike naive compression methods, at a level ten times smaller than standard JPEGs, segmentation and detection models trained from our format suffer only minor degradation in performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 264,813 |
2308.11249 | Video BagNet: short temporal receptive fields increase robustness in
long-term action recognition | Previous work on long-term video action recognition relies on deep 3D-convolutional models that have a large temporal receptive field (RF). We argue that these models are not always the best choice for temporal modeling in videos. A large temporal receptive field allows the model to encode the exact sub-action order of a video, which causes a performance decrease when testing videos have a different sub-action order. In this work, we investigate whether we can improve the model robustness to the sub-action order by shrinking the temporal receptive field of action recognition models. For this, we design Video BagNet, a variant of the 3D ResNet-50 model with the temporal receptive field size limited to 1, 9, 17 or 33 frames. We analyze Video BagNet on synthetic and real-world video datasets and experimentally compare models with varying temporal receptive fields. We find that short receptive fields are robust to sub-action order changes, while larger temporal receptive fields are sensitive to the sub-action order. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 387,069 |
1606.05593 | Introspective Agents: Confidence Measures for General Value Functions | Agents of general intelligence deployed in real-world scenarios must adapt to ever-changing environmental conditions. While such adaptive agents may leverage engineered knowledge, they will require the capacity to construct and evaluate knowledge themselves from their own experience in a bottom-up, constructivist fashion. This position paper builds on the idea of encoding knowledge as temporally extended predictions through the use of general value functions. Prior work has focused on learning predictions about externally derived signals about a task or environment (e.g. battery level, joint position). Here we advocate that the agent should also predict internally generated signals regarding its own learning process - for example, an agent's confidence in its learned predictions. Finally, we suggest how such information would be beneficial in creating an introspective agent that is able to learn to make good decisions in a complex, changing world. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 57,429 |
1811.10396 | Learning to Skip Ineffectual Recurrent Computations in LSTMs | Long Short-Term Memory (LSTM) is a special class of recurrent neural network, which has shown remarkable successes in processing sequential data. The typical architecture of an LSTM involves a set of states and gates: the states retain information over arbitrary time intervals and the gates regulate the flow of information. Due to the recursive nature of LSTMs, they are computationally intensive to deploy on edge devices with limited hardware resources. To reduce the computational complexity of LSTMs, we first introduce a method that learns to retain only the important information in the states by pruning redundant information. We then show that our method can prune over 90% of information in the states without incurring any accuracy degradation over a set of temporal tasks. This observation suggests that a large fraction of the recurrent computations are ineffectual and can be avoided to speed up the process during the inference as they involve noncontributory multiplications/accumulations with zero-valued states. Finally, we introduce a custom hardware accelerator that can perform the recurrent computations using both sparse and dense states. Experimental measurements show that performing the computations using the sparse states speeds up the process and improves energy efficiency by up to 5.2x when compared to implementation results of the accelerator performing the computations using dense states. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 114,479 |
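The core idea, skipping multiply-accumulates against zero-valued state entries, can be illustrated with a simple magnitude threshold. The paper learns what to retain; the fixed `threshold` below is a stand-in for that learned mechanism:

```python
import numpy as np

rng = np.random.default_rng(0)

def prune_state(h, threshold=0.1):
    """Zero out state entries below a magnitude threshold so that downstream
    multiply-accumulates against those entries can be skipped entirely."""
    return np.where(np.abs(h) < threshold, 0.0, h)

h = rng.normal(scale=0.1, size=256)   # LSTM hidden state (toy values)
W = rng.normal(size=(256, 256))       # recurrent weight matrix

h_sparse = prune_state(h)
nnz = np.count_nonzero(h_sparse)

# Only the columns of W matching nonzero state entries contribute, so the
# recurrent product can be computed over the surviving entries alone:
idx = np.nonzero(h_sparse)[0]
dense = W @ h_sparse
sparse = W[:, idx] @ h_sparse[idx]
print(np.allclose(dense, sparse), nnz < h.size)
```

The fraction of skipped columns translates directly into the speedup a sparse-aware accelerator can realize.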
0911.0183 | A Gibbs Sampling Based MAP Detection Algorithm for OFDM Over Rapidly
Varying Mobile Radio Channels | In orthogonal frequency-division multiplexing (OFDM) systems operating over rapidly time-varying channels, the orthogonality between subcarriers is destroyed leading to inter-carrier interference (ICI) and resulting in an irreducible error floor. In this paper, a new and low-complexity maximum {\em a posteriori} probability (MAP) detection algorithm is proposed for OFDM systems operating over rapidly time-varying multipath channels. The detection algorithm exploits the banded structure of the frequency-domain channel matrix whose bandwidth is a parameter to be adjusted according to the speed of the mobile terminal. Based on this assumption, the received signal vector is decomposed into reduced dimensional sub-observations in such a way that all components of the observation vector contributing to the symbol to be detected are included in the decomposed observation model. The data symbols are then detected by the MAP algorithm by means of a Markov chain Monte Carlo (MCMC) technique in an optimal and computationally efficient way. Computational complexity investigation as well as simulation results indicate that this algorithm has significant performance and complexity advantages over existing suboptimal detection and equalization algorithms proposed earlier in the literature. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,838 |
2204.00824 | Graph-based Approximate NN Search: A Revisit | Nearest neighbor search plays a fundamental role in many disciplines such as multimedia information retrieval, data-mining, and machine learning. The graph-based search approaches show superior performance over other types of approaches in recent studies. In this paper, the graph-based NN search is revisited. We optimize two key components in the approach, namely the search procedure and the graph that supports the search. For the graph construction, a two-stage graph diversification scheme is proposed, which makes a good trade-off between the efficiency and reachability for the search procedure that builds upon it. Moreover, the proposed diversification scheme allows the search procedure to decide dynamically how many nodes should be visited in one node's neighborhood. In this way, the computing power of the devices is fully utilized when the search is carried out under different circumstances. Furthermore, two NN search procedures are designed respectively for small and large batch queries on the GPU. The optimized NN search, when being supported by the two-stage diversified graph, outperforms all the state-of-the-art approaches on both the CPU and the GPU across all the considered large-scale datasets. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 289,400 |
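The search-procedure half of the approach is built on best-first graph traversal, sketched below on a toy dataset. The brute-force k-NN graph here stands in for the paper's two-stage diversified construction:

```python
import heapq, math, random

random.seed(1)

def dist(a, b):
    return math.dist(a, b)

# Toy dataset and a k-NN graph built by brute force (real systems use a
# diversified construction; this graph only serves to illustrate the search).
points = [(random.random(), random.random()) for _ in range(200)]
K = 8
graph = {
    i: sorted(range(len(points)), key=lambda j: dist(points[i], points[j]))[1:K + 1]
    for i in range(len(points))
}

def greedy_search(query, entry=0):
    """Best-first search: repeatedly expand the closest unvisited node until
    no candidate can improve on the best node found so far."""
    visited = {entry}
    best = entry
    candidates = [(dist(points[entry], query), entry)]
    while candidates:
        d, node = heapq.heappop(candidates)
        if d > dist(points[best], query):
            break  # closest remaining candidate cannot improve the result
        for nb in graph[node]:
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(candidates, (dist(points[nb], query), nb))
                if dist(points[nb], query) < dist(points[best], query):
                    best = nb
    return best, len(visited)

query = (0.5, 0.5)
found, touched = greedy_search(query)
print(dist(points[found], query) <= dist(points[0], query))  # True: never worse than the entry point
```

The number of nodes `touched` is typically a small fraction of the dataset, which is where the graph approaches get their speed.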
1805.06201 | Contextual Augmentation: Data Augmentation by Words with Paradigmatic
Relations | We propose a novel data augmentation for labeled sentences called contextual augmentation. We assume an invariance that sentences are natural even if the words in the sentences are replaced with other words with paradigmatic relations. We stochastically replace words with other words that are predicted by a bi-directional language model at the word positions. Words predicted according to a context are numerous but appropriate for the augmentation of the original words. Furthermore, we retrofit a language model with a label-conditional architecture, which allows the model to augment sentences without breaking the label-compatibility. Through experiments on six different text classification tasks, we demonstrate that the proposed method improves classifiers based on convolutional or recurrent neural networks. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 97,556 |
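The augmentation loop itself is simple; the sketch below uses a toy lookup table in place of the bi-directional language model, and the table entries, example sentence, and replacement probability are all illustrative assumptions:

```python
import random

random.seed(0)

# Toy stand-in for the bi-directional language model: given the left and right
# context words, propose paradigmatic substitutes. A real implementation would
# sample from the LM's predictive distribution at the masked position.
def predict_substitutes(left, right):
    table = {
        ("the", "are"): ["performances", "actors", "films"],
    }
    return table.get((left, right), [])

def contextual_augment(tokens, p=0.5):
    """Stochastically replace interior words with context-predicted substitutes.
    Contexts are read from the original sentence, not the partially edited one."""
    out = list(tokens)
    for i in range(1, len(tokens) - 1):
        cands = predict_substitutes(tokens[i - 1], tokens[i + 1])
        if cands and random.random() < p:
            out[i] = random.choice(cands)
    return out

sentence = "the performances are fantastic".split()
augmented = contextual_augment(sentence, p=1.0)
print(" ".join(augmented))
```

Because the substitutes are drawn from context rather than a synonym list, replacements can differ from the original word's meaning while still yielding a natural, label-compatible sentence.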
1711.08565 | Person Transfer GAN to Bridge Domain Gap for Person Re-Identification | Although the performance of person Re-Identification (ReID) has been significantly boosted, many challenging issues in real scenarios have not been fully investigated, e.g., the complex scenes and lighting variations, viewpoint and pose changes, and the large number of identities in a camera network. To facilitate the research towards conquering those issues, this paper contributes a new dataset called MSMT17 with many important features, e.g., 1) the raw videos are taken by a 15-camera network deployed in both indoor and outdoor scenes, 2) the videos cover a long period of time and present complex lighting variations, and 3) it contains currently the largest number of annotated identities, i.e., 4,101 identities and 126,441 bounding boxes. We also observe that domain gap commonly exists between datasets, which essentially causes severe performance drop when training and testing on different datasets. As a result, available training data cannot be effectively leveraged for new testing domains. To relieve the expensive costs of annotating new training samples, we propose a Person Transfer Generative Adversarial Network (PTGAN) to bridge the domain gap. Comprehensive experiments show that the domain gap could be substantially narrowed-down by the PTGAN. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,227 |
2401.03205 | The Dawn After the Dark: An Empirical Study on Factuality Hallucination
in Large Language Models | In the era of large language models (LLMs), hallucination (i.e., the tendency to generate factually incorrect content) poses a great challenge to the trustworthy and reliable deployment of LLMs in real-world applications. To tackle LLM hallucination, three key questions should be well studied: how to detect hallucinations (detection), why do LLMs hallucinate (source), and what can be done to mitigate them (mitigation). To address these challenges, this work presents a systematic empirical study on LLM hallucination, focusing on the three aspects of hallucination detection, source, and mitigation. Specifically, we construct a new hallucination benchmark, HaluEval 2.0, and design a simple yet effective detection method for LLM hallucination. Furthermore, we zoom into the different training and utilization stages of LLMs and extensively analyze the potential factors that lead to LLM hallucination. Finally, we implement and examine a series of widely used techniques to mitigate hallucinations in LLMs. Our work has led to several important findings for understanding the origin of hallucinations and mitigating them in LLMs. Our code and data can be accessed at https://github.com/RUCAIBox/HaluEval-2.0. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 420,020 |
1703.09026 | Trespassing the Boundaries: Labeling Temporal Bounds for Object
Interactions in Egocentric Video | Manual annotations of temporal bounds for object interactions (i.e. start and end times) are typical training input to recognition, localization and detection algorithms. For three publicly available egocentric datasets, we uncover inconsistencies in ground truth temporal bounds within and across annotators and datasets. We systematically assess the robustness of state-of-the-art approaches to changes in labeled temporal bounds, for object interaction recognition. As boundaries are trespassed, a drop of up to 10% is observed for both Improved Dense Trajectories and Two-Stream Convolutional Neural Network. We demonstrate that such disagreement stems from a limited understanding of the distinct phases of an action, and propose annotating based on the Rubicon Boundaries, inspired by a similarly named cognitive model, for consistent temporal bounds of object interactions. Evaluated on a public dataset, we report a 4% increase in overall accuracy, and an increase in accuracy for 55% of classes when Rubicon Boundaries are used for temporal annotations. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 70,687 |
2407.01065 | Improve ROI with Causal Learning and Conformal Prediction | In the commercial sphere, such as operations and maintenance, advertising, and marketing recommendations, intelligent decision-making utilizing data mining and neural network technologies is crucial, especially in resource allocation to optimize ROI. This study delves into the Cost-aware Binary Treatment Assignment Problem (C-BTAP) across different industries, with a focus on the state-of-the-art Direct ROI Prediction (DRP) method. However, the DRP model confronts issues like covariate shift and insufficient training data, hindering its real-world effectiveness. Addressing these challenges is essential for ensuring dependable and robust predictions in varied operational contexts. This paper presents a robust Direct ROI Prediction (rDRP) method, designed to address challenges in real-world deployment of neural network-based uplift models, particularly under conditions of covariate shift and insufficient training data. The rDRP method, enhancing the standard DRP model, does not alter the model's structure or require retraining. It utilizes conformal prediction and Monte Carlo dropout for interval estimation, adapting to model uncertainty and data distribution shifts. A heuristic calibration method, inspired by a Kaggle competition, combines point and interval estimates. The effectiveness of these approaches is validated through offline tests and online A/B tests in various settings, demonstrating significant improvements in target rewards compared to the state-of-the-art method. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 469,134 |
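As background for the interval-estimation ingredient, here is a minimal split-conformal sketch on synthetic data: calibrate a residual quantile on held-out points, then wrap point predictions in intervals. This illustrates conformal prediction generically, not the rDRP method's exact calibration:

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear(X, y):
    # Least-squares linear model with an intercept (stand-in for any predictor)
    w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
    return w

def predict(w, X):
    return np.c_[X, np.ones(len(X))] @ w

# Synthetic regression data: y = 2x + noise
n = 1000
X = rng.uniform(-1, 1, size=(n, 1))
y = 2 * X[:, 0] + rng.normal(scale=0.3, size=n)

train, calib = slice(0, 500), slice(500, 1000)
w = fit_linear(X[train], y[train])

# Conformity scores (absolute residuals) on the calibration split
scores = np.abs(y[calib] - predict(w, X[calib]))
alpha = 0.1
q = np.quantile(scores, np.ceil((1 - alpha) * (500 + 1)) / 500)

# Intervals [pred - q, pred + q] cover roughly (1 - alpha) of fresh points
X_test = rng.uniform(-1, 1, size=(200, 1))
y_test = 2 * X_test[:, 0] + rng.normal(scale=0.3, size=200)
pred = predict(w, X_test)
covered = np.mean((y_test >= pred - q) & (y_test <= pred + q))
print(round(covered, 2))  # should be close to 0.9
```

The same recipe applies with any point predictor, which is what makes conformal calibration attractive for deployed uplift models.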
2211.02643 | A Transformer Architecture for Online Gesture Recognition of
Mathematical Expressions | The Transformer architecture is shown to provide a powerful framework as an end-to-end model for building expression trees from online handwritten gestures corresponding to glyph strokes. In particular, the attention mechanism was successfully used to encode, learn and enforce the underlying syntax of expressions creating latent representations that are correctly decoded to the exact mathematical expression tree, providing robustness to ablated inputs and unseen glyphs. For the first time, the encoder is fed with spatio-temporal data tokens potentially forming an infinitely large vocabulary, which finds applications beyond that of online gesture recognition. A new supervised dataset of online handwriting gestures is provided for training models on generic handwriting recognition tasks and a new metric is proposed for the evaluation of the syntactic correctness of the output expression trees. A small Transformer model suitable for edge inference was successfully trained to an average normalised Levenshtein accuracy of 94%, resulting in valid postfix RPN tree representation for 94% of predictions. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 328,648 |
2311.14094 | Robust Decision Aggregation with Second-order Information | We consider a decision aggregation problem with two experts who each make a binary recommendation after observing a private signal about an unknown binary world state. An agent, who does not know the joint information structure between signals and states, sees the experts' recommendations and aims to match the action with the true state. In this setting, we study whether additionally supplying second-order information (each expert's forecast of the other's recommendation) could enable better aggregation. We adopt a minimax regret framework to evaluate the aggregator's performance, by comparing it to an omniscient benchmark that knows the joint information structure. With general information structures, we show that second-order information provides no benefit: no aggregator can improve over a trivial aggregator, which always follows the first expert's recommendation. However, positive results emerge when we assume experts' signals are conditionally independent given the world state. First, when the aggregator is deterministic, we present a robust aggregator that leverages second-order information, which can significantly outperform counterparts without it. Second, when the two experts are homogeneous, by adding a non-degeneracy assumption on the signals, we demonstrate that random aggregators using second-order information can surpass optimal ones without it. In the remaining settings, the second-order information is not beneficial. We also extend the above results to the setting where the aggregator's utility function is more general. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 409,991 |
2206.04733 | On Low-Complexity Quickest Intervention of Mutated Diffusion Processes
Through Local Approximation | We consider the problem of controlling a mutated diffusion process with an unknown mutation time. The problem is formulated as the quickest intervention problem with the mutation modeled by a change-point, which is a generalization of the quickest change-point detection (QCD). Our goal is to intervene in the mutated process as soon as possible while maintaining a low intervention cost with optimally chosen intervention actions. This model and the proposed algorithms can be applied to pandemic prevention (such as Covid-19) or misinformation containment. We formulate the problem as a partially observed Markov decision process (POMDP) and convert it to an MDP through the belief state of the change-point. We first propose a grid approximation approach to calculate the optimal intervention policy, whose computational complexity could be very high when the number of grids is large. In order to reduce the computational complexity, we further propose a low-complexity threshold-based policy through the analysis of the first-order approximation of the value functions in the ``local intervention'' regime. Simulation results show the low-complexity algorithm has a similar performance as the grid approximation and both perform much better than the QCD-based algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 301,743 |
2011.10704 | Neural Group Testing to Accelerate Deep Learning | Recent advances in deep learning have led to the use of large, deep neural networks with tens of millions of parameters. The sheer size of these networks imposes a challenging computational burden during inference. Existing work focuses primarily on accelerating each forward pass of a neural network. Inspired by the group testing strategy for efficient disease testing, we propose neural group testing, which accelerates inference by testing a group of samples in one forward pass. Groups of samples that test negative are ruled out. If a group tests positive, samples in that group are then retested adaptively. A key challenge of neural group testing is to modify a deep neural network so that it can test multiple samples in one forward pass. We propose three designs to achieve this without introducing any new parameters and evaluate their performance. We applied neural group testing in an image moderation task to detect rare but inappropriate images. We found that neural group testing can group up to 16 images in one forward pass and reduce the overall computation cost by over 73% while improving detection performance. | false | false | false | false | true | false | true | false | true | false | false | true | false | false | false | false | false | false | 207,593 |
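The group-then-retest control flow is ordinary adaptive group testing; the sketch below uses a toy predicate in place of the modified network's group-level forward pass:

```python
def group_test(samples, is_positive, group_size=16):
    """Adaptive group testing: test each group in one pass, then retest the
    members of positive groups individually. `is_positive` stands in for a
    forward pass of the modified network on a (possibly grouped) input."""
    tests = 0
    flagged = []
    for start in range(0, len(samples), group_size):
        group = samples[start:start + group_size]
        tests += 1                       # one forward pass for the whole group
        if is_positive(group):
            for s in group:              # adaptive retesting within the group
                tests += 1
                if is_positive([s]):
                    flagged.append(s)
    return flagged, tests

# Toy setting: samples are integers, "inappropriate" ones are negative numbers.
samples = list(range(60)) + [-1, -2]
flagged, tests = group_test(samples, lambda g: any(x < 0 for x in g))
print(flagged, tests)  # [-1, -2] 18, versus 62 individual tests
```

When positives are rare, most groups test negative and are cleared with a single pass, which is the source of the reported savings.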
2012.12060 | Information Leakage Games: Exploring Information as a Utility Function | A common goal in the areas of secure information flow and privacy is to build effective defenses against unwanted leakage of information. To this end, one must be able to reason about potential attacks and their interplay with possible defenses. In this paper, we propose a game-theoretic framework to formalize strategies of attacker and defender in the context of information leakage, and provide a basis for developing optimal defense methods. A novelty of our games is that their utility is given by information leakage, which in some cases may behave in a non-linear way. This causes a significant deviation from classic game theory, in which utility functions are linear with respect to players' strategies. Hence, a key contribution of this paper is the establishment of the foundations of information leakage games. We consider two kinds of games, depending on the notion of leakage considered. The first kind, the QIF-games, is tailored for the theory of quantitative information flow (QIF). The second one, the DP-games, corresponds to differential privacy (DP). | false | false | false | false | true | false | false | false | false | true | false | false | true | false | false | false | false | true | 212,810 |
2207.04789 | bloomRF: On Performing Range-Queries in Bloom-Filters with
Piecewise-Monotone Hash Functions and Prefix Hashing | We introduce bloomRF as a unified method for approximate membership testing that supports both point- and range-queries. As a first core idea, bloomRF introduces novel prefix hashing to efficiently encode range information in the hash-code of the key itself. As a second key concept, bloomRF proposes novel piecewise-monotone hash-functions that preserve local order and support fast range-lookups with fewer memory accesses. bloomRF has near-optimal space complexity and constant query complexity. Although bloomRF is designed for integer domains, it supports floating-point keys and can serve as a multi-attribute filter. The evaluation in RocksDB and in a standalone library shows that it is more efficient and outperforms existing point-range filters by up to 4x across a range of settings and distributions, while keeping the false-positive rate low. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 307,311 |
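The prefix-hashing idea can be sketched with a set in place of the bit array: inserting a key stores all of its binary prefixes, and a range query probes dyadic intervals that fit inside the range. The 8-bit key space and the exhaustive probing loop are simplifications; bloomRF's piecewise-monotone hash functions and actual Bloom encoding are not modeled:

```python
BITS = 8  # toy key width

def prefixes(key):
    # All binary prefixes of the key, as (prefix value, bits shifted off) pairs
    return [(key >> s, s) for s in range(BITS + 1)]

class PrefixFilter:
    def __init__(self):
        self.table = set()  # a real Bloom filter would hash into a bit array

    def insert(self, key):
        self.table.update(prefixes(key))

    def may_contain_range(self, lo, hi):
        # A key lies in [lo, hi] iff one of its prefixes corresponds to a
        # dyadic interval [p << s, ((p + 1) << s) - 1] contained in [lo, hi].
        # (A real implementation probes only the O(log) maximal intervals of
        # the dyadic decomposition rather than scanning all of them.)
        for s in range(BITS + 1):
            for p in range(lo >> s, (hi >> s) + 1):
                inside = (p << s) >= lo and ((p + 1) << s) - 1 <= hi
                if inside and (p, s) in self.table:
                    return True
        return False

f = PrefixFilter()
f.insert(42)
print(f.may_contain_range(40, 50), f.may_contain_range(100, 120))  # True False
```

Because every prefix of an inserted key is stored, a range probe needs no per-key enumeration of the range; it only checks whether some stored prefix covers a sub-interval of the query.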
2201.10249 | Diversity in the Music Listening Experience: Insights from Focus Group
Interviews | Music listening in today's digital spaces is highly characterized by the availability of huge music catalogues, accessible by people all over the world. In this scenario, recommender systems are designed to guide listeners in finding tracks and artists that best fit their requests, having therefore the power to influence the diversity of the music they listen to. Albeit several works have proposed new techniques for developing diversity-aware recommendations, little is known about how people perceive diversity while interacting with music recommendations. In this study, we interview several listeners about the role that diversity plays in their listening experience, trying to get a better understanding of how they interact with music recommendations. We recruit the listeners among the participants of a previous quantitative study, where they were confronted with the notion of diversity when asked to identify, from a series of electronic music lists, the most diverse ones according to their beliefs. As a follow-up, in this qualitative study we carry out semi-structured interviews to understand how listeners may assess the diversity of a music list and to investigate their experiences with music recommendation diversity. We report here our main findings on 1) what can influence the diversity assessment of tracks and artists' music lists, and 2) which factors can characterize listeners' interaction with music recommendation diversity. | true | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 276,925 |
2302.09301 | Exploring the Representation Manifolds of Stable Diffusion Through the
Lens of Intrinsic Dimension | Prompting has become an important mechanism by which users can more effectively interact with many flavors of foundation model. Indeed, the last several years have shown that well-honed prompts can sometimes unlock emergent capabilities within such models. While there has been a substantial amount of empirical exploration of prompting within the community, relatively few works have studied prompting at a mathematical level. In this work we aim to take a first step towards understanding basic geometric properties induced by prompts in Stable Diffusion, focusing on the intrinsic dimension of internal representations within the model. We find that choice of prompt has a substantial impact on the intrinsic dimension of representations at both layers of the model which we explored, but that the nature of this impact depends on the layer being considered. For example, in certain bottleneck layers of the model, intrinsic dimension of representations is correlated with prompt perplexity (measured using a surrogate model), while this correlation is not apparent in the latent layers. Our evidence suggests that intrinsic dimension could be a useful tool for future studies of the impact of different prompts on text-to-image models. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 346,361 |
2211.15242 | Ising Model on Locally Tree-like Graphs: Uniqueness of Solutions to
Cavity Equations | In the study of Ising models on large locally tree-like graphs, in both rigorous and non-rigorous methods one is often led to understanding the so-called belief propagation distributional recursions and its fixed points. We prove that there is at most one non-trivial fixed point for Ising models with zero or certain random external fields. Previously this was only known for sufficiently ``low-temperature'' models. Our main innovation is in applying information-theoretic ideas of channel comparison leading to a new metric (degradation index) between binary-input-symmetric (BMS) channels under which the Belief Propagation (BP) operator is a strict contraction (albeit non-multiplicative). A key ingredient of our proof is a strengthening of the classical stringy tree lemma of (Evans-Kenyon-Peres-Schulman'00). Our result simultaneously closes the following 6 conjectures in the literature: 1) independence of robust reconstruction accuracy to leaf noise in broadcasting on trees (Mossel-Neeman-Sly'16); 2) uselessness of global information for a labeled 2-community stochastic block model, or 2-SBM (Kanade-Mossel-Schramm'16); 3) optimality of local algorithms for 2-SBM under noisy side information (Mossel-Xu'16); 4) uniqueness of BP fixed point in broadcasting on trees in the Gaussian (large degree) limit (ibid); 5) boundary irrelevance in broadcasting on trees (Abbe-Cornacchia-Gu-Polyanskiy'21); 6) characterization of entropy (and mutual information) of community labels given the graph in 2-SBM (ibid). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 333,175 |
1809.05127 | IL-Net: Using Expert Knowledge to Guide the Design of Furcated Neural
Networks | Deep neural networks (DNN) excel at extracting patterns. Through representation learning and automated feature engineering on large datasets, such models have been highly successful in computer vision and natural language applications. Designing optimal network architectures from a principled or rational approach, however, has been less than successful, with the best successful approaches utilizing an additional machine learning algorithm to tune the network hyperparameters. However, in many technical fields, there exists established domain knowledge and understanding about the subject matter. In this work, we develop a novel furcated neural network architecture that utilizes domain knowledge as high-level design principles of the network. We demonstrate proof-of-concept by developing IL-Net, a furcated network for predicting the properties of ionic liquids, which is a class of complex multi-chemical entities. Compared to existing state-of-the-art approaches, we show that furcated networks can improve model accuracy by approximately 20-35%, without using additional labeled data. Lastly, we distill two key design principles for furcated networks that can be adapted to other domains. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 107,721
2411.02236 | 3D Audio-Visual Segmentation | Recognizing the sounding objects in scenes is a longstanding objective in embodied AI, with diverse applications in robotics and AR/VR/MR. To that end, Audio-Visual Segmentation (AVS), taking as condition an audio signal to identify the masks of the target sounding objects in an input image with synchronous camera and microphone sensors, has been recently advanced. However, this paradigm is still insufficient for real-world operation, as the mapping from 2D images to 3D scenes is missing. To address this fundamental limitation, we introduce a novel research problem, 3D Audio-Visual Segmentation, extending the existing AVS to the 3D output space. This problem poses more challenges due to variations in camera extrinsics, audio scattering, occlusions, and diverse acoustics across sounding object categories. To facilitate this research, we create the very first simulation based benchmark, 3DAVS-S34-O7, providing photorealistic 3D scene environments with grounded spatial audio under single-instance and multi-instance settings, across 34 scenes and 7 object categories. This is made possible by re-purposing the Habitat simulator to generate comprehensive annotations of sounding object locations and corresponding 3D masks. Subsequently, we propose a new approach, EchoSegnet, characterized by integrating the ready-to-use knowledge from pretrained 2D audio-visual foundation models synergistically with 3D visual scene representation through spatial audio-aware mask alignment and refinement. Extensive experiments demonstrate that EchoSegnet can effectively segment sounding objects in 3D space on our new benchmark, representing a significant advancement in the field of embodied AI. Project page: https://surrey-uplab.github.io/research/3d-audio-visual-segmentation/ | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 505,410 |
2304.04158 | Does Continual Learning Equally Forget All Parameters? | Distribution shift (e.g., task or domain shift) in continual learning (CL) usually results in catastrophic forgetting of neural networks. Although it can be alleviated by repeatedly replaying buffered data, the every-step replay is time-consuming. In this paper, we study which modules in neural networks are more prone to forgetting by investigating their training dynamics during CL. Our proposed metrics show that only a few modules are more task-specific and sensitively alter between tasks, while others can be shared across tasks as common knowledge. Hence, we attribute forgetting mainly to the former and find that finetuning them only on a small buffer at the end of any CL method can bring non-trivial improvement. Due to the small number of finetuned parameters, such ``Forgetting Prioritized Finetuning (FPF)'' is efficient in computation. We further propose a more efficient and simpler method that entirely removes the every-step replay and replaces them by only $k$-times of FPF periodically triggered during CL. Surprisingly, this ``$k$-FPF'' performs comparably to FPF and outperforms the SOTA CL methods but significantly reduces their computational overhead and cost. In experiments on several benchmarks of class- and domain-incremental CL, FPF consistently improves existing CL methods by a large margin, and $k$-FPF further excels in efficiency without degrading the accuracy. We also empirically studied the impact of buffer size, epochs per task, and finetuning modules on the cost and accuracy of our methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 357,105 |
2104.10378 | Wireless Sensing With Deep Spectrogram Network and Primitive Based
Autoregressive Hybrid Channel Model | Human motion recognition (HMR) based on wireless sensing is a low-cost technique for scene understanding. Current HMR systems adopt support vector machines (SVMs) and convolutional neural networks (CNNs) to classify radar signals. However, whether a deeper learning model could improve the system performance is currently not known. On the other hand, training a machine learning model requires a large dataset, but data gathering from experiments is costly and time-consuming. Although wireless channel models can be adopted for dataset generation, current channel models are mostly designed for communication rather than sensing. To address the above problems, this paper proposes a deep spectrogram network (DSN) by leveraging the residual mapping technique to enhance the HMR performance. Furthermore, a primitive based autoregressive hybrid (PBAH) channel model is developed, which facilitates efficient training and testing dataset generation for HMR in a virtual environment. Experimental results demonstrate that the proposed PBAH channel model matches the actual experimental data very well and the proposed DSN achieves significantly smaller recognition error than that of a CNN. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 231,560
0904.0814 | Stability Analysis and Learning Bounds for Transductive Regression
Algorithms | This paper uses the notion of algorithmic stability to derive novel generalization bounds for several families of transductive regression algorithms, both by using convexity and closed-form solutions. Our analysis helps compare the stability of these algorithms. It also shows that a number of widely used transductive regression algorithms are in fact unstable. Finally, it reports the results of experiments with local transductive regression demonstrating the benefit of our stability bounds for model selection, for one of the algorithms, in particular for determining the radius of the local neighborhood used by the algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 3,487 |
2209.05917 | SpaDE: Improving Sparse Representations using a Dual Document Encoder
for First-stage Retrieval | Sparse document representations have been widely used to retrieve relevant documents via exact lexical matching. Owing to the pre-computed inverted index, they support fast ad-hoc search but incur the vocabulary mismatch problem. Although recent neural ranking models using pre-trained language models can address this problem, they usually incur expensive query inference costs, implying the trade-off between effectiveness and efficiency. Tackling the trade-off, we propose a novel uni-encoder ranking model, Sparse retriever using a Dual document Encoder (SpaDE), learning document representation via the dual encoder. Each encoder plays a central role in (i) adjusting the importance of terms to improve lexical matching and (ii) expanding additional terms to support semantic matching. Furthermore, our co-training strategy trains the dual encoder effectively and avoids unnecessary interference between the two encoders during training. Experimental results on several benchmarks show that SpaDE outperforms existing uni-encoder ranking models. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 317,241
1006.1699 | Multidimensional Datawarehouse with Combination Formula | Multidimensionality in a data warehouse is a necessity and is central to information delivery; without it, a data warehouse is incomplete. Multidimensionality gives the ability to analyze business measurements in many different ways, and it is also synonymous with online analytical processing (OLAP). Using data warehouse concepts such as slice-and-dice, drill-down, and roll-up increases the analytical ability of a multidimensional data warehouse. The research question discussed in this paper is how deep the multidimensional ability of each fact table in a data warehouse can go. Using the statistical combination formula, we explore the combinations that can be yielded from the dimensions in hypercubes: the entire set of dimension combinations, the minimum combination, and the maximum combination. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 6,732
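The counting idea in the abstract above is standard combinatorics: a fact table with n dimensions admits sum of C(n, k) for k = 1..n, i.e. 2^n - 1, non-empty dimension combinations. A minimal sketch (the function name is ours, not the paper's):

```python
from math import comb

def dimension_combinations(n_dims):
    """Number of non-empty dimension combinations for a fact table
    with n_dims dimensions: sum of C(n, k) for k = 1..n, which
    equals 2**n_dims - 1."""
    return sum(comb(n_dims, k) for k in range(1, n_dims + 1))

# A fact table with 4 dimensions supports 15 distinct groupings,
# from single-dimension slices up to the full 4-dimensional cube.
print(dimension_combinations(4))  # -> 15
```

The minimum combination here is the n single-dimension views (k = 1) and the maximum is the single full cube (k = n).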
2311.12233 | Unifying Corroborative and Contributive Attributions in Large Language
Models | As businesses, products, and services spring up around large language models, the trustworthiness of these models hinges on the verifiability of their outputs. However, methods for explaining language model outputs largely fall across two distinct fields of study which both use the term "attribution" to refer to entirely separate techniques: citation generation and training data attribution. In many modern applications, such as legal document generation and medical question answering, both types of attributions are important. In this work, we argue for and present a unified framework of large language model attributions. We show how existing methods of different types of attribution fall under the unified framework. We also use the framework to discuss real-world use cases where one or both types of attributions are required. We believe that this unified framework will guide the use case driven development of systems that leverage both types of attribution, as well as the standardization of their evaluation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 409,251 |
2002.04017 | Provable Self-Play Algorithms for Competitive Reinforcement Learning | Self-play, where the algorithm learns by playing against itself without requiring any direct supervision, has become the new weapon in modern Reinforcement Learning (RL) for achieving superhuman performance in practice. However, the majority of existing theory in reinforcement learning only applies to the setting where the agent plays against a fixed environment; it remains largely open whether self-play algorithms can be provably effective, especially when it is necessary to manage the exploration/exploitation tradeoff. We study self-play in competitive reinforcement learning under the setting of Markov games, a generalization of Markov decision processes to the two-player case. We introduce a self-play algorithm---Value Iteration with Upper/Lower Confidence Bound (VI-ULCB)---and show that it achieves regret $\tilde{\mathcal{O}}(\sqrt{T})$ after playing $T$ steps of the game, where the regret is measured by the agent's performance against a \emph{fully adversarial} opponent who can exploit the agent's strategy at \emph{any} step. We also introduce an explore-then-exploit style algorithm, which achieves a slightly worse regret of $\tilde{\mathcal{O}}(T^{2/3})$, but is guaranteed to run in polynomial time even in the worst case. To the best of our knowledge, our work presents the first line of provably sample-efficient self-play algorithms for competitive reinforcement learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 163,479
2003.12756 | Harmonic Decompositions of Convolutional Networks | We present a description of the function space and the smoothness class associated with a convolutional network using the machinery of reproducing kernel Hilbert spaces. We show that the mapping associated with a convolutional network expands into a sum involving elementary functions akin to spherical harmonics. This functional decomposition can be related to the functional ANOVA decomposition in nonparametric statistics. Building off our functional characterization of convolutional networks, we obtain statistical bounds highlighting an interesting trade-off between the approximation error and the estimation error. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 170,002 |
2011.14917 | Comparative Analysis of Extreme Verification Latency Learning Algorithms | One of the more challenging real-world problems in computational intelligence is to learn from non-stationary streaming data, also known as concept drift. Perhaps even a more challenging version of this scenario is when -- following a small set of initial labeled data -- the data stream consists of unlabeled data only. Such a scenario is typically referred to as learning in an initially labeled nonstationary environment, or simply as extreme verification latency (EVL). Because of the very challenging nature of the problem, very few algorithms have been proposed in the literature to date. This work is a first effort to provide the research community with a review of some of the important and prominent existing algorithms in this field. More specifically, this paper is a comprehensive survey and comparative analysis of some of the EVL algorithms to point out the weaknesses and strengths of different approaches from three different perspectives: classification accuracy, computational complexity, and parameter sensitivity, using several synthetic and real-world datasets. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 208,929
2312.15561 | README: Bridging Medical Jargon and Lay Understanding for Patient
Education through Data-Centric NLP | The advancement in healthcare has shifted focus toward patient-centric approaches, particularly in self-care and patient education, facilitated by access to Electronic Health Records (EHR). However, medical jargon in EHRs poses significant challenges in patient comprehension. To address this, we introduce a new task of automatically generating lay definitions, aiming to simplify complex medical terms into patient-friendly lay language. We first created the README dataset, an extensive collection of over 50,000 unique (medical term, lay definition) pairs and 300,000 mentions, each offering context-aware lay definitions manually annotated by domain experts. We have also engineered a data-centric Human-AI pipeline that synergizes data filtering, augmentation, and selection to improve data quality. We then used README as the training data for models and leveraged a Retrieval-Augmented Generation method to reduce hallucinations and improve the quality of model outputs. Our extensive automatic and human evaluations demonstrate that open-source mobile-friendly models, when fine-tuned with high-quality data, are capable of matching or even surpassing the performance of state-of-the-art closed-source large language models like ChatGPT. This research represents a significant stride in closing the knowledge gap in patient education and advancing patient-centric healthcare solutions. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 418,048 |
1806.00589 | Efficient Entropy for Policy Gradient with Multidimensional Action Space | In recent years, deep reinforcement learning has been shown to be adept at solving sequential decision processes with high-dimensional state spaces such as in the Atari games. Many reinforcement learning problems, however, involve high-dimensional discrete action spaces as well as high-dimensional state spaces. This paper considers entropy bonus, which is used to encourage exploration in policy gradient. In the case of high-dimensional action spaces, calculating the entropy and its gradient requires enumerating all the actions in the action space and running forward and backpropagation for each action, which may be computationally infeasible. We develop several novel unbiased estimators for the entropy bonus and its gradient. We apply these estimators to several models for the parameterized policies, including Independent Sampling, CommNet, Autoregressive with Modified MDP, and Autoregressive with LSTM. Finally, we test our algorithms on two environments: a multi-hunter multi-rabbit grid game and a multi-agent multi-arm bandit problem. The results show that our entropy estimators substantially improve performance with marginal additional computational cost. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 99,341 |
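For the Independent Sampling policy mentioned in the abstract above, the entropy of the joint action distribution factorizes into a sum of per-dimension entropies, which is what makes enumeration of the joint action space avoidable. A minimal sketch of that identity (illustrative only; the paper's unbiased estimators for non-independent parameterizations are not reproduced here):

```python
import itertools
import math

def entropy(p):
    # Shannon entropy (nats) of a discrete distribution given as a list of probabilities
    return -sum(q * math.log(q) for q in p if q > 0)

def joint_entropy_enumerated(dims):
    # Brute force: enumerate every joint action, whose probability is the
    # product of per-dimension probabilities under independence.
    joint = [math.prod(ps) for ps in itertools.product(*dims)]
    return entropy(joint)

def joint_entropy_factored(dims):
    # Independence lets us sum per-dimension entropies instead of enumerating.
    return sum(entropy(p) for p in dims)

dims = [[0.7, 0.3], [0.2, 0.5, 0.3]]
print(abs(joint_entropy_enumerated(dims) - joint_entropy_factored(dims)) < 1e-12)  # -> True
```

The brute-force version costs the product of the per-dimension action counts, while the factored version is linear in the number of dimensions, which is the computational gap the paper's estimators target for richer policy classes.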
2107.12939 | Optimal Frequency Regulation using Packetized Energy Management | Packetized energy management (PEM) is a demand dispatch scheme that can be used to provide ancillary services such as frequency regulation. In PEM, distributed energy resources (DERs) are granted uninterruptible access to the grid for a pre-specified time interval called the packet length. This results in a down ramp-limited response in PEM for DERs that can only consume power from the grid. In this work, a linearized virtual battery model of PEM is provided that is capable of predicting the down-ramp limited output of PEM and is used in a model predictive control (MPC) framework to improve the performance of PEM in tracking an automatic generation control (AGC) signal. By performing statistical analysis on the AGC regulation signal, PJM Reg-D, an ARMA model is derived as a predictor for the MPC-based precompensator. Finally, as an alternative to MPC, it is shown that by varying the packet length as a function of time, for example through packet randomization, frequency regulation can be improved under PEM. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 248,053 |
2312.11513 | Maatphor: Automated Variant Analysis for Prompt Injection Attacks | Prompt injection has emerged as a serious security threat to large language models (LLMs). At present, the current best practice for defending against newly-discovered prompt injection techniques is to add additional guardrails to the system (e.g., by updating the system prompt or using classifiers on the input and/or output of the model). However, in the same way that variants of a piece of malware are created to evade anti-virus software, variants of a prompt injection can be created to evade the LLM's guardrails. Ideally, when a new prompt injection technique is discovered, candidate defenses should be tested not only against the successful prompt injection, but also against possible variants. In this work, we present Maatphor, a tool to assist defenders in performing automated variant analysis of known prompt injection attacks. This involves solving two main challenges: (1) automatically generating variants of a given prompt, and (2) automatically determining whether a variant was effective based only on the output of the model. This tool can also assist in generating datasets for jailbreak and prompt injection attacks, thus overcoming the scarcity of data in this domain. We evaluate Maatphor on three different types of prompt injection tasks. Starting from an ineffective (0%) seed prompt, Maatphor consistently generates variants that are at least 60% effective within the first 40 iterations. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 416,602
1003.0691 | Statistical and Computational Tradeoffs in Stochastic Composite
Likelihood | Maximum likelihood estimators are often of limited practical use due to the intensive computation they require. We propose a family of alternative estimators that maximize a stochastic variation of the composite likelihood function. Each of the estimators resolves the computation-accuracy tradeoff differently, and taken together they span a continuous spectrum of computation-accuracy tradeoff resolutions. We prove the consistency of the estimators, provide formulas for their asymptotic variance, statistical robustness, and computational complexity. We discuss experimental results in the context of Boltzmann machines and conditional random fields. The theoretical and experimental studies demonstrate the effectiveness of the estimators when the computational resources are insufficient. They also demonstrate that in some cases reduced computational complexity is associated with robustness, thereby increasing statistical accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 5,833
2402.12212 | Polarization of Autonomous Generative AI Agents Under Echo Chambers | Online social networks often create echo chambers where people only hear opinions reinforcing their beliefs. An echo chamber often generates polarization, leading to conflicts caused by people with radical opinions, such as the January 6, 2021, attack on the US Capitol. The echo chamber has been viewed as a human-specific problem, but this implicit assumption is becoming less reasonable as large language models, such as ChatGPT, acquire social abilities. In response to this situation, we investigated the potential for polarization to occur among a group of autonomous AI agents based on generative language models in an echo chamber environment. We had AI agents discuss specific topics and analyzed how the group's opinions changed as the discussion progressed. As a result, we found that the group of agents based on ChatGPT tended to become polarized in echo chamber environments. The analysis of opinion transitions shows that this result is caused by ChatGPT's high prompt understanding ability to update its opinion by considering its own and surrounding agents' opinions. We conducted additional experiments to investigate under what specific conditions AI agents tended to polarize. As a result, we identified factors that strongly influence polarization, such as the agent's persona. These factors should be monitored to prevent the polarization of AI agents. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 430,754 |
2501.02278 | An experimental comparison of tree-data structures for connectivity
queries on fully-dynamic undirected graphs (Extended Version) | During the past decades significant efforts have been made to propose data structures for answering connectivity queries on fully dynamic graphs, i.e., graphs with frequent insertions and deletions of edges. However, a comprehensive understanding of how these data structures perform in practice is missing, since not all of them have been implemented, let alone evaluated experimentally. We provide reference implementations for the proposed data structures and experimentally evaluate them on a wide range of graphs. Our findings show that the current solutions are not ready to be deployed in systems as is, as every data structure has critical weaknesses when used in practice. Key limitations that must be overcome are the space and time overhead incurred by balanced data structures, the degeneration of the runtime of space-efficient data structures in worst case scenarios, and the maintenance costs for balanced data structures. We detail our findings in the experimental evaluation and provide recommendations for implementing robust solutions for answering connectivity queries on dynamic graphs. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 522,419 |
cmp-lg/9406033 | Verb Semantics and Lexical Selection | This paper will focus on the semantic representation of verbs in computer systems and its impact on lexical selection problems in machine translation (MT). Two groups of English and Chinese verbs are examined to show that lexical selection must be based on interpretation of the sentence as well as selection restrictions placed on the verb arguments. A novel representation scheme is suggested, and is compared to representations with selection restrictions used in transfer-based MT. We see our approach as closely aligned with knowledge-based MT approaches (KBMT), and as a separate component that could be incorporated into existing systems. Examples and experimental results will show that, using this scheme, inexact matches can achieve correct lexical selection. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,112 |
1809.06098 | Policy Optimization via Importance Sampling | Policy optimization is an effective reinforcement learning approach to solve continuous control tasks. Recent achievements have shown that alternating online and offline optimization is a successful choice for efficient trajectory reuse. However, deciding when to stop optimizing and collect new trajectories is non-trivial, as it requires to account for the variance of the objective function estimate. In this paper, we propose a novel, model-free, policy search algorithm, POIS, applicable in both action-based and parameter-based settings. We first derive a high-confidence bound for importance sampling estimation; then we define a surrogate objective function, which is optimized offline whenever a new batch of trajectories is collected. Finally, the algorithm is tested on a selection of continuous control tasks, with both linear and deep policies, and compared with state-of-the-art policy optimization methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 107,960 |
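The importance-sampling machinery underlying the abstract above can be illustrated with a minimal off-policy return estimator. This is a generic sketch only: it shows the per-trajectory weighting by policy probability ratios, not POIS's surrogate objective or its high-confidence bounds, and the function names are ours.

```python
import math

def is_estimate(trajectories, logp_target, logp_behavior):
    """Plain importance-sampling estimate of the target policy's expected
    return from trajectories collected under a behavior policy: each
    trajectory's return is weighted by the product of per-step probability
    ratios pi_target(a|s) / pi_behavior(a|s), computed in log space."""
    total = 0.0
    for traj in trajectories:  # traj is a list of (state, action, reward) tuples
        log_w = sum(logp_target(s, a) - logp_behavior(s, a) for s, a, _ in traj)
        ret = sum(r for _, _, r in traj)
        total += math.exp(log_w) * ret
    return total / len(trajectories)
```

When the target and behavior policies coincide, every weight is 1 and the estimate reduces to the empirical mean return; the variance of the weights is exactly what POIS's high-confidence bound penalizes when deciding how far to optimize offline.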
2410.16212 | Comprehensive benchmarking of large language models for RNA secondary
structure prediction | Inspired by the success of large language models (LLM) for DNA and proteins, several LLM for RNA have been developed recently. RNA-LLM uses large datasets of RNA sequences to learn, in a self-supervised way, how to represent each RNA base with a semantically rich numerical vector. This is done under the hypothesis that obtaining high-quality RNA representations can enhance data-costly downstream tasks. Among them, predicting the secondary structure is a fundamental task for uncovering RNA functional mechanisms. In this work we present a comprehensive experimental analysis of several pre-trained RNA-LLM, comparing them for the RNA secondary structure prediction task in a unified deep learning framework. The RNA-LLM were assessed with increasing generalization difficulty on benchmark datasets. Results showed that two LLM clearly outperform the other models, and revealed significant challenges for generalization in low-homology scenarios. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 500,929
2102.09680 | Fixing Errors of the Google Voice Recognizer through Phonetic Distance
Metrics | Speech recognition systems for the Spanish language, such as Google's, produce errors quite frequently when used in applications of a specific domain. These errors mostly occur when recognizing words new to the recognizer's language model or ad hoc to the domain. This article presents an algorithm that uses Levenshtein distance on phonemes to reduce the speech recognizer's errors. The preliminary results show that it is possible to correct the recognizer's errors significantly by using this metric and using a dictionary of specific phrases from the domain of the application. Despite being designed for particular domains, the algorithm proposed here is of general application. The phrases that must be recognized can be explicitly defined for each application, without the algorithm having to be modified. It is enough to indicate to the algorithm the set of sentences on which it must work. The algorithm's complexity is $O(tn)$, where $t$ is the number of words in the transcript to be corrected, and $n$ is the number of phrases specific to the domain. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 220,851 |
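The correction scheme in the abstract above can be sketched with classic dynamic-programming Levenshtein distance: score the transcript against each phrase in the domain dictionary and pick the closest one, giving the stated O(tn) scan over n phrases. This sketch compares character sequences as a stand-in for the paper's phoneme sequences, and the example phrases are invented:

```python
def levenshtein(a, b):
    # Classic DP edit distance between two sequences, one row at a time.
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        curr = [i]
        for j, y in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (x != y)))  # substitution
        prev = curr
    return prev[-1]

def correct(transcript, domain_phrases):
    # Replace the recognizer output with the nearest domain phrase.
    return min(domain_phrases, key=lambda p: levenshtein(transcript, p))

print(correct("abrir la benta", ["abrir la ventana", "cerrar la puerta"]))
# -> abrir la ventana
```

In practice one would convert both transcript and dictionary phrases to phoneme strings first and only accept the nearest phrase when its distance falls below a threshold, so out-of-dictionary utterances pass through unchanged.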
2203.09281 | Ranking of Communities in Multiplex Spatiotemporal Models of Brain
Dynamics | As a relatively new field, network neuroscience has tended to focus on aggregate behaviours of the brain averaged over many successive experiments or over long recordings in order to construct robust brain models. These models are limited in their ability to explain dynamic state changes in the brain which occur spontaneously as a result of normal brain function. Hidden Markov Models (HMMs) trained on neuroimaging time series data have since arisen as a method to produce dynamical models that are easy to train but can be difficult to fully parametrise or analyse. We propose an interpretation of these neural HMMs as multiplex brain state graph models we term Hidden Markov Graph Models (HMGMs). This interpretation allows dynamic brain activity to be analysed using the full repertoire of network analysis techniques. Furthermore, we propose a general method for selecting HMM hyperparameters in the absence of external data, based on the principle of maximum entropy, and use this to select the number of layers in the multiplex model. We produce a new tool for determining important communities of brain regions using a spatiotemporal random walk-based procedure that takes advantage of the underlying Markov structure of the model. Our analysis of real multi-subject fMRI data provides new results that corroborate the modular processing hypothesis of the brain at rest, as well as contributing new evidence of functional overlap between and within dynamic brain state communities. Our analysis pipeline provides a way to characterise dynamic network activity of the brain under novel behaviours or conditions. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 286,102 |
1602.07337 | Sparse Estimation of Multivariate Poisson Log-Normal Models from Count
Data | Modeling data with multivariate count responses is a challenging problem due to the discrete nature of the responses. Existing methods for univariate count responses cannot be easily extended to the multivariate case since the dependency among multiple responses needs to be properly accommodated. In this paper, we propose a multivariate Poisson log-normal regression model for multivariate data with count responses. By simultaneously estimating the regression coefficients and the inverse covariance matrix over the latent variables with an efficient Monte Carlo EM algorithm, the proposed regression model takes advantage of the association among multiple count responses to improve model prediction performance. Simulation studies and applications to real-world data are conducted to systematically evaluate the performance of the proposed method in comparison with conventional methods. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 52,494 |
1907.06417 | Quick, Stat!: A Statistical Analysis of the Quick, Draw! Dataset | The Quick, Draw! Dataset is a Google dataset with a collection of 50 million drawings, divided into 345 categories, collected from the users of the game Quick, Draw!. In contrast with most existing image datasets, in the Quick, Draw! Dataset drawings are stored as time series of pencil positions instead of a bitmap matrix composed of pixels. This aspect makes it the largest doodle dataset available to date. The Quick, Draw! Dataset is presented as a great opportunity for researchers to develop and study machine learning techniques. Due to the size of this dataset and the nature of its source, there is a scarcity of information about the quality of the drawings it contains. In this paper, a statistical analysis of three of the classes contained in the Quick, Draw! Dataset is presented: mountain, book and whale. The goal is to give the reader a first impression of the data collected in this dataset. For the analysis of the quality of the drawings, a classification neural network was trained to obtain a classification score. Using this classification score and the parameters provided by the dataset, a statistical analysis of the quality and nature of the drawings contained in this dataset is provided. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | 138,621 |
2210.00169 | Multi-stage Progressive Compression of Conformer Transducer for
On-device Speech Recognition | The smaller memory bandwidth in smart devices prompts the development of smaller Automatic Speech Recognition (ASR) models. To obtain a smaller model, one can employ model compression techniques. Knowledge distillation (KD) is a popular model compression approach that has been shown to achieve smaller model size with relatively little degradation in model performance. In this approach, knowledge is distilled from a trained large teacher model to a smaller student model. Transducer-based models have recently been shown to perform well for on-device streaming ASR, while conformer models are efficient in handling long-term dependencies. Hence, in this work we employ a streaming transducer architecture with a conformer encoder. We propose a multi-stage progressive approach to compress the conformer transducer model using KD, progressively updating the teacher model with the distilled student model in a multi-stage setup. On the standard LibriSpeech dataset, our experiments achieve compression rates greater than 60% without significant degradation in performance compared to the larger teacher model. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 320,762 |
2305.04288 | Towards Achieving Near-optimal Utility for Privacy-Preserving Federated
Learning via Data Generation and Parameter Distortion | Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information. Appropriate protection mechanisms have to be adopted to fulfill the requirements of preserving \textit{privacy} and maintaining high model \textit{utility}. The nature of the widely-adopted protection mechanisms, including the \textit{Randomization Mechanism} and \textit{Compression Mechanism}, is to protect privacy via distorting model parameters. We measure the utility via the gap between the original model parameters and the distorted model parameters. We want to identify under what general conditions privacy-preserving federated learning can achieve near-optimal utility via data generation and parameter distortion. To provide an avenue for achieving near-optimal utility, we present an upper bound for the utility loss, which is measured using two main terms called variance reduction and model parameter discrepancy. Our analysis inspires the design of appropriate protection parameters for the protection mechanisms to achieve near-optimal utility and meet the privacy requirements simultaneously. The main techniques for the protection mechanism include parameter distortion and data generation, which are generic and can be applied extensively. Furthermore, we provide an upper bound for the trade-off between privacy and utility, which together with the lower bound provided by the no free lunch theorem in federated learning (\cite{zhang2022no}) forms the conditions for achieving an optimal trade-off. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 362,710 |
1602.05350 | Relative Error Embeddings for the Gaussian Kernel Distance | A reproducing kernel can define an embedding of a data point into an infinite dimensional reproducing kernel Hilbert space (RKHS). The norm in this space describes a distance, which we call the kernel distance. The random Fourier features (of Rahimi and Recht) describe an oblivious approximate mapping into finite dimensional Euclidean space that behaves similar to the RKHS. We show in this paper that for the Gaussian kernel the Euclidean norm between these mapped to features has $(1+\epsilon)$-relative error with respect to the kernel distance. When there are $n$ data points, we show that $O((1/\epsilon^2) \log(n))$ dimensions of the approximate feature space are sufficient and necessary. Without a bound on $n$, but when the original points lie in $\mathbb{R}^d$ and have diameter bounded by $\mathcal{M}$, then we show that $O((d/\epsilon^2) \log(\mathcal{M}))$ dimensions are sufficient, and that this many are required, up to $\log(1/\epsilon)$ factors. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 52,243 |
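The random Fourier feature construction this row builds on can be sketched in a few lines. A minimal pure-Python illustration (dimensions and inputs chosen arbitrarily), assuming the standard Rahimi-Recht map z(x) = sqrt(2/D) * cos(Wx + b) with rows of W drawn from N(0, sigma^-2 I) and b uniform on [0, 2*pi): the Euclidean distance between mapped points approximates the kernel distance sqrt(2 - 2k(x, y)) for the Gaussian kernel k.

```python
import math
import random

random.seed(0)
d, D, sigma = 3, 20000, 1.0  # input dim, feature dim, kernel bandwidth

# Random Fourier feature parameters: W ~ N(0, sigma^-2 I), b ~ U[0, 2pi).
W = [[random.gauss(0, 1 / sigma) for _ in range(d)] for _ in range(D)]
b = [random.uniform(0, 2 * math.pi) for _ in range(D)]

def z(x):
    """Map x into R^D so that z(x)
 . z(y) approximates the Gaussian kernel."""
    s = math.sqrt(2.0 / D)
    return [s * math.cos(sum(w_i * x_i for w_i, x_i in zip(row, x)) + bi)
            for row, bi in zip(W, b)]

x, y = [0.3, -0.5, 1.0], [0.1, 0.2, 0.4]
sq = sum((a - c) ** 2 for a, c in zip(x, y))
# Exact kernel distance in the RKHS: ||phi(x) - phi(y)|| = sqrt(2 - 2 k(x, y)).
kernel_dist = math.sqrt(2 - 2 * math.exp(-sq / (2 * sigma ** 2)))
zx, zy = z(x), z(y)
approx = math.sqrt(sum((a - c) ** 2 for a, c in zip(zx, zy)))
print(kernel_dist, approx)  # the two distances agree closely for large D
```

The relative agreement between the two printed distances improves as D grows, which is the (1+epsilon)-relative-error behaviour the abstract quantifies.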
2402.12326 | PsychoGAT: A Novel Psychological Measurement Paradigm through
Interactive Fiction Games with LLM Agents | Psychological measurement is essential for mental health, self-understanding, and personal development. Traditional methods, such as self-report scales and psychologist interviews, often face challenges with engagement and accessibility. While game-based and LLM-based tools have been explored to improve user interest and automate assessment, they struggle to balance engagement with generalizability. In this work, we propose PsychoGAT (Psychological Game AgenTs) to achieve a generic gamification of psychological assessment. The main insight is that powerful LLMs can function both as adept psychologists and innovative game designers. By incorporating LLM agents into designated roles and carefully managing their interactions, PsychoGAT can transform any standardized scales into personalized and engaging interactive fiction games. To validate the proposed method, we conduct psychometric evaluations to assess its effectiveness and employ human evaluators to examine the generated content across various psychological constructs, including depression, cognitive distortions, and personality traits. Results demonstrate that PsychoGAT serves as an effective assessment tool, achieving statistically significant excellence in psychometric metrics such as reliability, convergent validity, and discriminant validity. Moreover, human evaluations confirm PsychoGAT's enhancements in content coherence, interactivity, interest, immersion, and satisfaction. | true | false | false | false | false | false | true | false | true | false | false | false | false | true | true | false | false | false | 430,802 |