id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2203.10970 | SOLIS: Autonomous Solubility Screening using Deep Neural Networks | Accelerating material discovery has tremendous societal and industrial impact, particularly for pharmaceuticals and clean energy production. Many experimental instruments have some degree of automation, facilitating continuous running and higher throughput. However, sample preparation is still commonly carried out manually. This can result in researchers spending a significant amount of their time on repetitive tasks, which introduces errors and can prohibit the production of statistically relevant data. Crystallisation experiments are common in many chemical fields, both for purification and in polymorph screening experiments. The initial step often involves a solubility screen of the molecule, that is, determining whether molecular compounds have dissolved in a particular solvent. This step is usually time-consuming and labour-intensive. Moreover, accurate knowledge of the precise solubility limit of the molecule is often not required, and simply measuring a threshold of solubility in each solvent would be sufficient. To address this, we propose a novel cascaded deep model inspired by how a human chemist would visually assess a sample to determine whether the solid has completely dissolved in the solution. In this paper, we design, develop, and evaluate the first fully autonomous solubility screening framework, which leverages state-of-the-art methods for image segmentation and convolutional neural networks for image classification. To realise this, we first create a dataset comprising different molecules and solvents, collected in a real-world chemistry laboratory. We then evaluate our method on data recorded through an eye-in-hand camera mounted on a seven-degree-of-freedom robotic manipulator, and show that our model can achieve 99.13% test accuracy across various setups. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 286,749 |
2205.08790 | On-device modeling of user's social context and familiar places from smartphone-embedded sensor data | Context modeling and recognition are complex tasks that allow mobile and ubiquitous computing applications to adapt to the user's situation. Current solutions mainly focus on limited context information, generally processed on centralized architectures, potentially exposing users' personal data to privacy leakage and missing personalization features. For these reasons, on-device context modeling and recognition represent the current research trend in this area. Among the different information characterizing the user's context in mobile environments, social interactions and visited locations remarkably contribute to the characterization of daily-life scenarios. In this paper we propose a novel, unsupervised, and lightweight approach to model the user's social context and locations based on ego networks, directly on the user's mobile device. Relying on this model, the system is able to extract high-level, semantically rich context features from smartphone-embedded sensor data. Specifically, for the social context it exploits data related to both physical and cyber social interactions among users and their devices. As far as the location context is concerned, we assume that modeling the familiarity degree of a specific location is more relevant for the user's context than the raw location data, both in terms of GPS coordinates and proximity devices. Using 5 real-world datasets, we assess the structure of the social and location ego networks, provide a semantic evaluation of the proposed models, and evaluate their complexity in terms of mobile computing performance. Finally, we demonstrate the relevance of the extracted features by showing the performance of 3 machine learning algorithms in recognizing daily-life situations, obtaining an improvement of 3% in AUROC, 9% in precision, and 5% in recall with respect to using only features related to the physical context. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 297,058 |
2209.03712 | Unsupervised Video Object Segmentation via Prototype Memory Network | Unsupervised video object segmentation aims to segment a target object in a video without a ground-truth mask in the initial frame. This challenging task requires extracting features for the most salient common objects within a video sequence. This difficulty can be addressed by using motion information such as optical flow, but using only the information between adjacent frames results in poor connectivity between distant frames and poor performance. To solve this problem, we propose a novel prototype memory network architecture. The proposed model effectively extracts RGB and motion information by extracting superpixel-based component prototypes from the input RGB images and optical flow maps. In addition, the model scores the usefulness of the component prototypes in each frame based on a self-learning algorithm, adaptively stores the most useful prototypes in memory, and discards obsolete prototypes. We use the prototypes in the memory bank to predict the next query frame's mask, which enhances the association between distant frames to help with accurate mask prediction. Our method is evaluated on three datasets, achieving state-of-the-art performance. We prove the effectiveness of the proposed model with various ablation studies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 316,573 |
2410.12787 | The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio | Recent advancements in large multimodal models (LMMs) have significantly enhanced performance across diverse tasks, with ongoing efforts to further integrate additional modalities such as video and audio. However, most existing LMMs remain vulnerable to hallucinations, i.e., discrepancies between the factual multimodal input and the generated textual output, which have limited their applicability in various real-world scenarios. This paper presents the first systematic investigation of hallucinations in LMMs involving the three most common modalities: language, visual, and audio. Our study reveals two key contributors to hallucinations: overreliance on unimodal priors and spurious inter-modality correlations. To address these challenges, we introduce the benchmark The Curse of Multi-Modalities (CMM), which comprehensively evaluates hallucinations in LMMs, providing a detailed analysis of their underlying issues. Our findings highlight key vulnerabilities, including imbalances in modality integration and biases from training data, underscoring the need for balanced cross-modal learning and enhanced hallucination mitigation strategies. Based on our observations and findings, we suggest potential research directions that could enhance the reliability of LMMs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 499,188 |
2408.07719 | Operator Feature Neural Network for Symbolic Regression | Symbolic regression is a task aimed at identifying patterns in data and representing them through mathematical expressions, generally involving skeleton prediction and constant optimization. Many methods have achieved some success; however, they treat variables and symbols merely as characters of natural language without considering their mathematical essence. This paper introduces the operator feature neural network (OF-Net), which employs operator representations for expressions and proposes an implicit feature encoding method for the intrinsic mathematical operational logic of operators. By substituting operator features for numeric loss, we can predict the combination of operators of target expressions. We evaluate the model on public datasets, and the results demonstrate that the model achieves superior recovery rates and high $R^2$ scores. In discussing the results, we analyze the merits and demerits of OF-Net and propose optimization schemes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 480,700 |
2302.00077 | Personalized Privacy Auditing and Optimization at Test Time | A number of learning models used in consequential domains, such as to assist in legal, banking, hiring, and healthcare decisions, make use of potentially sensitive users' information to carry out inference. Further, the complete set of features is typically required to perform inference. This not only poses severe privacy risks for the individuals using the learning systems, but also requires massive human effort from companies and organizations to verify the correctness of the released information. This paper asks whether it is necessary to require \emph{all} input features for a model to return accurate predictions at test time and shows that, under a personalized setting, each individual may need to release only a small subset of these features without impacting the final decisions. The paper also provides an efficient sequential algorithm that chooses which attributes should be provided by each individual. Evaluation over several learning tasks shows that individuals may be able to report as little as 10\% of their information to ensure the same level of accuracy as a model that uses the complete users' information. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 343,085 |
2102.02335 | Self-Supervised Claim Identification for Automated Fact Checking | We propose a novel, attention-based self-supervised approach to identify "claim-worthy" sentences in a fake news article, an important first step in automated fact-checking. We leverage the "aboutness" of headline and content using an attention mechanism for this task. The identified claims can be used for the downstream task of claim verification, for which we are releasing a benchmark dataset of manually selected compelling articles with veracity labels and associated evidence. This work goes beyond stylistic analysis to identifying content that influences reader belief. Experiments with three datasets show the strength of our model. Data and code are available at https://github.com/architapathak/Self-Supervised-ClaimIdentification | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 218,378 |
2311.02986 | Hacking Cryptographic Protocols with Advanced Variational Quantum Attacks | Here we introduce an improved approach to Variational Quantum Attack Algorithms (VQAA) on cryptographic protocols. Our methods provide robust quantum attacks on well-known cryptographic algorithms, more efficiently and with remarkably fewer qubits than previous approaches. We implement simulations of our attacks for symmetric-key protocols such as S-DES, S-AES, and Blowfish. For instance, we show how our attack allows a classical simulation of a small 8-qubit quantum computer to find the secret key of one 32-bit Blowfish instance with 24 times fewer iterations than a brute-force attack. Our work also shows improvements in attack success rates for lightweight ciphers such as S-DES and S-AES. Further applications beyond symmetric-key cryptography are also discussed, including asymmetric-key protocols and hash functions. We also comment on potential future improvements of our methods. Our results bring us one step closer to assessing the vulnerability of large-size classical cryptographic protocols with Noisy Intermediate-Scale Quantum (NISQ) devices, and set the stage for future research in quantum cybersecurity. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 405,665 |
2402.06536 | Relative frequencies of constrained events in stochastic processes: An analytical approach | The stochastic simulation algorithm (SSA) and the corresponding Monte Carlo (MC) method are among the most common approaches for studying stochastic processes. They rely on knowledge of interevent probability density functions (PDFs) and on information about dependencies between all possible events. In many real-life applications, analytical representations of a PDF are difficult to specify in advance. Knowing the shapes of PDFs and using experimental data, different optimization schemes can be applied in order to evaluate probability density functions and, therefore, the properties of the studied system. Such methods, however, are computationally demanding and often not feasible. We show that, in the case where experimentally accessed properties are directly related to the frequencies of the events involved, it may be possible to replace the heavy Monte Carlo core of optimization schemes with an analytical solution. Such a replacement not only provides a more accurate estimation of the properties of the process, but also reduces the simulation time by a factor of the order of the sample size (at least $\approx 10^4$). The proposed analytical approach is valid for any choice of PDF. The accuracy, computational efficiency, and advantages of the method over MC procedures are demonstrated in an exactly solvable case and in the evaluation of branching fractions in controlled radical polymerization (CRP) of acrylic monomers. This polymerization can be modeled by a constrained stochastic process. Constrained systems are quite common, which makes the method useful for various applications. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 428,334 |
1810.11227 | From the EM Algorithm to the CM-EM Algorithm for Global Convergence of Mixture Models | The Expectation-Maximization (EM) algorithm for mixture models often results in slow or invalid convergence. The popular convergence proof affirms that the likelihood increases with Q; Q is increasing in the M-step and non-decreasing in the E-step. The author found that (1) Q may and should decrease in some E-steps; (2) the Shannon channel from the E-step is improper and hence the expectation is improper. The author proposed the CM-EM algorithm (CM means Channel's Matching), which adds a step to optimize the mixture ratios for the proper Shannon channel and maximizes G, the average log-normalized likelihood, in the M-step. Neal and Hinton's Maximization-Maximization (MM) algorithm uses F instead of Q to speed up the convergence. Maximizing G is similar to maximizing F. The new convergence proof is similar to Beal's proof with the variational method. It first proves that the minimum relative entropy equals the minimum R-G (R is mutual information), then uses the variational and iterative methods that Shannon et al. use for rate-distortion functions to prove global convergence. Some examples show that Q and F should and may decrease in some E-steps. For the same example, the EM, MM, and CM-EM algorithms need about 36, 18, and 9 iterations respectively. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 111,460 |
2402.08265 | A Dense Reward View on Aligning Text-to-Image Diffusion with Preference | Aligning text-to-image (T2I) diffusion models with preferences has been gaining increasing research attention. While prior works directly optimize T2I models on preference data, these methods are developed under the bandit assumption of a latent reward on the entire diffusion reverse chain, ignoring the sequential nature of the generation process. This may harm the efficacy and efficiency of preference alignment. In this paper, we take a finer, dense-reward perspective and derive a tractable alignment objective that emphasizes the initial steps of the T2I reverse chain. In particular, we introduce temporal discounting into DPO-style explicit-reward-free objectives, to break the temporal symmetry therein and suit the T2I generation hierarchy. In experiments on single and multiple prompt generation, our method is competitive with strong relevant baselines, both quantitatively and qualitatively. Further investigations are conducted to illustrate the insight of our approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 429,036 |
2002.00818 | Linearly Constrained Gaussian Processes with Boundary Conditions | One goal in Bayesian machine learning is to encode prior knowledge into prior distributions, to model data efficiently. We consider prior knowledge from systems of linear partial differential equations together with their boundary conditions. We construct multi-output Gaussian process priors with realizations in the solution set of such systems; in particular, only such solutions can be represented by Gaussian process regression. The construction is fully algorithmic via Gr\"obner bases and does not employ any approximation. It builds these priors by combining two parametrizations via a pullback: the first parametrizes the solutions of the system of differential equations and the second parametrizes all functions adhering to the boundary conditions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 162,479 |
2312.09871 | ChemTime: Rapid and Early Classification for Multivariate Time Series Classification of Chemical Sensors | Multivariate time series data are ubiquitous in the application of machine learning to problems in the physical sciences. Chemiresistive sensor arrays are highly promising in chemical detection tasks relevant to industrial, safety, and military applications. Sensor arrays are an inherently multivariate time series data collection tool which demands rapid and accurate classification of arbitrary chemical analytes. Previous research has benchmarked data-agnostic multivariate time series classifiers across diverse multivariate time series supervised tasks in order to find general-purpose classification algorithms. To our knowledge, there has yet to be an effort to survey machine learning and time series classification approaches to chemiresistive hardware sensor arrays for the detection of chemical analytes. In addition to benchmarking existing approaches to multivariate time series classifiers, we incorporate findings from a model survey to propose the novel \textit{ChemTime} approach to sensor array classification for chemical sensing. We design experiments addressing the unique challenges of hardware sensor array classification, including the rapid classification ability of classifiers and minimization of inference time while maintaining performance for deployed lightweight hardware sensing devices. We find that \textit{ChemTime} is uniquely positioned for the chemical sensing task by combining rapid and early classification of time series with fast inference and high accuracy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 415,913 |
1812.04480 | A Hybrid Distribution Feeder Long-Term Load Forecasting Method Based on Sequence Prediction | Distribution feeder long-term load forecast (LTLF) is a critical task many electric utility companies perform on an annual basis. The goal of this task is to forecast the annual load of distribution feeders. The previous top-down and bottom-up LTLF methods are unable to incorporate different levels of information. This paper proposes a hybrid modeling method using sequence prediction for this classic and important task. The proposed method can seamlessly integrate top-down, bottom-up, and sequential information hidden in multi-year data. Two advanced sequence prediction models, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks, are investigated in this paper. They successfully solve the vanishing and exploding gradient problems of a standard recurrent neural network. This paper first explains the theories of LSTM and GRU networks and then discusses the steps of feature selection, feature engineering, and model implementation in detail. Finally, a real-world application example for a large urban grid in West Canada is provided. LSTM and GRU networks under different sequential configurations and traditional models including bottom-up, ARIMA, and feed-forward neural networks are all implemented and compared in detail. The proposed method demonstrates superior performance and great practicality. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 116,229 |
2409.18937 | Robust Deep Reinforcement Learning for Volt-VAR Optimization in Active Distribution System under Uncertainty | The deep reinforcement learning (DRL) based Volt-VAR optimization (VVO) methods have been widely studied for active distribution networks (ADNs). However, most of them lack safety guarantees in terms of power injection uncertainties due to the increase in distributed energy resources (DERs) and load demand, such as electric vehicles. This article proposes a robust deep reinforcement learning (RDRL) framework for VVO via a robust deep deterministic policy gradient (DDPG) algorithm. This algorithm can effectively manage hybrid action spaces, considering control devices like capacitors, voltage regulators, and smart inverters. Additionally, it is designed to handle uncertainties by quantifying uncertainty sets with conformal prediction and modeling uncertainties as adversarial attacks to guarantee safe exploration across action spaces. Numerical results on three IEEE test cases demonstrate the sample efficiency and safety of the proposed robust DDPG against uncertainties compared to the benchmark algorithms. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 492,463 |
1908.10831 | Stochastic AUC Maximization with Deep Neural Networks | Stochastic AUC maximization has garnered increasing interest due to its better fit to imbalanced data classification. However, existing works are limited to stochastic AUC maximization with a linear predictive model, which restricts its predictive power when dealing with extremely complex data. In this paper, we consider the stochastic AUC maximization problem with a deep neural network as the predictive model. Building on the saddle-point reformulation of a surrogate loss of AUC, the problem can be cast into a {\it non-convex concave} min-max problem. The main contribution of this paper is to make stochastic AUC maximization more practical for deep neural networks and big data, with theoretical insights as well. In particular, we propose to explore the Polyak-\L{}ojasiewicz (PL) condition, which has been proved and observed in deep learning and which enables us to develop new stochastic algorithms with an even faster convergence rate and a more practical step size scheme. An AdaGrad-style algorithm is also analyzed under the PL condition with an adaptive convergence rate. Our experimental results demonstrate the effectiveness of the proposed algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 143,224 |
1206.4391 | Gray Image extraction using Fuzzy Logic | Fuzzy systems concern fundamental methodology to represent and process uncertainty and imprecision in linguistic information. Fuzzy systems that use fuzzy rules to represent the domain knowledge of the problem are known as Fuzzy Rule Base Systems (FRBS). On the other hand, image segmentation and subsequent extraction from a noise-affected background, with the help of various soft computing methods, are relatively new and quite popular for various reasons. These methods include various Artificial Neural Network (ANN) models (primarily supervised in nature), Genetic Algorithm (GA) based techniques, intensity histogram based methods, etc. Providing an extraction solution working in unsupervised mode happens to be an even more interesting problem, and literature suggests that effort in this respect appears to be quite rudimentary. In the present article, we propose a novel fuzzy-rule-guided technique that functions without any external intervention during execution. Experimental results suggest that this approach is an efficient one in comparison to different other techniques extensively addressed in the literature. In order to justify the supremacy of performance of our proposed technique with respect to its competitors, we take recourse to effective metrics like Mean Squared Error (MSE), Mean Absolute Error (MAE), and Peak Signal-to-Noise Ratio (PSNR). | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 16,638 |
2204.11673 | Incorporating Explicit Knowledge in Pre-trained Language Models for Passage Re-ranking | Passage re-ranking is to obtain a permutation over the candidate passage set from the retrieval stage. Re-rankers have been greatly boosted by Pre-trained Language Models (PLMs), owing to their overwhelming advantages in natural language understanding. However, existing PLM-based re-rankers may easily suffer from vocabulary mismatch and lack of domain-specific knowledge. To alleviate these problems, explicit knowledge contained in knowledge graphs is carefully introduced in our work. Specifically, we employ an existing knowledge graph, which is incomplete and noisy, and are the first to apply it to the passage re-ranking task. To leverage reliable knowledge, we propose a novel knowledge graph distillation method and obtain a knowledge meta graph as the bridge between query and passage. To align both kinds of embeddings in the latent space, we employ a PLM as the text encoder and a graph neural network over the knowledge meta graph as the knowledge encoder. Besides, a novel knowledge injector is designed for the dynamic interaction between the text and knowledge encoders. Experimental results demonstrate the effectiveness of our method, especially on queries requiring in-depth domain knowledge. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 293,227 |
1810.08755 | Property Graph Type System and Data Definition Language | A property graph manages data through vertices and edges. Each vertex and edge can have a property map, storing ad hoc attributes and their values. Labels can be attached to vertices and edges to group them. While this schema-less methodology is very flexible for data evolution and for managing explosive growth of graph elements, it has two shortcomings: 1) data dependency and 2) poorer compression. Both problems can be solved by a schema-based approach. In this paper, a type system used to model property graphs is defined. Based on the type system, the associated data definition language (DDL) is proposed, and multiple graph instances created under this type system are discussed. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 110,909 |
1408.6924 | Joint Bi-Directional Training of Nonlinear Precoders and Receivers in Cellular Networks | Joint optimization of nonlinear precoders and receive filters is studied for both the uplink and downlink in a cellular system. For the uplink, the base transceiver station (BTS) receiver implements successive interference cancellation, and for the downlink, the BTS pre-compensates for the interference with Tomlinson-Harashima precoding (THP). Convergence of alternating optimization of receivers and transmitters in a single cell is established when filters are updated according to a minimum mean squared error (MMSE) criterion, subject to appropriate power constraints. Adaptive algorithms are then introduced for updating the precoders and receivers in the absence of channel state information, assuming time-division duplex transmissions with channel reciprocity. Instead of estimating the channels, the filters are directly estimated according to a least squares criterion via bi-directional training: uplink pilots are used to update the feedforward and feedback filters, which are then used as interference pre-compensation filters for downlink training of the mobile receivers. Numerical results show that nonlinear filters can provide substantial gains relative to linear filters with limited forward-backward iterations. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 35,678 |
2408.08336 | Graph representations of 3D data for machine learning | We give an overview of combinatorial methods to represent 3D data, such as graphs and meshes, from the viewpoint of their amenability to analysis using machine learning algorithms. We highlight pros and cons of various representations and we discuss some methods of generating/switching between the representations. We finally present two concrete applications in life science and industry. Despite its theoretical nature, our discussion is in general motivated by, and biased towards real-world challenges. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 480,958 |
2111.14454 | TsFeX: Contact Tracing Model using Time Series Feature Extraction and Gradient Boosting | With the outbreak of the COVID-19 pandemic, a dire need has arisen to effectively identify the individuals who may have come in close contact with others infected with COVID-19. This process of identifying individuals, also termed 'contact tracing', has significant implications for the containment and control of the spread of this virus. However, manual tracing has proven to be ineffective, calling for automated contact tracing approaches. As such, this research presents an automated machine learning system for identifying individuals who may have come in contact with others infected with COVID-19 using sensor data transmitted through handheld devices. This paper describes the different approaches followed in arriving at an optimal solution model that effectually predicts whether a person has been in close proximity to an infected individual, using a gradient boosting algorithm and time series feature extraction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 268,606 |
1407.7097 | High SNR BER Comparison of Coherent and Differentially Coherent Modulation Schemes in Lognormal Fading Channels | Using an auxiliary random variable technique, we prove that binary differential phase-shift keying and binary phase-shift keying have the same asymptotic bit-error rate performance in lognormal fading channels. We also show that differential quaternary phase-shift keying is exactly 2.32 dB worse than quaternary phase-shift keying over lognormal fading channels in high signal-to-noise ratio regimes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,910 |
2207.07399 | An Approach for Link Prediction in Directed Complex Networks based on
Asymmetric Similarity-Popularity | Complex networks are graphs representing real-life systems that exhibit unique characteristics not found in purely regular or completely random graphs. The study of such systems is vital but challenging due to the complexity of the underlying processes. This task has nevertheless been made easier in recent decades thanks to the availability of large amounts of networked data. Link prediction in complex networks aims to estimate the likelihood that a link between two nodes is missing from the network. Links can be missing due to imperfections in data collection or simply because they are yet to appear. Discovering new relationships between entities in networked data has attracted researchers' attention in various domains such as sociology, computer science, physics, and biology. Most existing research focuses on link prediction in undirected complex networks. However, not all real-life systems can be faithfully represented as undirected networks. This simplifying assumption is often made when using link prediction algorithms but inevitably leads to loss of information about relations among nodes and degradation in prediction performance. This paper introduces a link prediction method designed explicitly for directed networks. It is based on the similarity-popularity paradigm, which has recently proven successful in undirected networks. The presented algorithms handle the asymmetry in node relationships by modeling it as asymmetry in similarity and popularity. Given the observed network topology, the algorithms approximate the hidden similarities as shortest path distances using edge weights that capture and factor out the links' asymmetry and nodes' popularity. The proposed approach is evaluated on real-life networks, and the experimental results demonstrate its effectiveness in predicting missing links across a broad spectrum of networked data types and sizes. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 308,196
2212.13088 | Learning Generalizable Representations for Reinforcement Learning via
Adaptive Meta-learner of Behavioral Similarities | How to learn an effective reinforcement learning-based model for control tasks from high-level visual observations is a practical and challenging problem. A key to solving this problem is to learn low-dimensional state representations from observations, from which an effective policy can be learned. In order to boost the learning of state encoding, recent works are focused on capturing behavioral similarities between state representations or applying data augmentation on visual observations. In this paper, we propose a novel meta-learner-based framework for representation learning regarding behavioral similarities for reinforcement learning. Specifically, our framework encodes the high-dimensional observations into two decomposed embeddings regarding reward and dynamics in a Markov Decision Process (MDP). A pair of meta-learners are developed, one of which quantifies the reward similarity and the other quantifies dynamics similarity over the correspondingly decomposed embeddings. The meta-learners are self-learned to update the state embeddings by approximating two disjoint terms in on-policy bisimulation metric. To incorporate the reward and dynamics terms, we further develop a strategy to adaptively balance their impacts based on different tasks or environments. We empirically demonstrate that our proposed framework outperforms state-of-the-art baselines on several benchmarks, including conventional DM Control Suite, Distracting DM Control Suite and a self-driving task CARLA. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,219 |
1810.02180 | Improved Generalization Bounds for Adversarially Robust Learning | We consider a model of robust learning in an adversarial environment. The learner gets uncorrupted training data with access to possible corruptions that may be affected by the adversary during testing. The learner's goal is to build a robust classifier, which will be tested on future adversarial examples. The adversary is limited to $k$ possible corruptions for each input. We model the learner-adversary interaction as a zero-sum game. This model is closely related to the adversarial examples model of Schmidt et al. (2018); Madry et al. (2017). Our main results consist of generalization bounds for the binary and multiclass classification, as well as the real-valued case (regression). For the binary classification setting, we both tighten the generalization bound of Feige et al. (2015), and are also able to handle infinite hypothesis classes. The sample complexity is improved from $O(\frac{1}{\epsilon^4}\log(\frac{|H|}{\delta}))$ to $O\big(\frac{1}{\epsilon^2}(kVC(H)\log^{\frac{3}{2}+\alpha}(kVC(H))+\log(\frac{1}{\delta})\big)$ for any $\alpha > 0$. Additionally, we extend the algorithm and generalization bound from the binary to the multiclass and real-valued cases. Along the way, we obtain results on fat-shattering dimension and Rademacher complexity of $k$-fold maxima over function classes; these may be of independent interest. For binary classification, the algorithm of Feige et al. (2015) uses a regret minimization algorithm and an ERM oracle as a black box; we adapt it for the multiclass and regression settings. The algorithm provides us with near-optimal policies for the players on a given training sample. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 109,545 |
1901.10024 | Cross-Domain Image Manipulation by Demonstration | In this work we propose a model that can manipulate individual visual attributes of objects in a real scene using examples of how respective attribute manipulations affect the output of a simulation. As an example, we train our model to manipulate the expression of a human face using nonphotorealistic 3D renders of a face with varied expression. Our model manages to preserve all other visual attributes of a real face, such as head orientation, even though this and other attributes are not labeled in either real or synthetic domain. Since our model learns to manipulate a specific property in isolation using only "synthetic demonstrations" of such manipulations without explicitly provided labels, it can be applied to shape, texture, lighting, and other properties that are difficult to measure or represent as real-valued vectors. We measure the degree to which our model preserves other attributes of a real image when a single specific attribute is manipulated. We use digit datasets to analyze how discrepancy in attribute distributions affects the performance of our model, and demonstrate results in a far more difficult setting: learning to manipulate real human faces using nonphotorealistic 3D renders. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 119,900 |
2412.11747 | Beyond Graph Convolution: Multimodal Recommendation with Topology-aware
MLPs | Given the large volume of side information from different modalities, multimodal recommender systems have become increasingly vital, as they exploit richer semantic information beyond user-item interactions. Recent works highlight that leveraging Graph Convolutional Networks (GCNs) to explicitly model multimodal item-item relations can significantly enhance recommendation performance. However, due to the inherent over-smoothing issue of GCNs, existing models benefit only from shallow GCNs with limited representation power. This drawback is especially pronounced when facing complex and high-dimensional patterns such as multimodal data, as it requires large-capacity models to accommodate complicated correlations. To this end, in this paper, we investigate bypassing GCNs when modeling multimodal item-item relationship. More specifically, we propose a Topology-aware Multi-Layer Perceptron (TMLP), which uses MLPs instead of GCNs to model the relationships between items. TMLP enhances MLPs with topological pruning to denoise item-item relations and intra (inter)-modality learning to integrate higher-order modality correlations. Extensive experiments on three real-world datasets verify TMLP's superiority over nine baselines. We also find that by discarding the internal message passing in GCNs, which is sensitive to node connections, TMLP achieves significant improvements in both training efficiency and robustness against existing models. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 517,563 |
1404.2576 | Asymptotics of Fingerprinting and Group Testing: Tight Bounds from
Channel Capacities | In this work we consider the large-coalition asymptotics of various fingerprinting and group testing games, and derive explicit expressions for the capacities for each of these models. We do this both for simple decoders (fast but suboptimal) and for joint decoders (slow but optimal). For fingerprinting, we show that if the pirate strategy is known, the capacity often decreases linearly with the number of colluders, instead of quadratically as in the uninformed fingerprinting game. For many attacks the joint capacity is further shown to be strictly higher than the simple capacity. For group testing, we improve upon known results about the joint capacities, and derive new explicit asymptotics for the simple capacities. These show that existing simple group testing algorithms are suboptimal, and that simple decoders cannot asymptotically be as efficient as joint decoders. For the traditional group testing model, we show that the gap between the simple and joint capacities is a factor 1.44 for large numbers of defectives. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | 32,222 |
1410.2652 | More Natural Models of Electoral Control by Partition | "Control" studies attempts to set the outcome of elections through the addition, deletion, or partition of voters or candidates. The set of benchmark control types was largely set in the seminal 1992 paper by Bartholdi, Tovey, and Trick that introduced control, and there now is a large literature studying how many of the benchmark types various election systems are vulnerable to, i.e., have polynomial-time attack algorithms for. However, although the longstanding benchmark models of addition and deletion model relatively well the real-world settings that inspire them, the longstanding benchmark models of partition model settings that are arguably quite distant from those they seek to capture. In this paper, we introduce--and for some important cases analyze the complexity of--new partition models that seek to better capture many real-world partition settings. In particular, in many partition settings one wants the two parts of the partition to be of (almost) equal size, or is partitioning into more than two parts, or has groups of actors who must be placed in the same part of the partition. Our hope is that having these new partition types will allow studies of control attacks to include such models that more realistically capture many settings. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 36,634 |
2203.05154 | Practical Evaluation of Adversarial Robustness via Adaptive Auto Attack | Defense models against adversarial attacks have grown significantly, but the lack of practical evaluation methods has hindered progress. Evaluation can be defined as looking for defense models' lower bound of robustness given a budget number of iterations and a test dataset. A practical evaluation method should be convenient (i.e., parameter-free), efficient (i.e., fewer iterations) and reliable (i.e., approaching the lower bound of robustness). Towards this target, we propose a parameter-free Adaptive Auto Attack (A$^3$) evaluation method which addresses the efficiency and reliability in a test-time-training fashion. Specifically, by observing that adversarial examples to a specific defense model follow some regularities in their starting points, we design an Adaptive Direction Initialization strategy to speed up the evaluation. Furthermore, to approach the lower bound of robustness under the budget number of iterations, we propose an online statistics-based discarding strategy that automatically identifies and abandons hard-to-attack images. Extensive experiments demonstrate the effectiveness of our A$^3$. Particularly, we apply A$^3$ to nearly 50 widely-used defense models. By consuming much fewer iterations than existing methods, i.e., $1/10$ on average (10$\times$ speed up), we achieve lower robust accuracy in all cases. Notably, we won $\textbf{first place}$ out of 1681 teams in CVPR 2021 White-box Adversarial Attacks on Defense Models competitions with this method. Code is available at: $\href{https://github.com/liuye6666/adaptive_auto_attack}{https://github.com/liuye6666/adaptive\_auto\_attack}$ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,726 |
1410.6277 | The Probabilistic Structure of Discrete Agent-Based Models | This paper describes a formalization of agent-based models (ABMs) as random walks on regular graphs and relates the symmetry group of those graphs to a coarse-graining of the ABM that is still Markovian. An ABM in which $N$ agents can be in $\delta$ different states leads to a Markov chain with $\delta^N$ states. In ABMs with a sequential update scheme by which one agent is chosen to update its state at a time, transitions are only allowed between system configurations that differ with respect to a single agent. This characterizes ABMs as random walks on regular graphs. The non-trivial automorphisms of those graphs make visible the dynamical symmetries that an ABM gives rise to because sets of micro configurations can be interchanged without changing the probability structure of the random walk. This allows for a systematic loss-less reduction of the state space of the model. | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | 36,971 |
2407.19670 | Overview of PerpectiveArg2024: The First Shared Task on Perspective
Argument Retrieval | Argument retrieval is the task of finding relevant arguments for a given query. While existing approaches rely solely on the semantic alignment of queries and arguments, this first shared task on perspective argument retrieval incorporates perspectives during retrieval, accounting for latent influences in argumentation. We present a novel multilingual dataset covering demographic and socio-cultural (socio) variables, such as age, gender, and political attitude, representing minority and majority groups in society. We distinguish between three scenarios to explore how retrieval systems consider explicitly (in both query and corpus) and implicitly (only in query) formulated perspectives. This paper provides an overview of this shared task and summarizes the results of the six submitted systems. We find substantial challenges in incorporating perspectivism, especially when aiming for personalization based solely on the text of arguments without explicitly providing socio profiles. Moreover, retrieval systems tend to be biased towards the majority group but partially mitigate bias for the female gender. While we bootstrap perspective argument retrieval, further research is essential to optimize retrieval systems to facilitate personalization and reduce polarization. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 476,877 |
2009.14530 | Asymmetric Contextual Modulation for Infrared Small Target Detection | Single-frame infrared small target detection remains a challenge not only due to the scarcity of intrinsic target characteristics but also because of lacking a public dataset. In this paper, we first contribute an open dataset with high-quality annotations to advance the research in this field. We also propose an asymmetric contextual modulation module specially designed for detecting infrared small targets. To better highlight small targets, besides a top-down global contextual feedback, we supplement a bottom-up modulation pathway based on point-wise channel attention for exchanging high-level semantics and subtle low-level details. We report ablation studies and comparisons to state-of-the-art methods, where we find that our approach performs significantly better. Our dataset and code are available online. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 198,068 |
2208.13311 | When Robots Interact with Groups, Where Does the Trust Reside? | As robots are introduced to more and more complex scenarios, the issues of trust become more complex as various groups, peoples, and entities begin to interact with a deployed robot. This short paper explores a few scenarios in which the trust of the robot may come into conflict between one (or more) entities or groups that the robot is required to deal with. We also present a scenario concerning the idea of repairing trust through a possible apology. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 315,020 |
1710.10313 | A Self-Training Method for Semi-Supervised GANs | Since the creation of Generative Adversarial Networks (GANs), much work has been done to improve their training stability, their generated image quality, their range of application but nearly none of them explored their self-training potential. Self-training has been used before the advent of deep learning in order to allow training on limited labelled training data and has shown impressive results in semi-supervised learning. In this work, we combine these two ideas and make GANs self-trainable for semi-supervised learning tasks by exploiting their infinite data generation potential. Results show that using even the simplest form of self-training yields an improvement. We also show results for a more complex self-training scheme that performs at least as well as the basic self-training scheme but with significantly less data augmentation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 83,346 |
2406.17797 | MoleculeCLA: Rethinking Molecular Benchmark via Computational
Ligand-Target Binding Analysis | Molecular representation learning is pivotal for various molecular property prediction tasks related to drug discovery. Robust and accurate benchmarks are essential for refining and validating current methods. Existing molecular property benchmarks derived from wet experiments, however, face limitations such as data volume constraints, unbalanced label distribution, and noisy labels. To address these issues, we construct a large-scale and precise molecular representation dataset of approximately 140,000 small molecules, meticulously designed to capture an extensive array of chemical, physical, and biological properties, derived through a robust computational ligand-target binding analysis pipeline. We conduct extensive experiments on various deep learning models, demonstrating that our dataset offers significant physicochemical interpretability to guide model development and design. Notably, the dataset's properties are linked to binding affinity metrics, providing additional insights into model performance in drug-target interaction tasks. We believe this dataset will serve as a more accurate and reliable benchmark for molecular representation learning, thereby expediting progress in the field of artificial intelligence-driven drug discovery. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 467,732 |
2402.18508 | Orchid: Flexible and Data-Dependent Convolution for Sequence Modeling | In the rapidly evolving field of deep learning, the demand for models that are both expressive and computationally efficient has never been more critical. This paper introduces Orchid, a novel architecture designed to address the quadratic complexity of traditional attention mechanisms without compromising the ability to capture long-range dependencies and in-context learning. At the core of this architecture lies a new data-dependent global convolution layer, which contextually adapts its kernel conditioned on input sequence using a dedicated conditioning neural network. We design two simple conditioning networks that maintain shift equivariance in our data-dependent convolution operation. The dynamic nature of the proposed convolution kernel grants Orchid high expressivity while maintaining quasilinear scalability for long sequences. We evaluate the proposed model across multiple domains, including language modeling and image classification, to highlight its performance and generality. Our experiments demonstrate that this architecture not only outperforms traditional attention-based architectures such as BERT and Vision Transformers with smaller model sizes, but also extends the feasible sequence length beyond the limitations of the dense attention layers. This achievement represents a significant step towards more efficient and scalable deep learning models for sequence modeling. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 433,457 |
2005.03645 | XEM: An Explainable-by-Design Ensemble Method for Multivariate Time
Series Classification | We present XEM, an eXplainable-by-design Ensemble method for Multivariate time series classification. XEM relies on a new hybrid ensemble method that combines an explicit boosting-bagging approach to handle the bias-variance trade-off faced by machine learning models and an implicit divide-and-conquer approach to individualize classifier errors on different parts of the training data. Our evaluation shows that XEM outperforms the state-of-the-art MTS classifiers on the public UEA datasets. Furthermore, XEM provides faithful explainability-by-design and manifests robust performance when faced with challenges arising from continuous data collection (different MTS length, missing data and noise). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 176,216 |
2105.14397 | On the Number of Edges of the Frechet Mean and Median Graphs | The availability of large datasets composed of graphs creates an unprecedented need to invent novel tools in statistical learning for graph-valued random variables. To characterize the average of a sample of graphs, one can compute the sample Frechet mean and median graphs. In this paper, we address the following foundational question: does a mean or median graph inherit the structural properties of the graphs in the sample? An important graph property is the edge density; we establish that edge density is an hereditary property, which can be transmitted from a graph sample to its sample Frechet mean or median graphs, irrespective of the method used to estimate the mean or the median. Because of the prominence of the Frechet mean in graph-valued machine learning, this novel theoretical result has some significant practical consequences. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 237,654 |
2203.08037 | Interactive Robotic Grasping with Attribute-Guided Disambiguation | Interactive robotic grasping using natural language is one of the most fundamental tasks in human-robot interaction. However, language can be a source of ambiguity, particularly when there are ambiguous visual or linguistic contents. This paper investigates the use of object attributes in disambiguation and develops an interactive grasping system capable of effectively resolving ambiguities via dialogues. Our approach first predicts target scores and attribute scores through vision-and-language grounding. To handle ambiguous objects and commands, we propose an attribute-guided formulation of the partially observable Markov decision process (Attr-POMDP) for disambiguation. The Attr-POMDP utilizes target and attribute scores as the observation model to calculate the expected return of an attribute-based (e.g., "what is the color of the target, red or green?") or a pointing-based (e.g., "do you mean this one?") question. Our disambiguation module runs in real time on a real robot, and the interactive grasping system achieves a 91.43\% selection accuracy in the real-robot experiments, outperforming several baselines by large margins. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 285,652 |
2409.16694 | A Survey of Low-bit Large Language Models: Basics, Systems, and
Algorithms | Large language models (LLMs) have achieved remarkable advancements in natural language processing, showcasing exceptional performance across various tasks. However, the expensive memory and computational requirements present significant challenges for their practical deployment. Low-bit quantization has emerged as a critical approach to mitigate these challenges by reducing the bit-width of model parameters, activations, and gradients, thus decreasing memory usage and computational demands. This paper presents a comprehensive survey of low-bit quantization methods tailored for LLMs, covering the fundamental principles, system implementations, and algorithmic strategies. An overview of basic concepts and new data formats specific to low-bit LLMs is first introduced, followed by a review of frameworks and systems that facilitate low-bit LLMs across various hardware platforms. Then, we categorize and analyze techniques and toolkits for efficient low-bit training and inference of LLMs. Finally, we conclude with a discussion of future trends and potential advancements of low-bit LLMs. Our systematic overview from basic, system, and algorithm perspectives can offer valuable insights and guidelines for future works to enhance the efficiency and applicability of LLMs through low-bit quantization. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 491,466 |
2305.12755 | GNCformer Enhanced Self-attention for Automatic Speech Recognition | In this paper, an Enhanced Self-Attention (ESA) mechanism has been put forward for robust feature extraction. The proposed ESA is integrated with the recursive gated convolution and self-attention mechanism. In particular, the former is used to capture multi-order feature interaction and the latter is for global feature extraction. In addition, the location of interest that is suitable for inserting the ESA is also worth being explored. In this paper, the ESA is embedded into the encoder layer of the Transformer network for automatic speech recognition (ASR) tasks, and this newly proposed model is named GNCformer. The effectiveness of the GNCformer has been validated using two datasets, that are Aishell-1 and HKUST. Experimental results show that, compared with the Transformer network, 0.8% CER and 1.2% CER improvement for these two mentioned datasets, respectively, can be achieved. It is worth mentioning that only 1.4M additional parameters have been involved in our proposed GNCformer. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 366,173
2206.13181 | Learning to Control Local Search for Combinatorial Optimization | Combinatorial optimization problems are encountered in many practical contexts such as logistics and production, but exact solutions are particularly difficult to find and usually NP-hard for considerable problem sizes. To compute approximate solutions, a zoo of generic as well as problem-specific variants of local search is commonly used. However, which variant to apply to which particular problem is difficult to decide even for experts. In this paper we identify three independent algorithmic aspects of such local search algorithms and formalize their sequential selection over an optimization process as Markov Decision Process (MDP). We design a deep graph neural network as policy model for this MDP, yielding a learned controller for local search called NeuroLS. Ample experimental evidence shows that NeuroLS is able to outperform both, well-known general purpose local search controllers from Operations Research as well as latest machine learning-based approaches. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 304,869 |
2106.11504 | Knowing How to Plan | Various planning-based know-how logics have been studied in the recent literature. In this paper, we use such a logic to do know-how-based planning via model checking. In particular, we can handle the higher-order epistemic planning involving know-how formulas as the goal, e.g., find a plan to make sure p such that the adversary does not know how to make p false in the future. We give a PTIME algorithm for the model checking problem over finite epistemic transition systems and axiomatize the logic under the assumption of perfect recall. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 242,421 |
2004.08141 | Modeling Extent-of-Texture Information for Ground Terrain Recognition | Ground Terrain Recognition is a difficult task as the context information varies significantly over the regions of a ground terrain image. In this paper, we propose a novel approach towards ground-terrain recognition via modeling the Extent-of-Texture information to establish a balance between the order-less texture component and ordered-spatial information locally. At first, the proposed method uses a CNN backbone feature extractor network to capture meaningful information of a ground terrain image, and model the extent of texture and shape information locally. Then, the order-less texture information and ordered shape information are encoded in a patch-wise manner, which is utilized by intra-domain message passing module to make every patch aware of each other for rich feature learning. Next, the Extent-of-Texture (EoT) Guided Inter-domain Message Passing module combines the extent of texture and shape information with the encoded texture and shape information in a patch-wise fashion for sharing knowledge to balance out the order-less texture information with ordered shape information. Further, Bilinear model generates a pairwise correlation between the order-less texture information and ordered shape information. Finally, the ground-terrain image classification is performed by a fully connected layer. The experimental results indicate superior performance of the proposed model over existing state-of-the-art techniques on publicly available datasets like DTD, MINC and GTOS-mobile. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 172,982 |
2409.00830 | Building FKG.in: a Knowledge Graph for Indian Food | This paper presents an ontology design along with knowledge engineering, and multilingual semantic reasoning techniques to build an automated system for assimilating culinary information for Indian food in the form of a knowledge graph. The main focus is on designing intelligent methods to derive ontology designs and capture all-encompassing knowledge about food, recipes, ingredients, cooking characteristics, and most importantly, nutrition, at scale. We present our ongoing work in this workshop paper, describe in some detail the relevant challenges in curating knowledge of Indian food, and propose our high-level ontology design. We also present a novel workflow that uses AI, LLM, and language technology to curate information from recipe blog sites in the public domain to build knowledge graphs for Indian food. The methods for knowledge curation proposed in this paper are generic and can be replicated for any domain. The design is application-agnostic and can be used for AI-driven smart analysis, building recommendation systems for Personalized Digital Health, and complementing the knowledge graph for Indian food with contextual information such as user information, food biochemistry, geographic information, agricultural information, etc. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 485,087 |
1206.0259 | The Causal Topography of Cognition | The causal structure of cognition can be simulated but not implemented computationally, just as the causal structure of a comet can be simulated but not implemented computationally. The only thing that allows us even to imagine otherwise is that cognition, unlike a comet, is invisible (to all but the cognizer). | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 16,285 |
1901.11179 | Human Face Expressions from Images - 2D Face Geometry and 3D Face Local Motion versus Deep Neural Features | Several computer algorithms for recognition of visible human emotions are compared in the web-camera scenario using a CNN/MMOD face detector. The recognition refers to four face expressions: smile, surprise, anger, and neutral. At the feature-extraction stage, the following three concepts of face description are confronted: (a) static 2D face geometry represented by its 68 characteristic landmarks (FP68); (b) dynamic 3D geometry defined by motion parameters for eight distinguished face parts (denoted as AU8) of the personalized Candide-3 model; (c) static 2D visual description as a 2D array of gray-scale pixels (known as the facial raw image). At the classification stage, the performance of two major models is analyzed: (a) support vector machine (SVM) with kernel options; (b) convolutional neural network (CNN) with a variety of relevant tensor-processing layers and blocks of them. The models are trained on frontal views of human faces while they are tested on arbitrary head poses. For geometric features, the success rate (accuracy) indicates a nearly threefold increase in performance of CNN with respect to SVM classifiers. For raw images, CNN outperforms its best geometric counterpart (AU/CNN) in accuracy by about 30 percent, while the best SVM solutions are inferior by nearly a factor of four. For F-score, the high advantage of raw/CNN over geometric/CNN and geometric/SVM is observed as well. We conclude that, contrary to CNN-based emotion classifiers, the generalization capability with respect to human head pose is poor for SVM-based emotion classifiers. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 120,189
2107.00967 | R2D2: Recursive Transformer based on Differentiable Tree for Interpretable Hierarchical Language Modeling | Human language understanding operates at multiple levels of granularity (e.g., words, phrases, and sentences) with increasing levels of abstraction that can be hierarchically combined. However, existing deep models with stacked layers do not explicitly model any sort of hierarchical process. This paper proposes a recursive Transformer model based on differentiable CKY style binary trees to emulate the composition process. We extend the bidirectional language model pre-training objective to this architecture, attempting to predict each word given its left and right abstraction nodes. To scale up our approach, we also introduce an efficient pruned tree induction algorithm to enable encoding in just a linear number of composition steps. Experimental results on language modeling and unsupervised parsing show the effectiveness of our approach. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 244,336
2012.13204 | Predicting Seminal Quality with the Dominance-Based Rough Sets Approach | The paper relies on the clinical data of a previously published study. We identify two very questionable assumptions of said work, namely confusing evidence of absence and absence of evidence, and neglecting the ordinal nature of attributes' domains. We then show that using an adequate ordinal methodology such as the dominance-based rough sets approach (DRSA) can significantly improve the predictive accuracy of the expert system, resulting in almost complete accuracy for a dataset of 100 instances. Beyond the performance of DRSA in solving the diagnosis problem at hand, these results suggest the inadequacy and triviality of the underlying dataset. We provide links to open data from the UCI machine learning repository to allow for an easy verification/refutation of the claims made in this paper. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 213,152 |
2403.08283 | Optimized Detection and Classification on GTRSB: Advancing Traffic Sign Recognition with Convolutional Neural Networks | In the rapidly evolving landscape of transportation, the proliferation of automobiles has made road traffic more complex, necessitating advanced vision-assisted technologies for enhanced safety and navigation. These technologies are imperative for providing critical traffic sign information, influencing driver behavior, and supporting vehicle control, especially for drivers with disabilities and in the burgeoning field of autonomous vehicles. Traffic sign detection and recognition have emerged as key areas of research due to their essential roles in ensuring road safety and compliance with traffic regulations. Traditional computer vision methods have faced challenges in achieving optimal accuracy and speed due to real-world variabilities. However, the advent of deep learning and Convolutional Neural Networks (CNNs) has revolutionized this domain, offering solutions that significantly surpass previous capabilities in terms of speed and reliability. This paper presents an innovative approach leveraging CNNs that achieves an accuracy of nearly 96\%, highlighting the potential for even greater precision through advanced localization techniques. Our findings not only contribute to the ongoing advancement of traffic sign recognition technology but also underscore the critical impact of these developments on road safety and the future of autonomous driving. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 437,276
2004.04822 | Analysis on DeepLabV3+ Performance for Automatic Steel Defects Detection | Our work experimented with DeepLabV3+ using different backbones on a large volume of steel images, aiming to automatically detect different types of steel defects. Our method applied random weighted augmentation to balance the different defect types in the training set, and then applied the DeepLabV3+ model with three different backbones, ResNet, DenseNet and EfficientNet, to segment defect regions in the steel images. Based on experiments, we found that applying ResNet101 or EfficientNet as the backbone could reach the best IoU scores on the test set, around 0.57, compared with 0.325 when using DenseNet. Also, the DeepLabV3+ model with ResNet101 as backbone has the shortest training time. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 172,003
1802.08080 | Classification of Breast Cancer Histology using Deep Learning | Breast Cancer is a major cause of death worldwide among women. Hematoxylin and Eosin (H&E) stained breast tissue samples from biopsies are observed under microscopes for the primary diagnosis of breast cancer. In this paper, we propose a deep learning-based method for classification of H&E stained breast tissue images released for BACH challenge 2018 by fine-tuning Inception-v3 convolutional neural network (CNN) proposed by Szegedy et al. These images are to be classified into four classes namely, i) normal tissue, ii) benign tumor, iii) in-situ carcinoma and iv) invasive carcinoma. Our strategy is to extract patches based on nuclei density instead of random or grid sampling, along with rejection of patches that are not rich in nuclei (non-epithelial) regions for training and testing. Every patch (nuclei-dense region) in an image is classified in one of the four above mentioned categories. The class of the entire image is determined using majority voting over the nuclear classes. We obtained an average four class accuracy of 85% and an average two class (non-cancer vs. carcinoma) accuracy of 93%, which improves upon a previous benchmark by Araujo et al. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 91,030 |
2108.05317 | Model-agnostic vs. Model-intrinsic Interpretability for Explainable Product Search | Product retrieval systems have served as the main entry for customers to discover and purchase products online. With increasing concerns on the transparency and accountability of AI systems, studies on explainable information retrieval have received more and more attention in the research community. Interestingly, in the domain of e-commerce, despite the extensive studies on explainable product recommendation, the study of explainable product search is still in an early stage. In this paper, we study how to construct effective explainable product search by comparing model-agnostic explanation paradigms with model-intrinsic paradigms and analyzing the important factors that determine the performance of product search explanations. We propose an explainable product search model with model-intrinsic interpretability and conduct crowdsourcing to compare it with the state-of-the-art explainable product search model with model-agnostic interpretability. We observe that both paradigms have their own advantages and that the effectiveness of search explanations on different properties is affected by different factors. For example, explanation fidelity is more important for users' overall satisfaction with the system, while explanation novelty may be more useful in attracting user purchases. These findings could have important implications for the future study and design of explainable product search engines. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 250,271
2309.06961 | Towards Reliable Dermatology Evaluation Benchmarks | Benchmark datasets for digital dermatology unwittingly contain inaccuracies that reduce trust in model performance estimates. We propose a resource-efficient data-cleaning protocol to identify issues that escaped previous curation. The protocol leverages an existing algorithmic cleaning strategy and is followed by a confirmation process terminated by an intuitive stopping criterion. Based on confirmation by multiple dermatologists, we remove irrelevant samples and near duplicates and estimate the percentage of label errors in six dermatology image datasets for model evaluation promoted by the International Skin Imaging Collaboration. Along with this paper, we publish revised file lists for each dataset which should be used for model evaluation. Our work paves the way for more trustworthy performance assessment in digital dermatology. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 391,607 |
1810.02920 | A Class of Hybrid LQG Mean Field Games with State-Invariant Switching and Stopping Strategies | A novel framework is presented that combines Mean Field Game (MFG) theory and Hybrid Optimal Control (HOC) theory to obtain a unique $\epsilon$-Nash equilibrium for a non-cooperative game with switching and stopping times. We consider the case where there exists one major agent with a significant influence on the system together with a large number of minor agents constituting two subpopulations, each agent with individually asymptotically negligible effect on the whole system. Each agent has stochastic linear dynamics with quadratic costs, and the agents are coupled in their dynamics and costs by the average state of minor agents (i.e. the empirical mean field). It is shown that for a class of Hybrid LQG MFGs, the optimal switching and stopping times are state-invariant and only depend on the dynamical parameters of each agent. Accordingly, a hybrid systems formulation of the game is presented via the indexing by discrete events: (i) the switching of the major agent between alternative dynamics or (ii) the termination of the agents' trajectories in one or both of the subpopulations of minor agents. Optimal switchings and stopping time strategies together with best response control actions for, respectively, the major agent and all minor agents are established with respect to their individual cost criteria by an application of Hybrid LQG MFG theory. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 109,691
1603.01842 | Proximal groupoid patterns in digital images | The focus of this article is on the detection and classification of patterns based on groupoids. The approach hinges on descriptive proximity of points in a set based on the neighborliness property. This approach lends support to image analysis and understanding and in studying nearness of image segments. A practical application of the approach is in terms of the analysis of natural images for pattern identification and classification. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,941
2012.02743 | SMPLy Benchmarking 3D Human Pose Estimation in the Wild | Predicting 3D human pose from images has seen great recent improvements. Novel approaches that can even predict both pose and shape from a single input image have been introduced, often relying on a parametric model of the human body such as SMPL. While qualitative results for such methods are often shown for images captured in-the-wild, a proper benchmark in such conditions is still missing, as it is cumbersome to obtain ground-truth 3D poses elsewhere than in a motion capture room. This paper presents a pipeline to easily produce and validate such a dataset with accurate ground-truth, with which we benchmark recent 3D human pose estimation methods in-the-wild. We make use of the recently introduced Mannequin Challenge dataset which contains in-the-wild videos of people frozen in action like statues and leverage the fact that people are static and the camera moving to accurately fit the SMPL model on the sequences. A total of 24,428 frames with registered body models are then selected from 567 scenes at almost no cost, using only online RGB videos. We benchmark state-of-the-art SMPL-based human pose estimation methods on this dataset. Our results highlight that challenges remain, in particular for difficult poses or for scenes where the persons are partially truncated or occluded. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,865 |
2107.01875 | DeepRapper: Neural Rap Generation with Rhyme and Rhythm Modeling | Rap generation, which aims to produce lyrics and corresponding singing beats, needs to model both rhymes and rhythms. Previous works for rap generation focused on rhyming lyrics but ignored rhythmic beats, which are important for rap performance. In this paper, we develop DeepRapper, a Transformer-based rap generation system that can model both rhymes and rhythms. Since there is no available rap dataset with rhythmic beats, we develop a data mining pipeline to collect a large-scale rap dataset, which includes a large number of rap songs with aligned lyrics and rhythmic beats. Second, we design a Transformer-based autoregressive language model which carefully models rhymes and rhythms. Specifically, we generate lyrics in the reverse order with rhyme representation and constraint for rhyme enhancement and insert a beat symbol into lyrics for rhythm/beat modeling. To our knowledge, DeepRapper is the first system to generate rap with both rhymes and rhythms. Both objective and subjective evaluations demonstrate that DeepRapper generates creative and high-quality raps with rhymes and rhythms. Code will be released on GitHub. | false | false | true | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 244,628 |
2004.02758 | Towards Detection of Sheep Onboard a UAV | In this work we consider the task of detecting sheep onboard an unmanned aerial vehicle (UAV) flying at an altitude of 80 m. At this height, the sheep are relatively small, only about 15 pixels across. Although deep learning strategies have gained enormous popularity in the last decade and are now extensively used for object detection in many fields, state-of-the-art detectors perform poorly in the case of smaller objects. We develop a novel dataset of UAV imagery of sheep and consider a variety of object detectors to determine which is the most suitable for our task in terms of both accuracy and speed. Our findings indicate that a UNet detector using the weighted Hausdorff distance as a loss function during training is an excellent option for detection of sheep onboard a UAV. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 171,331 |
1805.03625 | Multicast Networks Solvable over Every Finite Field | In this work, it is revealed that an acyclic multicast network that is scalar linearly solvable over Galois Field of two elements, GF(2), is solvable over all higher finite fields. An algorithm which, given a GF(2) solution for an acyclic multicast network, computes the solution over any arbitrary finite field is presented. The concept of multicast matroid is introduced in this paper. Gammoids and their base-orderability along with the regularity of a binary multicast matroid are used to prove the results. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 97,085 |
2112.12793 | A Multi-View Framework for BGP Anomaly Detection via Graph Attention Network | As the default protocol for exchanging routing reachability information on the Internet, abnormal behavior in the traffic of the Border Gateway Protocol (BGP) is closely related to Internet anomaly events. A BGP anomaly detection model ensures stable routing services on the Internet through its real-time monitoring and alerting capabilities. Previous studies either focused on the feature-selection problem or on the memory characteristics of the data, while ignoring the relationships between features and the precise time correlations within features (whether long- or short-term dependence). In this paper, we propose a multi-view model for capturing anomalous behaviors from BGP update traffic, in which the Seasonal and Trend decomposition using Loess (STL) method is used to reduce the noise in the original time-series data, and a Graph Attention Network (GAT) is used to discover feature relationships and time correlations within features, respectively. Our results outperform the state-of-the-art methods on the anomaly detection task, with average F1 scores of up to 96.3% and 93.2% on the balanced and imbalanced datasets, respectively. Meanwhile, our model can be extended to classify multiple anomalies and to detect unknown events. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 273,055
2402.15080 | Infusing Hierarchical Guidance into Prompt Tuning: A Parameter-Efficient Framework for Multi-level Implicit Discourse Relation Recognition | Multi-level implicit discourse relation recognition (MIDRR) aims at identifying hierarchical discourse relations among arguments. Previous methods achieve the promotion through fine-tuning PLMs. However, due to the data scarcity and the task gap, the pre-trained feature space cannot be accurately tuned to the task-specific space, which even aggravates the collapse of the vanilla space. Besides, the comprehension of hierarchical semantics for MIDRR makes the conversion much harder. In this paper, we propose a prompt-based Parameter-Efficient Multi-level IDRR (PEMI) framework to solve the above problems. First, we leverage parameter-efficient prompt tuning to drive the inputted arguments to match the pre-trained space and realize the approximation with few parameters. Furthermore, we propose a hierarchical label refining (HLR) method for the prompt verbalizer to deeply integrate hierarchical guidance into the prompt tuning. Finally, our model achieves comparable results on PDTB 2.0 and 3.0 using about 0.1% trainable parameters compared with baselines and the visualization demonstrates the effectiveness of our HLR method. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 431,982
cmp-lg/9505014 | Compositionality for Presuppositions over Tableaux | Tableaux originate as a decision method for a logical language. They can also be extended to obtain a structure that spells out all the information in a set of sentences in terms of truth value assignments to atomic formulas that appear in them. This approach is pursued here. Over such a structure, compositional rules are provided for obtaining the presuppositions of a logical statement from its atomic subformulas and their presuppositions. The rules are based on classical logic semantics and they are shown to model the behaviour of presuppositions observed in natural language sentences built with {\em if \ldots then}, {\em and} and {\em or}. The advantages of this method over existing frameworks for presuppositions are discussed. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,373 |
2002.05388 | Recurrent Attention Model with Log-Polar Mapping is Robust against Adversarial Attacks | Convolutional neural networks are vulnerable to small $\ell^p$ adversarial attacks, while the human visual system is not. Inspired by neural networks in the eye and the brain, we developed a novel artificial neural network model that recurrently collects data with a log-polar field of view that is controlled by attention. We demonstrate the effectiveness of this design as a defense against SPSA and PGD adversarial attacks. It also has beneficial properties observed in the animal visual system, such as reflex-like pathways for low-latency inference, fixed amount of computation independent of image size, and rotation and scale invariance. The code for experiments is available at https://gitlab.com/exwzd-public/kiritani_ono_2020. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 163,881
2411.19468 | Random Feature Models with Learnable Activation Functions | Current random feature models typically rely on fixed activation functions, limiting their ability to capture diverse patterns in data. To address this, we introduce the Random Feature model with Learnable Activation Functions (RFLAF), a novel model that significantly enhances the expressivity and interpretability of traditional random feature (RF) models. We begin by studying the RF model with a single radial basis function, where we discover a new kernel and provide the first theoretical analysis on it. By integrating the basis functions with learnable weights, we show that RFLAF can represent a broad class of random feature models whose activation functions belong in $C_c(\mathbb{R})$. Theoretically, we prove that the model requires only about twice the parameter number compared to a traditional RF model to achieve the significant leap in expressivity. Experimentally, RFLAF demonstrates two key advantages: (1) it performs better across various tasks compared to traditional RF model with the same number of parameters, and (2) the optimized weights offer interpretability, as the learned activation function can be directly inferred from these weights. Our model paves the way for developing more expressive and interpretable frameworks within random feature models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 512,253 |
2307.10471 | Classification of Visualization Types and Perspectives in Patents | Due to the swift growth of patent applications each year, information and multimedia retrieval approaches that facilitate patent exploration and retrieval are of utmost importance. Different types of visualizations (e.g., graphs, technical drawings) and perspectives (e.g., side view, perspective) are used to visualize details of innovations in patents. The classification of these images enables a more efficient search and allows for further analysis. So far, datasets for image type classification miss some important visualization types for patents. Furthermore, related work does not make use of recent deep learning approaches including transformers. In this paper, we adopt state-of-the-art deep learning methods for the classification of visualization types and perspectives in patent images. We extend the CLEF-IP dataset for image type classification in patents to ten classes and provide manual ground truth annotations. In addition, we derive a set of hierarchical classes from a dataset that provides weakly-labeled data for image perspectives. Experimental results have demonstrated the feasibility of the proposed approaches. Source code, models, and dataset will be made publicly available. | false | false | false | false | true | true | true | false | false | false | false | true | false | false | false | false | false | true | 380,553 |
2412.05155 | Multimodal Fact-Checking with Vision Language Models: A Probing Classifier based Solution with Embedding Strategies | This study evaluates the effectiveness of Vision Language Models (VLMs) in representing and utilizing multimodal content for fact-checking. To be more specific, we investigate whether incorporating multimodal content improves performance compared to text-only models and how well VLMs utilize text and image information to enhance misinformation detection. Furthermore, we propose a probing-classifier-based solution using VLMs. Our approach extracts embeddings from the last hidden layer of selected VLMs and inputs them into a neural probing classifier for multi-class veracity classification. Through a series of experiments on two fact-checking datasets, we demonstrate that while multimodality can enhance performance, fusing separate embeddings from text and image encoders yielded superior results compared to using VLM embeddings. Furthermore, the proposed neural classifier significantly outperformed KNN and SVM baselines in leveraging extracted embeddings, highlighting its effectiveness for multimodal fact-checking. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 514,713
1311.3882 | Sampling Content Distributed Over Graphs | Despite recent effort to estimate topology characteristics of large graphs (i.e., online social networks and peer-to-peer networks), little attention has been given to develop a formal methodology to characterize the vast amount of content distributed over these networks. Due to the large scale nature of these networks, exhaustive enumeration of this content is computationally prohibitive. In this paper, we show how one can obtain content properties by sampling only a small fraction of vertices. We first show that when sampling is naively applied, this can produce a huge bias in content statistics (i.e., average number of content duplications). To remove this bias, one may use maximum likelihood estimation to estimate content characteristics. However our experimental results show that one needs to sample most vertices in the graph to obtain accurate statistics using such a method. To address this challenge, we propose two efficient estimators: special copy estimator (SCE) and weighted copy estimator (WCE) to measure content characteristics using available information in sampled contents. SCE uses the special content copy indicator to compute the estimate, while WCE derives the estimate based on meta-information in sampled vertices. We perform experiments to show WCE and SCE are cost effective and also ``{\em asymptotically unbiased}''. Our methodology provides a new tool for researchers to efficiently query content distributed in large scale networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 28,445 |
1912.05828 | Formal Verification of Debates in Argumentation Theory | Humans engage in informal debates on a daily basis. By expressing their opinions and ideas in an argumentative fashion, they are able to gain a deeper understanding of a given problem and in some cases, find the best possible course of actions towards resolving it. In this paper, we develop a methodology to verify debates formalised as abstract argumentation frameworks. We first present a translation from debates to transition systems. Such transition systems can model debates and represent their evolution over time using a finite set of states. We then formalise relevant debate properties using temporal and strategy logics. These formalisations, along with a debate transition system, allow us to verify whether a given debate satisfies certain properties. The verification process can be automated using model checkers. Therefore, we also measure their performance when verifying debates, and use the results to discuss the feasibility of model checking debates. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 157,201 |
1802.05101 | Democratizing AI: Non-expert design of prediction tasks | Non-experts have long made important contributions to machine learning (ML) by contributing training data, and recent work has shown that non-experts can also help with feature engineering by suggesting novel predictive features. However, non-experts have only contributed features to prediction tasks already posed by experienced ML practitioners. Here we study how non-experts can design prediction tasks themselves, what types of tasks non-experts will design, and whether predictive models can be automatically trained on data sourced for their tasks. We use a crowdsourcing platform where non-experts design predictive tasks that are then categorized and ranked by the crowd. Crowdsourced data are collected for top-ranked tasks and predictive models are then trained and evaluated automatically using those data. We show that individuals without ML experience can collectively construct useful datasets and that predictive models can be learned on these datasets, but challenges remain. The prediction tasks designed by non-experts covered a broad range of domains, from politics and current events to health behavior, demographics, and more. Proper instructions are crucial for non-experts, so we also conducted a randomized trial to understand how different instructions may influence the types of prediction tasks being proposed. In general, understanding better how non-experts can contribute to ML can further leverage advances in Automatic ML and has important implications as ML continues to drive workplace automation. | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 90,378 |
2406.14701 | Speech Prefix-Tuning with RNNT Loss for Improving LLM Predictions | In this paper, we focus on addressing the constraints faced when applying LLMs to ASR. Recent works utilize prefixLM-type models, which directly apply speech as a prefix to LLMs for ASR. We have found that optimizing speech prefixes leads to better ASR performance and propose applying RNNT loss to perform speech prefix-tuning. This is a simple approach and does not increase the model complexity or alter the inference pipeline. We also propose language-based soft prompting to further improve with frozen LLMs. Empirical analysis on a real-time test set from 10 Indic languages demonstrates that our proposed speech prefix-tuning yields improvements with both frozen and fine-tuned LLMs. Our recognition results averaged over 10 Indic languages show that the proposed prefix-tuning with RNNT loss results in a 12% relative improvement in WER over the baseline with a fine-tuned LLM. Our proposed approach with the frozen LLM leads to a 31% relative improvement over basic soft-prompting prefixLM. | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 466,421 |
2310.14748 | A Comparative Study of Portfolio Optimization Methods for the Indian Stock Market | This chapter presents a comparative study of the three portfolio optimization methods, MVP, HRP, and HERC, on the Indian stock market, particularly focusing on the stocks chosen from 15 sectors listed on the National Stock Exchange of India. The top stocks of each cluster are identified based on their free-float market capitalization from the report of the NSE published on July 1, 2022 (NSE Website). For each sector, three portfolios are designed on stock prices from July 1, 2019, to June 30, 2022, following three portfolio optimization approaches. The portfolios are tested over the period from July 1, 2022, to June 30, 2023. For the evaluation of the performances of the portfolios, three metrics are used. These three metrics are cumulative returns, annual volatilities, and Sharpe ratios. For each sector, the portfolios that yield the highest cumulative return, the lowest volatility, and the maximum Sharpe Ratio over the training and the test periods are identified. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 402,002 |
1810.06337 | Keyless Semi-Quantum Point-to-point Communication Protocol with Low Resource Requirements | Full quantum capability devices can provide secure communications, but they are challenging to make portable given the current technology. Besides, classical portable devices are unable to construct communication channels resistant to quantum computers. Hence, communication security on portable devices cannot be guaranteed. Semi-Quantum Communication (SQC) attempts to break the quandary by lowering the receiver's required quantum capability so that secure communications can be implemented on a portable device. However, all SQC protocols have low qubit efficiency and complex hardware implementations. The protocols involving quantum entanglement require linear Entanglement Preservation Time (EPT) and linear quregister size. In this paper, we propose two new keyless SQC protocols that address the aforementioned weaknesses. They are named Economic Keyless Semi-Quantum Point-to-point Communication (EKSQPC) and Rate Estimation EKSQPC (REKSQPC). They achieve theoretically constant minimal EPT and quregister size, regardless of message length. We show that the new protocols, with low overhead, can detect Measure and Replay Attacks (MRAs). REKSQPC is tolerant to transmission impairments and environmental perturbations. The protocols are based on a new quantum message transmission operation termed Tele-Fetch. Like QKD, their strength depends on physical principles rather than mathematical complexity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 110,419 |
1902.09820 | Robust and Subject-Independent Driving Manoeuvre Anticipation through Domain-Adversarial Recurrent Neural Networks | Through deep learning and computer vision techniques, driving manoeuvres can be predicted accurately a few seconds in advance. Even though adapting a learned model to new drivers and different vehicles is key for robust driver-assistance systems, this problem has received little attention so far. This work proposes to tackle this challenge through domain adaptation, a technique closely related to transfer learning. A proof of concept for the application of a Domain-Adversarial Recurrent Neural Network (DA-RNN) to multi-modal time series driving data is presented, in which domain-invariant features are learned by maximizing the loss of an auxiliary domain classifier. Our implementation is evaluated using a leave-one-driver-out approach on individual drivers from the Brain4Cars dataset, as well as using a new dataset acquired through driving simulations, yielding an average increase in performance of 30% and 114% respectively compared to no adaptation. We also show the importance of fine-tuning sections of the network to optimise the extraction of domain-independent features. The results demonstrate the applicability of the approach to driver-assistance systems as well as training and simulation environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,513 |
2110.05223 | Continual Learning with Differential Privacy | In this paper, we focus on preserving differential privacy (DP) in continual learning (CL), in which we train ML models to learn a sequence of new tasks while memorizing previous tasks. We first introduce a notion of continual adjacent databases to bound the sensitivity of any data record participating in the training process of CL. Based upon that, we develop a new DP-preserving algorithm for CL with a data sampling strategy to quantify the privacy risk of training data in the well-known Averaged Gradient Episodic Memory (A-GEM) approach by applying a moments accountant. Our algorithm provides formal guarantees of privacy for data records across tasks in CL. Preliminary theoretical analysis and evaluations show that our mechanism tightens the privacy loss while maintaining a promising model utility. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 260,208 |
2204.12831 | The Revisiting Problem in Simultaneous Localization and Mapping: A Survey on Visual Loop Closure Detection | Where am I? This is one of the most critical questions that any intelligent system should answer to decide whether it navigates to a previously visited area. This problem has long been acknowledged for its challenging nature in simultaneous localization and mapping (SLAM), wherein the robot needs to correctly associate the incoming sensory data to the database allowing consistent map generation. The significant advances in computer vision achieved over the last 20 years, the increased computational power, and the growing demand for long-term exploration contributed to efficiently performing such a complex task with inexpensive perception sensors. In this article, visual loop closure detection, which formulates a solution based solely on appearance input data, is surveyed. We start by briefly introducing place recognition and SLAM concepts in robotics. Then, we describe a loop closure detection system's structure, covering an extensive collection of topics, including the feature extraction, the environment representation, the decision-making step, and the evaluation process. We conclude by discussing open and new research challenges, particularly concerning the robustness in dynamic environments, the computational complexity, and scalability in long-term operations. The article aims to serve as a tutorial and a position paper for newcomers to visual loop closure detection. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 293,622 |
2007.10004 | Deep Image Clustering with Category-Style Representation | Deep clustering which adopts deep neural networks to obtain optimal representations for clustering has been widely studied recently. In this paper, we propose a novel deep image clustering framework to learn a category-style latent representation in which the category information is disentangled from image style and can be directly used as the cluster assignment. To achieve this goal, mutual information maximization is applied to embed relevant information in the latent representation. Moreover, augmentation-invariant loss is employed to disentangle the representation into category part and style part. Last but not least, a prior distribution is imposed on the latent representation to ensure the elements of the category vector can be used as the probabilities over clusters. Comprehensive experiments demonstrate that the proposed approach outperforms state-of-the-art methods significantly on five public datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 188,137 |
1903.04988 | Cascaded Projection: End-to-End Network Compression and Acceleration | We propose a data-driven approach for deep convolutional neural network compression that achieves high accuracy with high throughput and low memory requirements. Current network compression methods either find a low-rank factorization of the features that requires more memory, or select only a subset of features by pruning entire filter channels. We propose the Cascaded Projection (CaP) compression method that projects the output and input filter channels of successive layers to a unified low dimensional space based on a low-rank projection. We optimize the projection to minimize classification loss and the difference between the next layer's features in the compressed and uncompressed networks. To solve this non-convex optimization problem we propose a new optimization method of a proxy matrix using backpropagation and Stochastic Gradient Descent (SGD) with geometric constraints. Our cascaded projection approach leads to improvements in all critical areas of network compression: high accuracy, low memory consumption, low parameter count and high processing speed. The proposed CaP method demonstrates state-of-the-art results compressing VGG16 and ResNet networks with over 4x reduction in the number of computations and excellent performance in top-5 accuracy on the ImageNet dataset before and after fine-tuning. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 124,079 |
2303.14115 | Principles of Forgetting in Domain-Incremental Semantic Segmentation in Adverse Weather Conditions | Deep neural networks for scene perception in automated vehicles achieve excellent results for the domains they were trained on. However, in real-world conditions, the domain of operation and its underlying data distribution are subject to change. Adverse weather conditions, in particular, can significantly decrease model performance when such data are not available during training. Additionally, when a model is incrementally adapted to a new domain, it suffers from catastrophic forgetting, causing a significant drop in performance on previously observed domains. Despite recent progress in reducing catastrophic forgetting, its causes and effects remain obscure. Therefore, we study how the representations of semantic segmentation models are affected during domain-incremental learning in adverse weather conditions. Our experiments and representational analyses indicate that catastrophic forgetting is primarily caused by changes to low-level features in domain-incremental learning and that learning more general features on the source domain using pre-training and image augmentations leads to efficient feature reuse in subsequent tasks, which drastically reduces catastrophic forgetting. These findings highlight the importance of methods that facilitate generalized features for effective continual learning algorithms. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 353,957 |
2410.05754 | Simple Relative Deviation Bounds for Covariance and Gram Matrices | We provide non-asymptotic, relative deviation bounds for the eigenvalues of empirical covariance and gram matrices in general settings. Unlike typical uniform bounds, which may fail to capture the behavior of smaller eigenvalues, our results provide sharper control across the spectrum. Our analysis is based on a general-purpose theorem that allows one to convert existing uniform bounds into relative ones. The theorems and techniques emphasize simplicity and should be applicable across various settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 495,895 |
2408.17437 | SYNTHEVAL: Hybrid Behavioral Testing of NLP Models with Synthetic CheckLists | Traditional benchmarking in NLP typically involves using static held-out test sets. However, this approach often results in an overestimation of performance and lacks the ability to offer comprehensive, interpretable, and dynamic assessments of NLP models. Recently, works like DynaBench (Kiela et al., 2021) and CheckList (Ribeiro et al., 2020) have addressed these limitations through behavioral testing of NLP models with test types generated by a multistep human-annotated pipeline. Unfortunately, manually creating a variety of test types requires much human labor, often at prohibitive cost. In this work, we propose SYNTHEVAL, a hybrid behavioral testing framework that leverages large language models (LLMs) to generate a wide range of test types for a comprehensive evaluation of NLP models. SYNTHEVAL first generates sentences via LLMs using controlled generation, and then identifies challenging examples by comparing the predictions made by LLMs with task-specific NLP models. In the last stage, human experts investigate the challenging examples, manually design templates, and identify the types of failures the task-specific models consistently exhibit. We apply SYNTHEVAL to two classification tasks, sentiment analysis and toxic language detection, and show that our framework is effective in identifying weaknesses of strong models on these tasks. We share our code in https://github.com/Loreley99/SynthEval_CheckList. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 484,704 |
1812.11422 | Loss Aversion in Recommender Systems: Utilizing Negative User Preference to Improve Recommendation Quality | Negative user preference is an important context that is not sufficiently utilized by many existing recommender systems. This context is especially useful in scenarios where the cost of negative items is high for the users. In this work, we describe a new recommender algorithm that explicitly models negative user preferences in order to recommend more positive items at the top of recommendation-lists. We build upon an existing machine-learning model to incorporate the contextual information provided by negative user preference. With experimental evaluations on two openly available datasets, we show that our method is able to improve recommendation quality: by improving accuracy and at the same time reducing the number of negative items at the top of recommendation-lists. Our work demonstrates the value of the contextual information provided by negative feedback, and can also be extended to signed social networks and link prediction in other networks. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 117,556 |
1904.08774 | Decoding High-Order Interleaved Rank-Metric Codes | This paper presents an algorithm for decoding homogeneous interleaved codes of high interleaving order in the rank metric. The new decoder is an adaption of the Hamming-metric decoder by Metzner and Kapturowski (1990) and guarantees to correct all rank errors of weight up to $d-2$ whose rank over the large base field of the code equals the number of errors, where $d$ is the minimum rank distance of the underlying code. In contrast to previously-known decoding algorithms, the new decoder works for any rank-metric code, not only Gabidulin codes. It is purely based on linear-algebraic computations, and has an explicit and easy-to-handle success condition. Furthermore, a lower bound on the decoding success probability for random errors of a given weight is derived. The relation of the new algorithm to existing interleaved decoders in the special case of Gabidulin codes is given. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 128,176 |
2405.16528 | LoQT: Low-Rank Adapters for Quantized Pretraining | Despite advances using low-rank adapters and quantization, pretraining of large models on consumer hardware has not been possible without model sharding, offloading during training, or per-layer gradient updates. To address these limitations, we propose Low-Rank Adapters for Quantized Training (LoQT), a method for efficiently training quantized models. LoQT uses gradient-based tensor factorization to initialize low-rank trainable weight matrices that are periodically merged into quantized full-rank weight matrices. Our approach is suitable for both pretraining and fine-tuning models. We demonstrate this for language modeling and downstream task adaptation, finding that LoQT enables efficient training of models up to 7B parameters on a 24GB GPU. We also demonstrate the feasibility of training a 13B model using per-layer gradient updates on the same hardware. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 457,473 |
1405.7869 | Integrating Vague Association Mining with Markov Model | The increasing demand on the World Wide Web raises the need to predict users' web page requests. The most widely used approach to predicting web pages is the pattern discovery process of Web usage mining. This process involves many techniques, such as Markov models, association rules, and clustering. Fuzzy theory has been combined with different techniques for better results. Our focus is on Markov models. This paper introduces vague rules with Markov models for greater accuracy, using vague set theory. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 33,501 |
2408.13358 | Shape-Preserving Generation of Food Images for Automatic Dietary Assessment | Traditional dietary assessment methods heavily rely on self-reporting, which is time-consuming and prone to bias. Recent advancements in Artificial Intelligence (AI) have revealed new possibilities for dietary assessment, particularly through analysis of food images. Recognizing foods and estimating food volumes from images are known as the key procedures for automatic dietary assessment. However, both procedures require large amounts of training images labeled with food names and volumes, which are currently unavailable. Alternatively, recent studies have indicated that training images can be artificially generated using Generative Adversarial Networks (GANs). Nonetheless, convenient generation of large amounts of food images with known volumes remains a challenge with the existing techniques. In this work, we present a simple GAN-based neural network architecture for conditional food image generation. The shapes of the food and container in the generated images closely resemble those in the reference input image. Our experiments demonstrate the realism of the generated images and the shape-preserving capabilities of the proposed framework. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 483,113 |
2008.10313 | Improved Mutual Mean-Teaching for Unsupervised Domain Adaptive Re-ID | In this technical report, we present our submission to the VisDA Challenge in ECCV 2020, where we achieved one of the top-performing results on the leaderboard. Our solution is based on the Structured Domain Adaptation (SDA) and Mutual Mean-Teaching (MMT) frameworks. SDA, a domain-translation-based framework, focuses on carefully translating the source-domain images to the target domain. MMT, a pseudo-label-based framework, focuses on conducting pseudo label refinery with robust soft labels. Specifically, there are three main steps in our training pipeline. (i) We adopt SDA to generate source-to-target translated images, and (ii) such images serve as informative training samples to pre-train the network. (iii) The pre-trained network is further fine-tuned by MMT on the target domain. Note that we design an improved MMT (dubbed MMT+) to further mitigate the label noise by modeling inter-sample relations across the two domains and maintaining the instance discrimination. Our proposed method achieved 74.78% mAP, ranking 2nd out of 153 teams. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 192,962 |
2001.02519 | Temperature states in Powder Bed Fusion additive manufacturing are structurally controllable and observable | Powder Bed Fusion (PBF) is a type of Additive Manufacturing (AM) technology that builds parts in a layer-by-layer fashion out of a bed of metal powder via the selective melting action of a laser or electron beam heat source. The technology has become widespread; however, the demand is growing for closed loop process monitoring and control in PBF systems to replace the open loop architectures that exist today. Controls-based models have potential to satisfy this demand by utilizing computationally tractable, simplified models while also decreasing the error associated with these models. This paper introduces a controls theoretic analysis of the PBF process, demonstrating models of PBF that are asymptotically stable, stabilizable, and detectable. We show that linear models of PBF are structurally controllable and structurally observable, provided that any portion of the build is exposed to the energy source and measurement, we provide conditions for which time-invariant PBF models are classically controllable/observable, and we demonstrate energy requirements for performing state estimation and control for time-invariant systems. This paper therefore presents the foundation for an effective means of realizing closed loop PBF quality control. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 159,755 |
2406.03406 | LncRNA-disease association prediction method based on heterogeneous information completion and convolutional neural network | Emerging research shows that lncRNA has crucial research value in a series of complex human diseases. Therefore, the accurate identification of lncRNA-disease associations (LDAs) is very important for the warning and treatment of diseases. However, most of the existing methods have limitations in identifying nonlinear LDAs, and it remains a huge challenge to predict new LDAs. In this paper, a deep learning model based on a heterogeneous network and convolutional neural network (CNN) is proposed for lncRNA-disease association prediction, named HCNNLDA. A heterogeneous network containing the lncRNA, disease, and miRNA nodes is constructed first. The embedding matrix of a lncRNA-disease node pair is constructed according to various biological premises about lncRNAs, diseases, and miRNAs. Then, the low-dimensional feature representation is fully learned by the convolutional neural network. In the end, the XGBoost classifier model is trained to predict the potential LDAs. HCNNLDA obtains a high AUC value of 0.9752 and AUPR of 0.9740 under 5-fold cross-validation. The experimental results show that the proposed model has better performance than several of the latest prediction models. Meanwhile, the effectiveness of HCNNLDA in identifying novel LDAs is further demonstrated by case studies of three diseases. To sum up, HCNNLDA is a feasible calculation model to predict LDAs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 461,225 |
2409.00880 | Compressing VAE-Based Out-of-Distribution Detectors for Embedded Deployment | Out-of-distribution (OOD) detectors can act as safety monitors in embedded cyber-physical systems by identifying samples outside a machine learning model's training distribution to prevent potentially unsafe actions. However, OOD detectors are often implemented using deep neural networks, which makes it difficult to meet real-time deadlines on embedded systems with memory and power constraints. We consider the class of variational autoencoder (VAE) based OOD detectors where OOD detection is performed in latent space, and apply quantization, pruning, and knowledge distillation. These techniques have been explored for other deep models, but no work has considered their combined effect on latent space OOD detection. While these techniques increase the VAE's test loss, this does not correspond to a proportional decrease in OOD detection performance and we leverage this to develop lean OOD detectors capable of real-time inference on embedded CPUs and GPUs. We propose a design methodology that combines all three compression techniques and yields a significant decrease in memory and execution time while maintaining AUROC for a given OOD detector. We demonstrate this methodology with two existing OOD detectors on a Jetson Nano and reduce GPU and CPU inference time by 20% and 28% respectively while keeping AUROC within 5% of the baseline. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 485,115 |
2006.07709 | Auditing Differentially Private Machine Learning: How Private is Private SGD? | We investigate whether Differentially Private SGD offers better privacy in practice than what is guaranteed by its state-of-the-art analysis. We do so via novel data poisoning attacks, which we show correspond to realistic privacy attacks. While previous work (Ma et al., arXiv 2019) proposed this connection between differential privacy and data poisoning as a defense against data poisoning, our use as a tool for understanding the privacy of a specific mechanism is new. More generally, our work takes a quantitative, empirical approach to understanding the privacy afforded by specific implementations of differentially private algorithms that we believe has the potential to complement and influence analytical work on differential privacy. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 181,908 |
1809.06225 | Investigation of Multimodal Features, Classifiers and Fusion Methods for Emotion Recognition | Automatic emotion recognition is a challenging task. In this paper, we present our effort for the audio-video based sub-challenge of the Emotion Recognition in the Wild (EmotiW) 2018 challenge, which requires participants to assign a single emotion label to the video clip from the six universal emotions (Anger, Disgust, Fear, Happiness, Sad and Surprise) and Neutral. The proposed multimodal emotion recognition system takes audio, video and text information into account. In addition to handcrafted features, we also extract bottleneck features from deep neural networks (DNNs) via transfer learning. Both temporal classifiers and non-temporal classifiers are evaluated to obtain the best unimodal emotion classification result. Then probabilities are extracted and passed into the Beam Search Fusion (BS-Fusion). We test our method in the EmotiW 2018 challenge and we obtain promising results. Compared with the baseline system, there is a significant improvement. We achieve 60.34% accuracy on the testing dataset, which is only 1.5% lower than the winner. It shows that our method is very competitive. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 108,008 |
2003.08837 | Vulnerabilities of Connectionist AI Applications: Evaluation and Defence | This article deals with the IT security of connectionist artificial intelligence (AI) applications, focusing on threats to integrity, one of the three IT security goals. Such threats are for instance most relevant in prominent AI computer vision applications. In order to present a holistic view on the IT security goal integrity, many additional aspects such as interpretability, robustness and documentation are taken into account. A comprehensive list of threats and possible mitigations is presented by reviewing the state-of-the-art literature. AI-specific vulnerabilities such as adversarial attacks and poisoning attacks as well as their AI-specific root causes are discussed in detail. Additionally and in contrast to former reviews, the whole AI supply chain is analysed with respect to vulnerabilities, including the planning, data acquisition, training, evaluation and operation phases. The discussion of mitigations is likewise not restricted to the level of the AI system itself but rather advocates viewing AI systems in the context of their supply chains and their embeddings in larger IT infrastructures and hardware devices. Based on this and the observation that adaptive attackers may circumvent any single published AI-specific defence to date, the article concludes that single protective measures are not sufficient but rather multiple measures on different levels have to be combined to achieve a minimum level of IT security for AI applications. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | true | 168,886 |
2307.09055 | Robust Data Clustering with Outliers via Transformed Tensor Low-Rank Representation | Recently, tensor low-rank representation (TLRR) has become a popular tool for tensor data recovery and clustering, due to its empirical success and theoretical guarantees. However, existing TLRR methods consider Gaussian or gross sparse noise, inevitably leading to performance degradation when the tensor data are contaminated by outliers or sample-specific corruptions. This paper develops an outlier-robust tensor low-rank representation (OR-TLRR) method that provides outlier detection and tensor data clustering simultaneously based on the t-SVD framework. For tensor observations with arbitrary outlier corruptions, OR-TLRR has provable performance guarantee for exactly recovering the row space of clean data and detecting outliers under mild conditions. Moreover, an extension of OR-TLRR is proposed to handle the case when parts of the data are missing. Finally, extensive experimental results on synthetic and real data demonstrate the effectiveness of the proposed algorithms. We release our code at https://github.com/twugithub/2024-AISTATS-ORTLRR. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 380,043 |
2501.10630 | Exploring the Potential of Large Language Models for Massive MIMO CSI Feedback | Large language models (LLMs) have achieved remarkable success across a wide range of tasks, particularly in natural language processing and computer vision. This success naturally raises an intriguing yet unexplored question: Can LLMs be harnessed to tackle channel state information (CSI) compression and feedback in massive multiple-input multiple-output (MIMO) systems? Efficient CSI feedback is a critical challenge in next-generation wireless communication. In this paper, we pioneer the use of LLMs for CSI compression, introducing a novel framework that leverages the powerful denoising capabilities of LLMs -- capable of error correction in language tasks -- to enhance CSI reconstruction performance. To effectively adapt LLMs to CSI data, we design customized pre-processing, embedding, and post-processing modules tailored to the unique characteristics of wireless signals. Extensive numerical results demonstrate the promising potential of LLMs in CSI feedback, opening up possibilities for this research direction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 525,594
1711.09026 | Long-Term On-Board Prediction of People in Traffic Scenes under Uncertainty | Progress towards advanced systems for assisted and autonomous driving is leveraging recent advances in recognition and segmentation methods. Yet, we are still facing challenges in bringing reliable driving to inner cities, as those are composed of highly dynamic scenes observed from a moving platform at considerable speeds. Anticipation becomes a key element in order to react timely and prevent accidents. In this paper we argue that it is necessary to predict at least 1 second and we thus propose a new model that jointly predicts ego motion and people trajectories over such large time horizons. We pay particular attention to modeling the uncertainty of our estimates arising from the non-deterministic nature of natural traffic scenes. Our experimental results show that it is indeed possible to predict people trajectories at the desired time horizons and that our uncertainty estimates are informative of the prediction error. We also show that both sequence modeling of trajectories as well as our novel method of long term odometry prediction are essential for best performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 85,311
2304.07247 | The R-mAtrIx Net | We provide a novel Neural Network architecture that can: i) output R-matrix for a given quantum integrable spin chain, ii) search for an integrable Hamiltonian and the corresponding R-matrix under assumptions of certain symmetries or other restrictions, iii) explore the space of Hamiltonians around already learned models and reconstruct the family of integrable spin chains which they belong to. The neural network training is done by minimizing loss functions encoding Yang-Baxter equation, regularity and other model-specific restrictions such as hermiticity. Holomorphy is implemented via the choice of activation functions. We demonstrate the work of our Neural Network on the two-dimensional spin chains of difference form. In particular, we reconstruct the R-matrices for all 14 classes. We also demonstrate its utility as an \textit{Explorer}, scanning a certain subspace of Hamiltonians and identifying integrable classes after clusterisation. The last strategy can be used in future to carve out the map of integrable spin chains in higher dimensions and in more general settings where no analytical methods are available. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 358,280 |
2209.11035 | MonoByte: A Pool of Monolingual Byte-level Language Models | The zero-shot cross-lingual ability of models pretrained on multilingual and even monolingual corpora has spurred many hypotheses to explain this intriguing empirical result. However, due to the costs of pretraining, most research uses public models whose pretraining methodology, such as the choice of tokenization, corpus size, and computational budget, might differ drastically. When researchers pretrain their own models, they often do so under a constrained budget, and the resulting models might underperform significantly compared to SOTA models. These experimental differences led to various inconsistent conclusions about the nature of the cross-lingual ability of these models. To help further research on the topic, we released 10 monolingual byte-level models rigorously pretrained under the same configuration with a large compute budget (equivalent to 420 days on a V100) and corpora that are 4 times larger than the original BERT's. Because they are tokenizer-free, the problem of unseen token embeddings is eliminated, thus allowing researchers to try a wider range of cross-lingual experiments in languages with different scripts. Additionally, we release two models pretrained on non-natural language texts that can be used in sanity-check experiments. Experiments on QA and NLI tasks show that our monolingual models achieve competitive performance to the multilingual one, and hence can be served to strengthen our understanding of cross-lingual transferability in language models. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 319,056 |