Dataset schema:
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each)
- __index_level_0__: int64 (range 0 to 541k)
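The 18 boolean columns are one-hot category flags. A minimal sketch of collapsing them into a label list per row (the sample row below is illustrative; real rows would be loaded from the underlying dataset file):

```python
# One-hot category columns, in schema order.
CATEGORY_COLS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def categories(row: dict) -> list:
    """Return the category labels whose boolean flag is True in a row."""
    return [c for c in CATEGORY_COLS if row.get(c, False)]

# Illustrative row (id and flag taken from the first record below).
sample = {"id": "2501.02208", "cs.LG": True}
print(categories(sample))  # ['cs.LG']
```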
id: 2501.02208
title: Robust Multi-Dimensional Scaling via Accelerated Alternating Projections
abstract: We consider the robust multi-dimensional scaling (RMDS) problem in this paper. The goal is to localize point locations from pairwise distances that may be corrupted by outliers. Inspired by classic MDS theory and by nonconvex approaches to the robust principal component analysis (RPCA) problem, we propose an alternating-projection-based algorithm that is further accelerated by a tangent space projection technique. For the proposed algorithm, if the outliers are sparse enough, we establish linear convergence of the reconstructed points to the original points after centering and rotation alignment. Numerical experiments verify the state-of-the-art performance of the proposed algorithm.
categories: cs.LG
__index_level_0__: 522398
id: 2408.02904
title: Enabling Intelligent Traffic Systems: A Deep Learning Method for Accurate Arabic License Plate Recognition
abstract: This paper introduces a novel two-stage framework for accurate Egyptian Vehicle License Plate Recognition (EVLPR). The first stage employs image processing techniques to reliably localize license plates, while the second stage utilizes a custom-designed deep learning model for robust Arabic character recognition. The proposed system achieves a remarkable 99.3% accuracy on a diverse dataset, surpassing existing approaches. Its potential applications extend to intelligent traffic management, including traffic violation detection and parking optimization. Future research will focus on enhancing the system's capabilities through architectural refinements, expanded datasets, and addressing system dependencies.
categories: cs.AI, cs.CV
__index_level_0__: 478810
id: 2406.15540
title: Specify What? Enhancing Neural Specification Synthesis by Symbolic Methods
abstract: We investigate how combinations of Large Language Models (LLMs) and symbolic analyses can be used to synthesise specifications of C programs. The LLM prompts are augmented with outputs from two formal-methods tools in the Frama-C ecosystem, Pathcrawler and EVA, to produce C program annotations in the specification language ACSL. We demonstrate how the addition of symbolic analysis to the workflow impacts the quality of annotations: information about input/output examples from Pathcrawler produces more context-aware annotations, while the inclusion of EVA reports yields annotations more attuned to runtime errors. In addition, we show that the method infers the program's intent rather than its behaviour, by generating specifications for buggy programs and observing the robustness of the results against bugs.
categories: cs.LG, Other
__index_level_0__: 466772
id: 2403.00520
title: IAI MovieBot 2.0: An Enhanced Research Platform with Trainable Neural Components and Transparent User Modeling
abstract: While interest in conversational recommender systems has been on the rise, operational systems suitable for serving as research platforms for comprehensive studies are currently lacking. This paper introduces an enhanced version of the IAI MovieBot conversational movie recommender system, aiming to evolve it into a robust and adaptable platform for conducting user-facing experiments. The key highlights of this enhancement include the addition of trainable neural components for natural language understanding and dialogue policy, transparent and explainable modeling of user preferences, along with improvements in the user interface and research infrastructure.
categories: cs.IR
__index_level_0__: 434008
id: 2111.04473
title: Senatus -- A Fast and Accurate Code-to-Code Recommendation Engine
abstract: Machine learning on source code (MLOnCode) is a popular research field that has been driven by the availability of large-scale code repositories and the development of powerful probabilistic and deep learning models for mining source code. Code-to-code recommendation is a task in MLOnCode that aims to recommend relevant, diverse and concise code snippets that usefully extend the code currently being written by a developer in their integrated development environment (IDE). Code-to-code recommendation engines hold the promise of increasing developer productivity by reducing context switching from the IDE and increasing code reuse. Existing code-to-code recommendation engines do not scale gracefully to large codebases, exhibiting a linear growth in query time as the code repository increases in size. In addition, existing engines fail to account for the global statistics of code repositories in the ranking function, such as the distribution of code snippet lengths, leading to sub-optimal retrieval results. We address both of these weaknesses with \emph{Senatus}, a new code-to-code recommendation engine. At the core of Senatus is \emph{De-Skew} LSH, a new locality sensitive hashing (LSH) algorithm that indexes the data for fast (sub-linear time) retrieval while also counteracting the skewness in the snippet length distribution using novel abstract syntax tree-based feature scoring and selection algorithms. We evaluate Senatus and find the recommendations to be of higher quality than competing baselines, while achieving faster search. For example, on the CodeSearchNet dataset Senatus improves F1 by 31.21\% and achieves 147.9\emph{x} faster query time compared to Facebook Aroma. Senatus also outperforms standard MinHash LSH by 29.2\% F1 with 51.02\emph{x} faster query time.
categories: cs.AI, Other
__index_level_0__: 265494
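Senatus's De-Skew LSH is not spelled out in the abstract; as background, here is a minimal pure-Python sketch of the standard MinHash signature scheme that the "MinHash LSH" baseline refers to (the hash-parameter choices and toy token sets are illustrative):

```python
import random

def minhash_signature(tokens, num_hashes=16, seed=0):
    """MinHash signature of a token set using num_hashes affine hash
    functions h(x) = (a*x + b) mod p over a large prime p."""
    p = 2_147_483_647  # Mersenne prime, a common choice
    rng = random.Random(seed)
    params = [(rng.randrange(1, p), rng.randrange(p)) for _ in range(num_hashes)]
    values = [hash(t) % p for t in tokens]
    return [min((a * x + b) % p for x in values) for a, b in params]

def estimated_jaccard(sig_a, sig_b):
    """Fraction of agreeing signature slots estimates Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

snippet_a = {"def", "sort", "items", "return"}
snippet_b = {"def", "sort", "array", "return"}
print(estimated_jaccard(minhash_signature(snippet_a), minhash_signature(snippet_b)))
```

Identical token sets always agree on every slot, so their estimated similarity is exactly 1.0.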
id: 2006.10628
title: Offline detection of change-points in the mean for stationary graph signals
abstract: This paper addresses the problem of segmenting a stream of graph signals: we aim to detect changes in the mean of a multivariate signal defined over the nodes of a known graph. We propose an offline method that relies on the concept of graph signal stationarity and allows the convenient translation of the problem from the original vertex domain to the spectral domain (Graph Fourier Transform), where it is much easier to solve. Although the obtained spectral representation is sparse in real applications, to the best of our knowledge this property has not been sufficiently exploited in the existing related literature. Our change-point detection method adopts a model selection approach that takes into account the sparsity of the spectral representation and automatically determines the number of change-points. Our detector comes with a proof of a non-asymptotic oracle inequality. Numerical experiments demonstrate the performance of the proposed method.
categories: cs.LG
__index_level_0__: 182951
id: 2502.12929
title: Flow-of-Options: Diversified and Improved LLM Reasoning by Thinking Through Options
abstract: We present a novel reasoning approach called Flow-of-Options (FoO), designed to address intrinsic biases in Large Language Models (LLMs). FoO enables LLMs to systematically explore a diverse range of possibilities in their reasoning, as demonstrated by an FoO-based agentic system for autonomously solving Machine Learning tasks (AutoML). Our framework outperforms state-of-the-art baselines, achieving improvements of 38.2% - 69.2% on standard data science tasks, and 37.4% - 47.9% on therapeutic chemistry tasks. With an overall operation cost under $1 per task, our framework is well-suited for cost-sensitive applications. Beyond classification and regression, we illustrate the broader applicability of our FoO-based agentic system to tasks such as reinforcement learning and image generation. Our framework presents significant advancements compared to current state-of-the-art agentic systems for AutoML, due to the benefits of FoO in enforcing diversity in LLM solutions through compressed, explainable representations that also support long-term memory when combined with case-based reasoning.
categories: cs.AI, cs.LG, cs.CL
__index_level_0__: 535117
id: 2006.11835
title: An Overview on the Landscape of R Packages for Credit Scoring
abstract: The credit scoring industry has a long tradition of using statistical tools for loan default probability prediction, and domain-specific standards were established long before the hype of machine learning. Although several commercial software companies offer dedicated solutions for credit scorecard modelling, explicit R packages for this purpose were missing for a long time. In recent years this has changed, and several packages dedicated to credit scoring have been developed. The aim of this paper is to give a structured overview of these packages. This may guide users in selecting the appropriate functions for a desired purpose and will hopefully contribute to directing future development activities. The paper is organized along the chain of subsequent modelling steps that form the typical scorecard development process.
categories: cs.LG
__index_level_0__: 183387
id: 2010.10059
title: Very Fast Streaming Submodular Function Maximization
abstract: Data summarization has become a valuable tool in understanding even terabytes of data. Due to their compelling theoretical properties, submodular functions have been the focus of summarization algorithms. These algorithms offer worst-case approximation guarantees at the expense of higher computation and memory requirements. However, many practical applications do not fall under this worst case but are usually much better behaved. In this paper, we propose a new submodular function maximization algorithm called ThreeSieves, which ignores the worst case but delivers a good solution with high probability. It selects the most informative items from a data stream on the fly and maintains provable performance on a fixed memory budget. In an extensive evaluation, we compare our method against $6$ other methods on $8$ different datasets, with and without concept drift. We show that our algorithm outperforms current state-of-the-art algorithms and, at the same time, uses fewer resources. Last, we highlight a real-world use case of our algorithm for data summarization in gamma-ray astronomy. We make our code publicly available at https://github.com/sbuschjaeger/SubmodularStreamingMaximization.
categories: cs.LG, Other
__index_level_0__: 201774
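ThreeSieves's exact rule is not given in the abstract; a minimal sketch of the generic fixed-threshold streaming selection that such algorithms build on, with a toy set-coverage gain function (the budget and threshold values are illustrative):

```python
def stream_select(stream, budget, threshold, gain):
    """Single-pass streaming selection: keep an item if its marginal gain
    with respect to the current summary meets the threshold, until the
    memory budget is exhausted."""
    summary = []
    for item in stream:
        if len(summary) >= budget:
            break
        if gain(summary, item) >= threshold:
            summary.append(item)
    return summary

def coverage_gain(summary, item):
    """Toy submodular gain: number of elements newly covered by item."""
    covered = set().union(*summary) if summary else set()
    return len(set(item) - covered)

stream = [{1, 2}, {2}, {3, 4}, {1}, {5, 6, 7}]
print(stream_select(stream, budget=3, threshold=2, gain=coverage_gain))
# [{1, 2}, {3, 4}, {5, 6, 7}]
```

Marginal coverage gain is submodular, so a single fixed threshold already yields a reasonable (though not worst-case optimal) summary in one pass.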
id: 2409.15261
title: Identification and Localization of Cometary Activity in Solar System Objects with Machine Learning
abstract: In this chapter, we will discuss the use of Machine Learning methods for the identification and localization of cometary activity for Solar System objects in ground- and space-based wide-field all-sky surveys. We will begin the chapter by discussing the challenges of identifying known and unknown active, extended Solar System objects in the presence of stellar-type sources and the application of classical pre-ML identification techniques and their limitations. We will then transition to the discussion of implementing ML techniques to address the challenge of extended object identification. We will finish with prospective future methods and the application to future surveys such as the Vera C. Rubin Observatory.
categories: cs.AI, cs.LG
__index_level_0__: 490826
id: 1508.07192
title: Varying-coefficient models with isotropic Gaussian process priors
abstract: We study learning problems in which the conditional distribution of the output given the input varies as a function of additional task variables. In varying-coefficient models with Gaussian process priors, a Gaussian process generates the functional relationship between the task variables and the parameters of this conditional. Varying-coefficient models subsume hierarchical Bayesian multitask models, but also generalizations in which the conditional varies continuously, for instance, in time or space. However, Bayesian inference in varying-coefficient models is generally intractable. We show that inference for varying-coefficient models with isotropic Gaussian process priors resolves to standard inference for a Gaussian process that can be solved efficiently. MAP inference in this model resolves to multitask learning using task and instance kernels, and inference for hierarchical Bayesian multitask models can be carried out efficiently using graph-Laplacian kernels. We report on experiments for geospatial prediction.
categories: cs.LG
__index_level_0__: 46385
id: 2007.07595
title: Non-Relational Databases on FPGAs: Survey, Design Decisions, Challenges
abstract: Non-relational database systems (NRDS), such as graph, document, key-value, and wide-column, have gained much attention in various trending (business) application domains like smart logistics, social network analysis, and medical applications, due to their data model variety and scalability. The broad data variety and sheer size of datasets pose unique challenges for the system design and runtime (incl. power consumption). While CPU performance scaling becomes increasingly more difficult, we argue that NRDS can benefit from adding field programmable gate arrays (FPGAs) as accelerators. However, FPGA-accelerated NRDS have not been systematically studied yet. To facilitate understanding of this emerging domain, we explore the fit of FPGA acceleration for NRDS with a focus on data model variety. We define the term NRDS class as a group of non-relational database systems supporting the same data model. This survey describes and categorizes the inherent differences and non-trivial trade-offs of relevant NRDS classes as well as their commonalities in the context of common design decisions when building such a system with FPGAs. For example, we found in the literature that for key-value stores the FPGA should be placed into the system as a smart network interface card (SmartNIC) to benefit from direct access of the FPGA to the network. However, more complex data models and processing of other classes (e.g., graph and document) commonly require more elaborate near-data or socket accelerator placements where the FPGA respectively has the only or shared access to main memory. Across the different classes, FPGAs can be used as communication layer or for acceleration of operators and data access. We close with open research and engineering challenges to outline the future of FPGA-accelerated NRDS.
categories: cs.DB, Other
__index_level_0__: 187382
id: 2202.11864
title: Some Stylometric Remarks on Ovid's Heroides and the Epistula Sapphus
abstract: This article aims to contribute to two well-worn areas of debate in classical Latin philology, relating to Ovid's Heroides. The first is the question of the authenticity (and, to a lesser extent, the correct position) of the letter placed fifteenth by almost every editor -- the so-called Epistula Sapphus (henceforth ES). The secondary question, although perhaps now less fervently debated, is the authenticity of the 'Double Heroides', placed by those who accept them as letters 16-21. I employ a variety of methods drawn from the domain of computational stylometry to consider the poetics and the lexico-grammatical features of these elegiac poems in the broader context of a corpus of 'shorter' (from 20 to 546 lines) elegiac works from five authors (266 poems in all) comprising more or less all of the non-fragmentary classical corpus. Based on a variety of techniques, every measure gives clear indication that the poetic style of the Heroides is Ovidian, but distinctive; they can be accurately isolated from Ovid more broadly. The Single and Double Heroides split into two clear groups, with the ES grouped consistently with the single letters. Furthermore, by comparing the style of the letters with the 'early' (although there are complications in this label) works of the Amores and the late works of the Ex Ponto, the evidence supports sequential composition -- meaning that the ES is correctly placed -- and, further, supports the growing consensus that the double letters were composed significantly later, in exile.
categories: cs.CL
__index_level_0__: 282026
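Stylometric pipelines of the kind described typically start from function-word frequency profiles compared by a similarity measure; a toy sketch below (the short Latin function-word list and the sample lines are illustrative, not the article's actual feature set):

```python
import math
from collections import Counter

# Illustrative Latin function words; real studies use much longer lists.
FUNCTION_WORDS = ["et", "in", "ad", "non", "cum"]

def profile(text):
    """Relative frequency of each function word: a basic stylometric feature vector."""
    counts = Counter(text.lower().split())
    total = sum(counts.values()) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

a = profile("arma virumque cano et in ad non cum et in")
b = profile("arma et in ad litora cum non et")
print(round(cosine(a, b), 3))
```

Authorship methods then cluster or classify poems by such vectors; identical profiles give similarity 1.0.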
id: 1712.09473
title: Sketching for Kronecker Product Regression and P-splines
abstract: TensorSketch is an oblivious linear sketch introduced in Pagh'13 and later used in Pham, Pagh'13 in the context of SVMs for polynomial kernels. It was shown in Avron, Nguyen, Woodruff'14 that TensorSketch provides a subspace embedding, and therefore can be used for canonical correlation analysis, low rank approximation, and principal component regression for the polynomial kernel. We take TensorSketch outside of the context of polynomial kernels, and show its utility in applications in which the underlying design matrix is a Kronecker product of smaller matrices. This allows us to solve Kronecker product regression and non-negative Kronecker product regression, as well as regularized spline regression. Our main technical result then extends TensorSketch to other norms. That is, TensorSketch only provides input sparsity time for Kronecker product regression with respect to the $2$-norm. We show how to solve Kronecker product regression with respect to the $1$-norm in time sublinear in the time required for computing the Kronecker product, as well as for more general $p$-norms.
categories: cs.LG, Other
__index_level_0__: 87360
id: 1509.08465
title: How Many Political Parties Should Brazil Have? A Data-driven Method to Assess and Reduce Fragmentation in Multi-Party Political Systems
abstract: In June 2013, Brazil faced the largest and most significant mass protests in a generation. These were exacerbated by the population's disenchantment towards its highly fragmented party system, which is composed of a very large number of political parties. Under these circumstances, presidents are constrained by informal coalition governments, bringing very harmful consequences to the country. In this work I propose ARRANGE, a dAta dRiven method foR Assessing and reduciNG party fragmEntation in a country. ARRANGE uses as input the roll call data for congress votes on bills and amendments as a proxy for political preferences and ideology. With that, ARRANGE finds the minimum number of parties required to house all congressmen without decreasing party discipline. When applied to Brazil's historical roll call data, ARRANGE was able to generate 23 distinct configurations that, compared with the status quo, have (i) a significantly smaller number of parties, (ii) a higher discipline of partisans towards their parties and (iii) a more even distribution of partisans into parties. ARRANGE is fast and parsimonious, relying on a single, intuitive parameter.
categories: cs.SI, cs.CY, cs.MA
__index_level_0__: 47375
id: 2312.04346
title: Detection and Imputation based Two-Stage Denoising Diffusion Power System Measurement Recovery under Cyber-Physical Uncertainties
abstract: Power system cyber-physical uncertainties, including measurement ambiguities stemming from cyber attacks and data losses, along with system uncertainties introduced by massive renewables and complex dynamics, make it harder to obtain high-quality measurements. Fortunately, denoising diffusion models exhibit powerful learning and generation abilities for the complex underlying physics of the real world. To this end, this paper proposes an improved detection and imputation based two-stage denoising diffusion model (TSDM) to identify and reconstruct the measurements with various cyber-physical uncertainties. The first stage of the model comprises a classifier-guided conditional anomaly detection component, while the second stage involves a diffusion-based measurement imputation component. Moreover, the proposed TSDM adopts optimal variance to accelerate the diffusion generation process with subsequence sampling. Extensive numerical case studies demonstrate that the proposed TSDM can accurately recover power system measurements despite renewables-induced strong randomness and highly nonlinear dynamics. Additionally, the proposed TSDM has stronger robustness compared to existing reconstruction networks and exhibits lower computational complexity than general denoising diffusion models.
categories: cs.LG, cs.CR
__index_level_0__: 413646
id: 2207.13843
title: Deep Learning-Based Acoustic Mosquito Detection in Noisy Conditions Using Trainable Kernels and Augmentations
abstract: In this paper, we demonstrate a unique recipe to enhance the effectiveness of audio machine learning approaches by fusing pre-processing techniques into a deep learning model. Our solution accelerates training and inference performance by optimizing hyper-parameters through training instead of costly random searches to build a reliable mosquito detector from audio signals. The experiments and the results presented here are part of the MOS C submission of the ACM 2022 challenge. Our results outperform the published baseline by 212% on the unpublished test set. We believe that this is one of the best real-world examples of building a robust bio-acoustic system that provides reliable mosquito detection in noisy conditions.
categories: cs.SD, cs.LG
__index_level_0__: 310418
id: 2010.05698
title: Deep Autoencoder based Energy Method for the Bending, Vibration, and Buckling Analysis of Kirchhoff Plates
abstract: In this paper, we present a deep autoencoder based energy method (DAEM) for the bending, vibration and buckling analysis of Kirchhoff plates. The DAEM exploits higher-order continuity and integrates a deep autoencoder and the minimum total potential principle in one framework, yielding an unsupervised feature learning method. The DAEM is a specific type of feedforward deep neural network (DNN) and can also serve as a function approximator. With robust feature extraction capacity, the DAEM can more efficiently identify patterns behind the whole energy system, such as the field variables, natural frequency and critical buckling load factor studied in this paper. The objective function is to minimize the total potential energy. The DAEM performs unsupervised learning based on randomly generated points inside the physical domain so that the total potential energy is minimized at all points. For vibration and buckling analysis, the loss function is constructed based on Rayleigh's principle, and the fundamental frequency and the critical buckling load are extracted. A scaled hyperbolic tangent activation function for the underlying mechanical model is presented, which meets the continuity requirement and alleviates the gradient vanishing/explosion problems in bending analysis. The DAEM can be easily implemented; we employ the PyTorch library and the L-BFGS optimizer. A comprehensive study of the DAEM configuration is performed for several numerical examples with various geometries, load conditions, and boundary conditions.
categories: cs.LG, Other
__index_level_0__: 200230
id: 2411.11536
title: Hierarchical-Graph-Structured Edge Partition Models for Learning Evolving Community Structure
abstract: We propose a novel dynamic network model to capture evolving latent communities within temporal networks. To achieve this, we decompose each observed dynamic edge between vertices using a Poisson-gamma edge partition model, assigning each vertex to one or more latent communities through \emph{nonnegative} vertex-community memberships. Specifically, hierarchical transition kernels are employed to model the interactions between these latent communities in the observed temporal network. A hierarchical graph prior is placed on the transition structure of the latent communities, allowing us to model how they evolve and interact over time. Consequently, our dynamic network model enables the inferred community structure to merge, split, and interact with one another, providing a comprehensive understanding of complex network dynamics. Experiments on various real-world network datasets demonstrate that the proposed model not only effectively uncovers interpretable latent structures but also surpasses other state-of-the-art dynamic network models in the tasks of link prediction and community detection.
categories: cs.SI, cs.LG
__index_level_0__: 509092
id: 1201.2483
title: Duality of Channel Encoding and Decoding - Part I: Rate-1 Binary Convolutional Codes
abstract: In this paper, we revisit the forward, backward and bidirectional Bahl-Cocke-Jelinek-Raviv (BCJR) soft-input soft-output (SISO) maximum a posteriori probability (MAP) decoding process of rate-1 binary convolutional codes. From this we establish some interesting explicit relationships between encoding and decoding of rate-1 convolutional codes. We observe that the forward and backward BCJR SISO MAP decoders can be simply represented by their dual SISO channel encoders using shift registers in the complex number field. Similarly, the bidirectional MAP decoding can be implemented by linearly combining the shift register contents of the dual SISO encoders of the respective forward and backward decoders. The dual encoder structures for various recursive and non-recursive rate-1 convolutional codes are derived.
categories: cs.IT
__index_level_0__: 13781
id: 1707.04987
title: Online Multi-Armed Bandit
abstract: We introduce a novel variant of the multi-armed bandit problem, in which bandits are streamed one at a time to the player, and at each point, the player can either choose to pull the current bandit or move on to the next bandit. Once a player has moved on from a bandit, they may never visit it again, which is a crucial difference between our problem and classic multi-armed bandit problems. In this online context, we study Bernoulli bandits (bandits with payout Ber($p_i$) for some underlying mean $p_i$) with underlying means drawn i.i.d. from various distributions, including the uniform distribution, and in general, all distributions that have a CDF satisfying certain differentiability conditions near zero. In all cases, we suggest several strategies and investigate their expected performance. Furthermore, we bound the performance of any optimal strategy and show that the strategies we have suggested are indeed optimal up to a constant factor. We also investigate the case where the distribution from which the underlying means are drawn is not known ahead of time. We are again able to suggest algorithms that are optimal up to a constant factor for this case, given certain mild conditions on the universe of distributions.
categories: cs.AI, Other
__index_level_0__: 77143
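The streamed-bandit setting is easy to simulate; the toy strategy below samples each Bernoulli bandit a fixed number of times and commits once an empirical mean clears a threshold (the strategy and all parameter values are illustrative, not the paper's optimal policies):

```python
import random

def online_bandit(means, pulls_per_bandit=20, accept_threshold=0.8, rng=None):
    """Stream bandits one at a time; sample each briefly and commit to the
    first bandit whose empirical mean clears the threshold. Bandits passed
    over can never be revisited, so if none qualifies we are stuck with the
    last one. Returns the index of the bandit committed to."""
    rng = rng or random.Random(0)
    for i, p in enumerate(means):
        wins = sum(rng.random() < p for _ in range(pulls_per_bandit))
        if wins / pulls_per_bandit >= accept_threshold:
            return i
    return len(means) - 1

means = [0.3, 0.5, 0.95, 0.6]  # hidden Bernoulli means, streamed in order
print(online_bandit(means))
```

The irreversibility of skipping is what separates this from the classic bandit problem: exploration consumes arms permanently.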
id: 2201.11374
title: Systematic Investigation of Strategies Tailored for Low-Resource Settings for Low-Resource Dependency Parsing
abstract: In this work, we focus on low-resource dependency parsing for multiple languages. Several strategies are tailored to enhance performance in low-resource scenarios. While these are well-known to the community, it is not trivial to select the best-performing combination of these strategies for a low-resource language that we are interested in, and not much attention has been given to measuring the efficacy of these strategies. We experiment with 5 low-resource strategies for our ensembled approach on 7 Universal Dependency (UD) low-resource languages. Our exhaustive experiments on these languages demonstrate effective improvements for languages not covered by pretrained models. We show a successful application of the ensembled system on a truly low-resource language, Sanskrit. The code and data are available at: https://github.com/Jivnesh/SanDP
categories: cs.CL
__index_level_0__: 277280
id: 1701.02468
title: Unite the People: Closing the Loop Between 3D and 2D Human Representations
abstract: 3D models provide a common ground for different representations of human bodies. In turn, robust 2D estimation has proven to be a powerful tool to obtain 3D fits "in-the-wild". However, depending on the level of detail, it can be hard or even impossible to acquire labeled data for training 2D estimators at large scale. We propose a hybrid approach to this problem: with an extended version of the recently introduced SMPLify method, we obtain high quality 3D body model fits for multiple human pose datasets. Human annotators solely sort good and bad fits. This procedure leads to an initial dataset, UP-3D, with rich annotations. With a comprehensive set of experiments, we show how this data can be used to train discriminative models that produce results with an unprecedented level of detail: our models predict 31 segments and 91 landmark locations on the body. Using the 91 landmark pose estimator, we present state-of-the-art results for 3D human pose and shape estimation using an order of magnitude less training data and without assumptions about gender or pose in the fitting procedure. We show that UP-3D can be enhanced with these improved fits to grow in quantity and quality, which makes the system deployable at large scale. The data, code and models are available for research purposes.
categories: cs.CV
__index_level_0__: 66555
id: 2408.13966
title: Reducing the Cost: Cross-Prompt Pre-Finetuning for Short Answer Scoring
abstract: Automated Short Answer Scoring (SAS) is the task of automatically scoring a given input to a prompt based on rubrics and reference answers. Although SAS is useful in real-world applications, both rubrics and reference answers differ between prompts, thus requiring new data acquisition and model training for each new prompt. Such requirements are costly, especially for schools and online courses where resources are limited and only a few prompts are used. In this work, we attempt to reduce this cost through a two-phase approach: train a model on existing rubrics and answers with gold score signals, and finetune it on a new prompt. Specifically, given that scoring rubrics and reference answers differ for each prompt, we utilize key phrases, i.e., representative expressions that the answer should contain to receive a higher score, and train a SAS model to learn the relationship between key phrases and answers using already annotated prompts (i.e., cross-prompts). Our experimental results show that finetuning on existing cross-prompt data with key phrases significantly improves scoring accuracy, especially when the training data is limited. Finally, our extensive analysis shows that it is crucial to design the model so that it can learn the task's general properties.
categories: cs.CL
__index_level_0__: 483365
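The key-phrase idea can be illustrated with a trivial rule-based scorer (the paper trains a neural model on key phrase/answer pairs; the rubric and answer below are hypothetical):

```python
def keyphrase_score(answer, key_phrases):
    """Toy SAS scorer: one point per rubric key phrase found in the answer.
    Illustrative only; a trained model would match paraphrases as well."""
    text = answer.lower()
    return sum(phrase.lower() in text for phrase in key_phrases)

rubric = ["photosynthesis", "carbon dioxide", "sunlight"]
print(keyphrase_score("Plants use sunlight and carbon dioxide.", rubric))  # 2
```

The cross-prompt insight is that the phrase-to-score relationship transfers: a model pre-finetuned on other prompts' (key phrase, answer, score) triples needs less data for a new rubric.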
id: 1107.1600
title: On fuzzy syndrome hashing with LDPC coding
abstract: The last decades have seen a growing interest in hash functions that allow some sort of tolerance, e.g. for the purpose of biometric authentication. Among these, the syndrome fuzzy hashing construction makes it possible to securely store biometric data and to perform user authentication without the need to share any secret key. This paper analyzes this model, showing that it offers suitable protection against information leakage and several advantages with respect to similar solutions, such as the fuzzy commitment scheme. Furthermore, the design and characterization of LDPC codes for this purpose are addressed.
categories: cs.IT, cs.CR
__index_level_0__: 11201
2302.02592
RLTP: Reinforcement Learning to Pace for Delayed Impression Modeling in Preloaded Ads
To increase brand awareness, many advertisers conclude contracts with advertising platforms to purchase traffic and then deliver advertisements to target audiences. In a whole delivery period, advertisers usually desire a certain impression count for the ads, and they also expect that the delivery performance is as good as possible (e.g., obtaining high click-through rate). Advertising platforms employ pacing algorithms to satisfy the demands via adjusting the selection probabilities to traffic requests in real-time. However, the delivery procedure is also affected by the strategies from publishers, which cannot be controlled by advertising platforms. Preloading is a widely used strategy for many types of ads (e.g., video ads) to make sure that the response time for displaying after a traffic request is legitimate, which results in a delayed impression phenomenon. Traditional pacing algorithms cannot handle the preloading nature well because they rely on immediate feedback signals, and may fail to guarantee the demands from advertisers. In this paper, we focus on a new research problem of impression pacing for preloaded ads, and propose a Reinforcement Learning To Pace framework RLTP. It learns a pacing agent that sequentially produces selection probabilities in the whole delivery period. To jointly optimize the two objectives of impression count and delivery performance, RLTP employs a tailored reward estimator to satisfy the guaranteed impression count, penalize over-delivery and maximize the traffic value. Experiments on large-scale industrial datasets verify that RLTP outperforms baseline pacing algorithms by a large margin. We have deployed the RLTP framework online to our advertising platform, and results show that it achieves significant uplift to core metrics including delivery completion rate and click-through rate.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
344,055
2105.14156
SMASH: Sparse Matrix Atomic Scratchpad Hashing
Sparse matrices, more specifically SpGEMM kernels, are commonly found in a wide range of applications, spanning graph-based path-finding to machine learning algorithms (e.g., neural networks). A particular challenge in implementing SpGEMM kernels has been the pressure placed on DRAM memory. One approach to tackle this problem is to use an inner product method for the SpGEMM kernel implementation. While the inner product produces fewer intermediate results, it can end up saturating the memory bandwidth, given the high number of redundant fetches of the input matrix elements. Using an outer product-based SpGEMM kernel can reduce redundant fetches, but at the cost of increased overhead due to extra computation and memory accesses for producing/managing partial products. In this thesis, we introduce a novel SpGEMM kernel implementation based on the row-wise product approach. We leverage atomic instructions to merge intermediate partial products as they are generated. The use of atomic instructions eliminates the need to create partial product matrices. To evaluate our row-wise product approach, we map an optimized SpGEMM kernel to a custom accelerator designed to accelerate graph-based applications. The targeted accelerator is an experimental system named PIUMA, being developed by Intel. PIUMA provides several attractive features, including fast context switching, user-configurable caches, globally addressable memory, non-coherent caches, and asynchronous pipelines. We tailor our SpGEMM kernel to exploit many of the features of the PIUMA fabric. This thesis compares our SpGEMM implementation against prior solutions, all mapped to the PIUMA framework. We briefly describe some of the PIUMA architecture features and then delve into the details of our optimized SpGEMM kernel. Our SpGEMM kernel can achieve 9.4x speedup as compared to competing approaches.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
237,551
2409.18402
Embed and Emulate: Contrastive representations for simulation-based inference
Scientific modeling and engineering applications rely heavily on parameter estimation methods to fit physical models and calibrate numerical simulations using real-world measurements. In the absence of analytic statistical models with tractable likelihoods, modern simulation-based inference (SBI) methods first use a numerical simulator to generate a dataset of parameters and simulated outputs. This dataset is then used to approximate the likelihood and estimate the system parameters given observation data. Several SBI methods employ machine learning emulators to accelerate data generation and parameter estimation. However, applying these approaches to high-dimensional physical systems remains challenging due to the cost and complexity of training high-dimensional emulators. This paper introduces Embed and Emulate (E&E): a new SBI method based on contrastive learning that efficiently handles high-dimensional data and complex, multimodal parameter posteriors. E&E learns a low-dimensional latent embedding of the data (i.e., a summary statistic) and a corresponding fast emulator in the latent space, eliminating the need to run expensive simulations or a high dimensional emulator during inference. We illustrate the theoretical properties of the learned latent space through a synthetic experiment and demonstrate superior performance over existing methods in a realistic, non-identifiable parameter estimation task using the high-dimensional, chaotic Lorenz 96 system.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
492,238
2411.19058
Quality Time: Carbon-Aware Quality Adaptation for Energy-Intensive Services
The energy demand of modern cloud services, particularly those related to generative AI, is increasing at an unprecedented pace. While hyperscalers collectively fail to meet their self-imposed emission reduction targets, they face increasing pressure from environmental sustainability reporting across many jurisdictions. To date, carbon-aware computing strategies have primarily focused on batch process scheduling or geo-distributed load balancing. However, such approaches are not applicable to services that require constant availability at specific locations due to latency, privacy, data, or infrastructure constraints. In this paper, we explore how the carbon footprint of energy-intensive services can be reduced by adjusting the fraction of requests served by different service quality tiers. We show that adapting this quality of responses with respect to grid carbon intensity can lead to additional carbon savings beyond resource and energy efficiency. Building on this, we introduce a forecast-based multi-horizon optimization that reaches close-to-optimal carbon savings and is able to automatically adapt service quality for best-effort users to stay within an annual carbon budget. Our approach can reduce the emissions of large-scale LLM services, which we estimate at several tens of thousands of tons of CO2 annually, by up to 10%.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
512,104
2501.11264
Code Readability in the Age of Large Language Models: An Industrial Case Study from Atlassian
Programmers spend a significant amount of time reading code during the software development process. This trend is amplified by the emergence of large language models (LLMs) that automatically generate code. However, little is known about the readability of the LLM-generated code and whether it is still important from practitioners' perspectives in this new era. In this paper, we conduct a survey to explore the practitioners' perspectives on code readability in the age of LLMs and investigate the readability of our LLM-based software development agents framework, HULA, by comparing its generated code with human-written code in real-world scenarios. Overall, the findings underscore that (1) readability remains a critical aspect of software development; (2) the readability of our LLM-generated code is comparable to human-written code, fostering the establishment of appropriate trust and driving the broad adoption of our LLM-powered software development platform.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
525,858
2106.13507
Pilot Contamination Elimination for Channel Estimation with Complete Knowledge of Large-Scale Fading in Downlink Massive MIMO Systems
Massive multiple-input multiple-output (MIMO) is a very important technology for future fifth-generation systems. However, massive MIMO systems are still limited because of pilot contamination (PC), which impacts the data rate due to the non-orthogonality of pilot sequences transmitted by users in the same cell to the neighboring cells. We propose a channel estimation scheme with complete knowledge of large-scale fading by using an orthogonal pilot reuse sequence (PRS) to eliminate PC in edge users with poor channel quality, based on the estimation of large-scale fading and performance analysis of maximum ratio transmission and zero-forcing precoding methods. We derived the lower bounds on the achievable downlink data rate (DR) and signal-to-interference-plus-noise ratio based on assigning PRS to a user grouping that mitigates this problem when the number of antenna elements approaches infinity. The simulation results showed that a high DR can be achieved due to better channel estimation and reduced performance loss.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
243,097
2308.00918
A Novel Cross-Perturbation for Single Domain Generalization
Single domain generalization aims to enhance the ability of the model to generalize to unknown domains when trained on a single source domain. However, the limited diversity in the training data hampers the learning of domain-invariant features, resulting in compromised generalization performance. To address this, data perturbation (augmentation) has emerged as a crucial method to increase data diversity. Nevertheless, existing perturbation methods often focus on either image-level or feature-level perturbations independently, neglecting their synergistic effects. To overcome these limitations, we propose CPerb, a simple yet effective cross-perturbation method. Specifically, CPerb utilizes both horizontal and vertical operations. Horizontally, it applies image-level and feature-level perturbations to enhance the diversity of the training data, mitigating the issue of limited diversity in single-source domains. Vertically, it introduces multi-route perturbation to learn domain-invariant features from different perspectives of samples with the same semantic category, thereby enhancing the generalization capability of the model. Additionally, we propose MixPatch, a novel feature-level perturbation method that exploits local image style information to further diversify the training data. Extensive experiments on various benchmark datasets validate the effectiveness of our method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
383,073
cs/0510063
Markerless Human Motion Capture for Gait Analysis
The aim of our study is to detect balance disorders and a tendency towards falls in the elderly, knowing gait parameters. In this paper we present a new tool for gait analysis based on markerless human motion capture, from camera feeds. The system introduced here recovers the 3D positions of several key points of the human body while walking. Foreground segmentation, an articulated body model and particle filtering are basic elements of our approach. No dynamic model is used; thus this system can be described as generic and simple to implement. A modified particle filtering algorithm, which we call Interval Particle Filtering, is used to reorganise and search through the model's configuration search space in a deterministic optimal way. This algorithm was able to perform human movement tracking with success. Results from the treatment of a single camera feed are shown and compared to results obtained using a marker-based human motion capture system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
539,031
1703.07807
Learning to Partition using Score Based Compatibilities
We study the problem of learning to partition users into groups, where one must learn the compatibilities between the users to achieve optimal groupings. We define four natural objectives that optimize for average and worst case compatibilities and propose new algorithms for adaptively learning optimal groupings. When we do not impose any structure on the compatibilities, we show that the group formation objectives considered are $NP$ hard to solve and we either give approximation guarantees or prove inapproximability results. We then introduce an elegant structure, namely that of \textit{intrinsic scores}, that makes many of these problems polynomial time solvable. We explicitly characterize the optimal groupings under this structure and show that the optimal solutions are related to \emph{homophilous} and \emph{heterophilous} partitions, well-studied in the psychology literature. For one of the four objectives, we show $NP$ hardness under the score structure and give a $\frac{1}{2}$ approximation algorithm for which no constant approximation was known thus far. Finally, under the score structure, we propose an online low sample complexity PAC algorithm for learning the optimal partition. We demonstrate the efficacy of the proposed algorithm on synthetic and real world datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
70,456
1802.05730
Pedestrian-Robot Interaction Experiments in an Exit Corridor
The study of human-robot interaction (HRI) has received increasing research attention for robot navigation in pedestrian crowds. In this paper, we present an empirical study of pedestrian-robot interaction in a uni-directional exit corridor. We deploy a mobile robot moving in a direction perpendicular to that of the pedestrian flow, and install a pedestrian motion tracking system to record the collective motion. We analyze both individual and collective motion of pedestrians, and measure the effect of the robot motion on the overall pedestrian flow. The experimental results show the effect of passive HRI, where the pedestrians' overall speed is slowed down in the presence of the robot, and the faster the robot moves, the lower the average pedestrian velocity becomes. Experiment results show qualitative consistency of the collective HRI effect with simulation results that were previously reported. The study can be used to guide future design of robot-assisted pedestrian evacuation algorithms.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
90,489
1905.04280
A Capacity-achieving One-message Key Agreement With Finite Blocklength Analysis
Information-theoretic secret key agreement (SKA) protocols are a fundamental cryptographic primitive that are used to establish a shared secret key between two or more parties. In a two-party SKA in source model, Alice and Bob have samples of two correlated variables, that are partially leaked to Eve, and their goal is to establish a shared secret key by communicating over a reliable public channel. Eve must have no information about the established key. In this paper, we study the problem of one-message secret key agreement where the key is established by Alice sending a single message to Bob. We propose a one-message SKA (OM-SKA) protocol, prove that it achieves the one-way secret key capacity, and derive finite blocklength approximations of the achievable secret key length. We compare our results with existing OM-SKAs and show the protocol has a unique combination of desirable properties.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
130,420
1801.09665
Cooperative repair: Constructions of optimal MDS codes for all admissible parameters
Two widely studied models of multiple-node repair in distributed storage systems are centralized repair and cooperative repair. The centralized model assumes that all the failed nodes are recreated in one location, while the cooperative one stipulates that the failed nodes may communicate but are distinct, and the amount of data exchanged between them is included in the repair bandwidth. As our first result, we prove a lower bound on the minimum bandwidth of cooperative repair. We also show that the cooperative model is stronger than the centralized one, in the sense that any MDS code with optimal repair bandwidth under the former model also has optimal bandwidth under the latter one. These results were previously known under the additional "uniform download" assumption, which is removed in our proofs. As our main result, we give explicit constructions of MDS codes with optimal cooperative repair for all possible parameters. More precisely, given any $n,k,h,d$ such that $2\le h \le n-d\le n-k$ we construct $(n,k)$ MDS codes over the field $F$ of size $|F|\ge (d+1-k)n$ that can optimally repair any $h$ erasures from any $d$ helper nodes. The repair scheme of our codes involves two rounds of communication. In the first round, each failed node downloads information from the helper nodes, and in the second one, each failed node downloads additional information from the other failed nodes. This implies that our codes achieve the optimal repair bandwidth using the smallest possible number of rounds.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
89,154
2007.12946
Duluth at SemEval-2020 Task 12: Offensive Tweet Identification in English with Logistic Regression
This paper describes the Duluth systems that participated in SemEval--2020 Task 12, Multilingual Offensive Language Identification in Social Media (OffensEval--2020). We participated in the three English language tasks. Our systems provide a simple Machine Learning baseline using logistic regression. We trained our models on the distantly supervised training data made available by the task organizers and used no other resources. As might be expected we did not rank highly in the comparative evaluation: 79th of 85 in Task A, 34th of 43 in Task B, and 24th of 39 in Task C. We carried out a qualitative analysis of our results and found that the class labels in the gold standard data are somewhat noisy. We hypothesize that the extremely high accuracy (> 90%) of the top ranked systems may reflect methods that learn the training data very well but may not generalize to the task of identifying offensive language in English. This analysis includes examples of tweets that despite being mildly redacted are still offensive.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
188,976
1305.0556
A quantum teleportation inspired algorithm produces sentence meaning from word meaning and grammatical structure
We discuss an algorithm which produces the meaning of a sentence given meanings of its words, and its resemblance to quantum teleportation. In fact, this protocol was the main source of inspiration for this algorithm which has many applications in the area of Natural Language Processing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
24,359
2110.14148
Uniform Concentration Bounds toward a Unified Framework for Robust Clustering
Recent advances in center-based clustering continue to improve upon the drawbacks of Lloyd's celebrated $k$-means algorithm over $60$ years after its introduction. Various methods seek to address poor local minima, sensitivity to outliers, and data that are not well-suited to Euclidean measures of fit, but many are supported largely empirically. Moreover, combining such approaches in a piecemeal manner can result in ad hoc methods, and the limited theoretical results supporting each individual contribution may no longer hold. Toward addressing these issues in a principled way, this paper proposes a cohesive robust framework for center-based clustering under a general class of dissimilarity measures. In particular, we present a rigorous theoretical treatment within a Median-of-Means (MoM) estimation framework, showing that it subsumes several popular $k$-means variants. In addition to unifying existing methods, we derive uniform concentration bounds that complete their analyses, and bridge these results to the MoM framework via Dudley's chaining arguments. Importantly, we neither require any assumptions on the distribution of the outlying observations nor on the relative number of observations $n$ to features $p$. We establish strong consistency and an error rate of $O(n^{-1/2})$ under mild conditions, surpassing the best-known results in the literature. The methods are empirically validated thoroughly on real and synthetic datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,430
1211.2037
Time Complexity Analysis of Binary Space Partitioning Scheme for Image Compression
Segmentation-based image coding methods provide high compression ratios when compared with traditional image coding approaches like transform and subband coding for low bit-rate compression applications. In this paper, a segmentation-based image coding method, namely the Binary Space Partition (BSP) scheme, that divides the desired image using a recursive procedure for coding is presented. The BSP approach partitions the desired image recursively by using bisecting lines, selected from a collection of discrete optional lines, in a hierarchical manner. This partitioning procedure generates a binary tree, which is referred to as the BSP-tree representation of the desired image. The algorithm is extremely complex in computation and has high execution time. The time complexity of the BSP scheme is explored in this work.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
19,644
1205.6544
A Brief Summary of Dictionary Learning Based Approach for Classification (revised)
This note presents some representative methods which are based on dictionary learning (DL) for classification. We do not review the sophisticated methods or frameworks that involve DL for classification, such as online DL and spatial pyramid matching (SPM), but rather, we concentrate on the direct DL-based classification methods. Here, the "so-called direct DL-based method" is the approach that directly deals with the DL framework by adding some meaningful penalty terms. By listing some representative methods, we can roughly divide them into two categories, i.e. (1) directly making the dictionary discriminative and (2) forcing the sparse coefficients to be discriminative to push the discrimination power of the dictionary. From this taxonomy, we can expect some extensions of them as future research.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
16,229
cs/0603025
Open Answer Set Programming with Guarded Programs
Open answer set programming (OASP) is an extension of answer set programming where one may ground a program with an arbitrary superset of the program's constants. We define a fixed point logic (FPL) extension of Clark's completion such that open answer sets correspond to models of FPL formulas and identify a syntactic subclass of programs, called (loosely) guarded programs. Whereas reasoning with general programs in OASP is undecidable, the FPL translation of (loosely) guarded programs falls in the decidable (loosely) guarded fixed point logic (mu(L)GF). Moreover, we reduce normal closed ASP to loosely guarded OASP, enabling for the first time, a characterization of an answer set semantics by muLGF formulas. We further extend the open answer set semantics for programs with generalized literals. Such generalized programs (gPs) have interesting properties, e.g., the ability to express infinity axioms. We restrict the syntax of gPs such that both rules and generalized literals are guarded. Via a translation to guarded fixed point logic, we deduce 2-exptime-completeness of satisfiability checking in such guarded gPs (GgPs). Bound GgPs are restricted GgPs with exptime-complete satisfiability checking, but still sufficiently expressive to optimally simulate computation tree logic (CTL). We translate Datalog lite programs to GgPs, establishing equivalence of GgPs under an open answer set semantics, alternation-free muGF, and Datalog lite.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
539,314
2103.04116
A novel approach to the classification of terrestrial drainage networks based on deep learning and preliminary results on Solar System bodies
Several approaches were proposed to describe the geomorphology of drainage networks and the abiotic/biotic factors determining their morphology. There is an intrinsic complexity in explicitly qualifying the morphological variations in response to various types of control factors, and a difficulty in expressing the cause-effect links. Traditional methods of drainage network classification are based on the manual extraction of key characteristics, then applied as pattern recognition schemes. These approaches, however, have limited predictive power and uniformity. We present a different approach, based on data-driven supervised learning from images, extended also to extraterrestrial cases. With deep learning models, the extraction and classification phase is integrated within a more objective, analytical, and automatic framework. Despite the initial difficulties, due to the small number of training images available, and the similarity between the different shapes of the drainage samples, we obtained successful results, concluding that deep learning is a valid way for data exploration in geomorphology and related fields.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
223,529
2201.04494
SensatUrban: Learning Semantics from Urban-Scale Photogrammetric Point Clouds
With the recent availability and affordability of commercial depth sensors and 3D scanners, an increasing number of 3D (i.e., RGBD, point cloud) datasets have been publicized to facilitate research in 3D computer vision. However, existing datasets either cover relatively small areas or have limited semantic annotations. Fine-grained understanding of urban-scale 3D scenes is still in its infancy. In this paper, we introduce SensatUrban, an urban-scale UAV photogrammetry point cloud dataset consisting of nearly three billion points collected from three UK cities, covering 7.6 km^2. Each point in the dataset has been labelled with fine-grained semantic annotations, resulting in a dataset that is three times the size of the previous existing largest photogrammetric point cloud dataset. In addition to the more commonly encountered categories such as road and vegetation, urban-level categories including rail, bridge, and river are also included in our dataset. Based on this dataset, we further build a benchmark to evaluate the performance of state-of-the-art segmentation algorithms. In particular, we provide a comprehensive analysis and identify several key challenges limiting urban-scale point cloud understanding. The dataset is available at http://point-cloud-analysis.cs.ox.ac.uk.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
275,121
2110.08598
A Variational Bayesian Approach to Learning Latent Variables for Acoustic Knowledge Transfer
We propose a variational Bayesian (VB) approach to learning distributions of latent variables in deep neural network (DNN) models for cross-domain knowledge transfer, to address acoustic mismatches between training and testing conditions. Instead of carrying out point estimation in conventional maximum a posteriori estimation with a risk of having a curse of dimensionality in estimating a huge number of model parameters, we focus our attention on estimating a manageable number of latent variables of DNNs via a VB inference framework. To accomplish model transfer, knowledge learnt from a source domain is encoded in prior distributions of latent variables and optimally combined, in a Bayesian sense, with a small set of adaptation data from a target domain to approximate the corresponding posterior distributions. Experimental results on device adaptation in acoustic scene classification show that our proposed VB approach can obtain good improvements on target devices, and consistently outperforms 13 state-of-the-art knowledge transfer algorithms.
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
261,477
2502.12894
CAST: Component-Aligned 3D Scene Reconstruction from an RGB Image
Recovering high-quality 3D scenes from a single RGB image is a challenging task in computer graphics. Current methods often struggle with domain-specific limitations or low-quality object generation. To address these, we propose CAST (Component-Aligned 3D Scene Reconstruction from a Single RGB Image), a novel method for 3D scene reconstruction and recovery. CAST starts by extracting object-level 2D segmentation and relative depth information from the input image, followed by using a GPT-based model to analyze inter-object spatial relationships. This enables the understanding of how objects relate to each other within the scene, ensuring more coherent reconstruction. CAST then employs an occlusion-aware large-scale 3D generation model to independently generate each object's full geometry, using MAE and point cloud conditioning to mitigate the effects of occlusions and partial object information, ensuring accurate alignment with the source image's geometry and texture. To align each object with the scene, the alignment generation model computes the necessary transformations, allowing the generated meshes to be accurately placed and integrated into the scene's point cloud. Finally, CAST incorporates a physics-aware correction step that leverages a fine-grained relation graph to generate a constraint graph. This graph guides the optimization of object poses, ensuring physical consistency and spatial coherence. By utilizing Signed Distance Fields (SDF), the model effectively addresses issues such as occlusions, object penetration, and floating objects, ensuring that the generated scene accurately reflects real-world physical interactions. CAST can be leveraged in robotics, enabling efficient real-to-simulation workflows and providing realistic, scalable simulation environments for robotic systems.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
535,094
2303.00031
Tiny Classifier Circuits: Evolving Accelerators for Tabular Data
A typical machine learning (ML) development cycle for edge computing is to maximise the performance during model training and then minimise the memory/area footprint of the trained model for deployment on edge devices targeting CPUs, GPUs, microcontrollers, or custom hardware accelerators. This paper proposes a methodology for automatically generating predictor circuits for classification of tabular data with comparable prediction performance to conventional ML techniques while using substantially fewer hardware resources and power. The proposed methodology uses an evolutionary algorithm to search over the space of logic gates and automatically generates a classifier circuit with maximised training prediction accuracy. Classifier circuits are so tiny (i.e., consisting of no more than 300 logic gates) that they are called "Tiny Classifier" circuits, and can efficiently be implemented in ASIC or on an FPGA. We empirically evaluate the automatic Tiny Classifier circuit generation methodology or "Auto Tiny Classifiers" on a wide range of tabular datasets, and compare it against conventional ML techniques such as Amazon's AutoGluon, Google's TabNet and a neural search over Multi-Layer Perceptrons. Despite Tiny Classifiers being constrained to a few hundred logic gates, we observe no statistically significant difference in prediction performance in comparison to the best-performing ML baseline. When synthesised as a Silicon chip, Tiny Classifiers use 8-18x less area and 4-8x less power. When implemented as an ultra-low cost chip on a flexible substrate (i.e., FlexIC), they occupy 10-75x less area and consume 13-75x less power compared to the most hardware-efficient ML baseline. On an FPGA, Tiny Classifiers consume 3-11x fewer resources.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
348,462
1707.07151
Optimal Transmit Beamforming for Secure SWIPT in Heterogeneous Networks
This letter investigates the artificial noise aided beamforming design for secure simultaneous wireless information and power transfer (SWIPT) in a two-tier downlink heterogeneous network, where one femtocell is overlaid with one macrocell in co-channel deployment. Each energy receiver (ER) in the femtocell can be considered as a potential eavesdropper for messages intended for the information receiver (IR). Our objective is to maximize the secrecy rate at the IR subject to the signal-to-interference-plus-noise ratio (SINR) requirements of macro users (MUs), a transmit power constraint and an energy harvesting constraint. Due to the non-convexity of the formulated problem, it cannot be solved directly. Thus, we propose a novel reformulation by using first-order Taylor expansion and successive convex approximation (SCA) techniques. Furthermore, an SCA-based algorithm with low complexity is proposed to arrive at a provably convergent solution. Finally, numerical results evaluate the performance of the proposed algorithm.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
77,558
2303.06705
Retinexformer: One-stage Retinex-based Transformer for Low-light Image Enhancement
When enhancing low-light images, many deep learning algorithms are based on the Retinex theory. However, the Retinex model does not consider the corruptions hidden in the dark or introduced by the light-up process. Besides, these methods usually require a tedious multi-stage training pipeline and rely on convolutional neural networks, showing limitations in capturing long-range dependencies. In this paper, we formulate a simple yet principled One-stage Retinex-based Framework (ORF). ORF first estimates the illumination information to light up the low-light image and then restores the corruption to produce the enhanced image. We design an Illumination-Guided Transformer (IGT) that utilizes illumination representations to direct the modeling of non-local interactions of regions with different lighting conditions. By plugging IGT into ORF, we obtain our algorithm, Retinexformer. Comprehensive quantitative and qualitative experiments demonstrate that our Retinexformer significantly outperforms state-of-the-art methods on thirteen benchmarks. The user study and application on low-light object detection also reveal the latent practical values of our method. Code, models, and results are available at https://github.com/caiyuanhao1998/Retinexformer
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
350,960
2201.04733
Adversarially Robust Classification by Conditional Generative Model Inversion
Most adversarial attack defense methods rely on obfuscating gradients. These methods are successful in defending against gradient-based attacks; however, they are easily circumvented by attacks which either do not use the gradient or by attacks which approximate and use the corrected gradient. Defenses that do not obfuscate gradients such as adversarial training exist, but these approaches generally make assumptions about the attack such as its magnitude. We propose a classification model that does not obfuscate gradients and is robust by construction without assuming prior knowledge about the attack. Our method casts classification as an optimization problem where we "invert" a conditional generator trained on unperturbed, natural images to find the class that generates the closest sample to the query image. We hypothesize that a potential source of brittleness against adversarial attacks is the high-to-low-dimensional nature of feed-forward classifiers which allows an adversary to find small perturbations in the input space that lead to large changes in the output space. On the other hand, a generative model is typically a low-to-high-dimensional mapping. While the method is related to Defense-GAN, the use of a conditional generative model and inversion in our model instead of the feed-forward classifier is a critical difference. Unlike Defense-GAN, which was shown to generate obfuscated gradients that are easily circumvented, we show that our method does not obfuscate gradients. We demonstrate that our model is extremely robust against black-box attacks and has improved robustness against white-box attacks compared to naturally trained, feed-forward classifiers.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
275,174
2011.03182
5G Embraces Satellites for 6G Ubiquitous IoT: Basic Models for Integrated Satellite Terrestrial Networks
Terrestrial communication networks mainly focus on users in urban areas but have poor coverage performance in harsh environments, such as mountains, deserts, and oceans. Satellites can be exploited to extend the coverage of terrestrial fifth-generation (5G) networks. However, satellites are restricted by their high latency and relatively low data rate. Consequently, the integration of terrestrial and satellite components has been widely studied, to take advantage of both sides and enable seamless broadband coverage. Due to the significant differences between satellite communications (SatComs) and terrestrial communications (TerComs) in terms of channel fading, transmission delay, mobility, and coverage performance, the establishment of an efficient hybrid satellite-terrestrial network (HSTN) still faces many challenges. In general, it is difficult to decompose a HSTN into a sum of separate satellite and terrestrial links due to the complicated coupling relationships therein. To uncover the complete picture of HSTNs, we regard the HSTN as a combination of basic cooperative models that contain the main traits of satellite-terrestrial integration but are much simpler and thus more tractable than the large-scale heterogeneous HSTNs. In particular, we present three basic cooperative models, i.e., model X, model L, and model V, and provide a survey of the state-of-the-art technologies for each of them. We discuss future research directions towards establishing a cell-free, hierarchical, decoupled HSTN. We also outline open issues to envision an agile, smart, and secure HSTN for the sixth-generation (6G) ubiquitous Internet of Things (IoT).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
205,166
2410.00485
A Hitchhikers Guide to Fine-Grained Face Forgery Detection Using Common Sense Reasoning
Explainability in artificial intelligence is crucial for restoring trust, particularly in areas like face forgery detection, where viewers often struggle to distinguish between real and fabricated content. Vision and Large Language Models (VLLM) bridge computer vision and natural language, offering numerous applications driven by strong common-sense reasoning. Despite their success in various tasks, the potential of vision and language remains underexplored in face forgery detection, where they hold promise for enhancing explainability by leveraging the intrinsic reasoning capabilities of language to analyse fine-grained manipulation areas. As such, there is a need for a methodology that converts face forgery detection to a Visual Question Answering (VQA) task to systematically and fairly evaluate these capabilities. Previous efforts for unified benchmarks in deepfake detection have focused on the simpler binary task, overlooking evaluation protocols for fine-grained detection and text-generative models. We propose a multi-staged approach that diverges from the traditional binary decision paradigm to address this gap. In the first stage, we assess the models' performance on the binary task and their sensitivity to given instructions using several prompts. In the second stage, we delve deeper into fine-grained detection by identifying areas of manipulation in a multiple-choice VQA setting. In the third stage, we convert the fine-grained detection to an open-ended question and compare several matching strategies for the multi-label classification task. Finally, we qualitatively evaluate the fine-grained responses of the VLLMs included in the benchmark. We apply our benchmark to several popular models, providing a detailed comparison of binary, multiple-choice, and open-ended VQA evaluation across seven datasets. \url{https://nickyfot.github.io/hitchhickersguide.github.io/}
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
493,400
1702.08089
Quantum parameter estimation via dispersive measurement in circuit QED
We investigate the quantum parameter estimation in circuit quantum electrodynamics via dispersive measurement. Based on the Metropolis-Hastings (MH) algorithm and the Markov chain Monte Carlo (MCMC) integration, a new algorithm is proposed to calculate the Fisher information by the stochastic master equation for unknown parameter estimation. Here, the Fisher information is expressed in the form of log-likelihood functions and further approximated by the MCMC integration. Numerical results demonstrate that the single evolution of the Fisher information can probably approach the quantum Fisher information. The same phenomenon is observed in the ensemble evolution in the short time interval. These results demonstrate the effectiveness of the proposed algorithm.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
68,913
2207.11652
Counterfactual Reasoning for Out-of-distribution Multimodal Sentiment Analysis
Existing studies on multimodal sentiment analysis heavily rely on textual modality and unavoidably induce the spurious correlations between textual words and sentiment labels. This greatly hinders the model generalization ability. To address this problem, we define the task of out-of-distribution (OOD) multimodal sentiment analysis. This task aims to estimate and mitigate the bad effect of textual modality for strong OOD generalization. To this end, we embrace causal inference, which inspects the causal relationships via a causal graph. From the graph, we find that the spurious correlations are attributed to the direct effect of textual modality on the model prediction while the indirect one is more reliable by considering multimodal semantics. Inspired by this, we devise a model-agnostic counterfactual framework for multimodal sentiment analysis, which captures the direct effect of textual modality via an extra text model and estimates the indirect one by a multimodal model. During the inference, we first estimate the direct effect by the counterfactual inference, and then subtract it from the total effect of all modalities to obtain the indirect effect for reliable prediction. Extensive experiments show the superior effectiveness and generalization ability of our proposed framework.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
309,721
1809.02479
Convolutional Neural Network: Text Classification Model for Open Domain Question Answering System
Recently, machine learning has been applied to almost every data domain, one of which is Question Answering Systems (QAS). A typical Question Answering System is essentially an information retrieval system, which matches documents or text and retrieves the most accurate one. The idea of an open domain question answering system put forth here involves convolutional neural network text classifiers. The classification model presented in this paper is a multi-class text classifier. The neural network classifier can be trained on a large dataset. We report a series of experiments conducted on a Convolutional Neural Network (CNN) by training it on two different datasets. The neural network model is trained on top of word embeddings. A softmax layer is applied to calculate loss and the mapping of semantically related words. The gathered results help justify that the proposed hypothetical QAS is feasible. We further propose a method to integrate the convolutional neural network classifier into an open domain question answering system. The idea of open domain will be further explained, but in general it refers to a system of domain-specific trainable models, thus making it an open domain.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
107,058
1607.00973
A fast marching algorithm for the factored eikonal equation
The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modelling and inversion by Gauss-Newton.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
58,162
1809.10438
Wafer Quality Inspection using Memristive LSTM, ANN, DNN and HTM
The automated wafer inspection and quality control is a complex and time-consuming task, which can be sped up using neuromorphic memristive architectures, either as a separate inspection device or integrated directly into sensors. This paper presents the performance analysis and comparison of different neuromorphic architectures for patterned wafer quality inspection and classification. The application of non-volatile memristive devices in these architectures ensures low power consumption, small on-chip area, and scalability. We demonstrate that Long Short-Term Memory (LSTM) outperforms other architectures for the same number of training iterations, and has relatively low on-chip area and power consumption.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
108,911
2007.06159
Implicit Distributional Reinforcement Learning
To improve the sample efficiency of policy-gradient based reinforcement learning algorithms, we propose implicit distributional actor-critic (IDAC) that consists of a distributional critic, built on two deep generator networks (DGNs), and a semi-implicit actor (SIA), powered by a flexible policy distribution. We adopt a distributional perspective on the discounted cumulative return and model it with a state-action-dependent implicit distribution, which is approximated by the DGNs that take state-action pairs and random noises as their input. Moreover, we use the SIA to provide a semi-implicit policy distribution, which mixes the policy parameters with a reparameterizable distribution that is not constrained by an analytic density function. In this way, the policy's marginal distribution is implicit, providing the potential to model complex properties such as covariance structure and skewness, but its parameter and entropy can still be estimated. We incorporate these features with an off-policy algorithm framework to solve problems with continuous action space and compare IDAC with state-of-the-art algorithms on representative OpenAI Gym environments. We observe that IDAC outperforms these baselines in most tasks. Python code is provided.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,920
2301.13362
Optimizing DDPM Sampling with Shortcut Fine-Tuning
In this study, we propose Shortcut Fine-Tuning (SFT), a new approach for addressing the challenge of fast sampling of pretrained Denoising Diffusion Probabilistic Models (DDPMs). SFT advocates for the fine-tuning of DDPM samplers through the direct minimization of Integral Probability Metrics (IPM), instead of learning the backward diffusion process. This enables samplers to discover an alternative and more efficient sampling shortcut, deviating from the backward diffusion process. Inspired by a control perspective, we propose a new algorithm SFT-PG: Shortcut Fine-Tuning with Policy Gradient, and prove that under certain assumptions, gradient descent of diffusion models with respect to IPM is equivalent to performing policy gradient. To our best knowledge, this is the first attempt to utilize reinforcement learning (RL) methods to train diffusion models. Through empirical evaluation, we demonstrate that our fine-tuning method can further enhance existing fast DDPM samplers, resulting in sample quality comparable to or even surpassing that of the full-step model across various datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
342,876
1301.3850
A Two-round Variant of EM for Gaussian Mixtures
Given a set of possible models (e.g., Bayesian network structures) and a data sample, in the unsupervised model selection problem the task is to choose the most accurate model with respect to the domain joint probability distribution. In contrast to this, in supervised model selection it is a priori known that the chosen model will be used in the future for prediction tasks involving more ``focused'' predictive distributions. Although focused predictive distributions can be produced from the joint probability distribution by marginalization, in practice the best model in the unsupervised sense does not necessarily perform well in supervised domains. In particular, the standard marginal likelihood score is a criterion for the unsupervised task, and, although frequently used for supervised model selection also, does not perform well in such tasks. In this paper we study the performance of the marginal likelihood score empirically in supervised Bayesian network selection tasks by using a large number of publicly available classification data sets, and compare the results to those obtained by alternative model selection criteria, including empirical cross-validation methods, an approximation of a supervised marginal likelihood measure, and a supervised version of Dawid's prequential (predictive sequential) principle. The results demonstrate that the marginal likelihood score does not perform well for supervised model selection, while the best results are obtained by using Dawid's prequential approach.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
21,162
1807.04897
TS2C: Tight Box Mining with Surrounding Segmentation Context for Weakly Supervised Object Detection
This work provides a simple approach to discover tight object bounding boxes with only image-level supervision, called Tight box mining with Surrounding Segmentation Context (TS2C). We observe that object candidates mined through current multiple instance learning methods are usually trapped to discriminative object parts, rather than the entire object. TS2C leverages surrounding segmentation context derived from weakly-supervised segmentation to suppress such low-quality distracting candidates and boost the high-quality ones. Specifically, TS2C is developed based on two key properties of desirable bounding boxes: 1) high purity, meaning most pixels in the box are with high object response, and 2) high completeness, meaning the box covers high object response pixels comprehensively. With such novel and computable criteria, more tight candidates can be discovered for learning a better object detector. With TS2C, we obtain 48.0% and 44.4% mAP scores on VOC 2007 and 2012 benchmarks, which are the new state-of-the-arts.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
102,824
1904.06830
ContactDB: Analyzing and Predicting Grasp Contact via Thermal Imaging
Grasping and manipulating objects is an important human skill. Since hand-object contact is fundamental to grasping, capturing it can lead to important insights. However, observing contact through external sensors is challenging because of occlusion and the complexity of the human hand. We present ContactDB, a novel dataset of contact maps for household objects that captures the rich hand-object contact that occurs during grasping, enabled by use of a thermal camera. Participants in our study grasped 3D printed objects with a post-grasp functional intent. ContactDB includes 3750 3D meshes of 50 household objects textured with contact maps and 375K frames of synchronized RGB-D+thermal images. To the best of our knowledge, this is the first large-scale dataset that records detailed contact maps for human grasps. Analysis of this data shows the influence of functional intent and object size on grasping, the tendency to touch/avoid 'active areas', and the high frequency of palm and proximal finger contact. Finally, we train state-of-the-art image translation and 3D convolution algorithms to predict diverse contact patterns from object shape. Data, code and models are available at https://contactdb.cc.gatech.edu.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
127,645
2402.16240
High-Frequency-aware Hierarchical Contrastive Selective Coding for Representation Learning on Text-attributed Graphs
We investigate node representation learning on text-attributed graphs (TAGs), where nodes are associated with text information. Although recent studies on graph neural networks (GNNs) and pretrained language models (PLMs) have exhibited their power in encoding network and text signals, respectively, less attention has been paid to delicately coupling these two types of models on TAGs. Specifically, existing GNNs rarely model text in each node in a contextualized way; existing PLMs can hardly be applied to characterize graph structures due to their sequence architecture. To address these challenges, we propose HASH-CODE, a High-frequency Aware Spectral Hierarchical Contrastive Selective Coding method that integrates GNNs and PLMs into a unified model. Different from previous "cascaded architectures" that directly add GNN layers upon a PLM, our HASH-CODE relies on five self-supervised optimization objectives to facilitate thorough mutual enhancement between network and text signals in diverse granularities. Moreover, we show that existing contrastive objective learns the low-frequency component of the augmentation graph and propose a high-frequency component (HFC)-aware contrastive learning objective that makes the learned embeddings more distinctive. Extensive experiments on six real-world benchmarks substantiate the efficacy of our proposed approach. In addition, theoretical analysis and item embedding visualization provide insights into our model interoperability.
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
432,478
1803.05084
Local Partition in Rich Graphs
Local graph partitioning is a key graph mining tool that allows researchers to identify small groups of interrelated nodes (e.g. people) and their connective edges (e.g. interactions). Because local graph partitioning is primarily focused on the network structure of the graph (vertices and edges), it often fails to consider the additional information contained in the attributes. In this paper we propose---(i) a scalable algorithm to improve local graph partitioning by taking into account both the network structure of the graph and the attribute data and (ii) an application of the proposed local graph partitioning algorithm (AttriPart) to predict the evolution of local communities (LocalForecasting). Experimental results show that our proposed AttriPart algorithm finds up to 1.6$\times$ denser local partitions, while running approximately 43$\times$ faster than traditional local partitioning techniques (PageRank-Nibble). In addition, our LocalForecasting algorithm shows a significant improvement in the number of nodes and edges correctly predicted over baseline methods.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
92,569
1808.06818
A Usefulness-based Approach for Measuring the Local and Global Effect of IIR Services
In Interactive Information Retrieval (IIR) different services such as search term suggestion can support users in their search process. The applicability and performance of such services is either measured with different user-centered studies (like usability tests or laboratory experiments) or, in the context of IR, with their contribution to measures like precision and recall. However, each evaluation methodology has certain disadvantages. For example, user-centered experiments are often costly and small-scaled; IR experiments rely on relevance assessments and measure only relevance of documents. In this work we operationalize the usefulness model of Cole et al. (2009) on the level of system support to measure not only the local effect of an IR service, but the impact it has on the whole search process. We therefore use a log-based evaluation approach which models user interactions within sessions with positive signals and apply it to the case of a search term suggestion service. We found that the usage of the service significantly more often implies the occurrence of positive signals during the following session steps.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
105,616
2403.00261
Spatial Cascaded Clustering and Weighted Memory for Unsupervised Person Re-identification
Recent unsupervised person re-identification (re-ID) methods achieve high performance by leveraging fine-grained local context. These methods are referred to as part-based methods. However, most part-based methods obtain local contexts through horizontal division, which suffer from misalignment due to various human poses. Additionally, the misalignment of semantic information in part features restricts the use of metric learning, thus affecting the effectiveness of part-based methods. The two issues mentioned above result in the under-utilization of part features in part-based methods. We introduce the Spatial Cascaded Clustering and Weighted Memory (SCWM) method to address these challenges. SCWM aims to parse and align more accurate local contexts for different human body parts while allowing the memory module to balance hard example mining and noise suppression. Specifically, we first analyze the foreground omissions and spatial confusions issues in the previous method. Then, we propose foreground and space corrections to enhance the completeness and reasonableness of the human parsing results. Next, we introduce a weighted memory and utilize two weighting strategies. These strategies address hard sample mining for global features and enhance noise resistance for part features, which enables better utilization of both global and part features. Extensive experiments on Market-1501 and MSMT17 validate the proposed method's effectiveness over many state-of-the-art methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
433,919
1607.07423
A Non-Parametric Control Chart For High Frequency Multivariate Data
Support Vector Data Description (SVDD) is a machine learning technique used for single class classification and outlier detection. The SVDD-based K-chart was first introduced by Sun and Tsung for monitoring multivariate processes when the underlying distribution of process parameters or quality characteristics departs from Normality. The method first trains a SVDD model on data obtained from stable or in-control operations of the process to obtain a threshold $R^2$ and kernel center a. For each new observation, its kernel distance from the kernel center a is calculated. The kernel distance is compared against the threshold $R^2$ to determine if the observation is within the control limits. The non-parametric K-chart provides an attractive alternative to the traditional control charts such as the Hotelling's $T^2$ charts when the distribution of the underlying multivariate data is either non-normal or is unknown. But there are challenges when the K-chart is deployed in practice. The K-chart requires calculating the kernel distance of each new observation but there are no guidelines on how to interpret the kernel distance plot and infer about shifts in process mean or changes in process variation. This limits the application of K-charts in big-data applications such as equipment health monitoring, where observations are generated at a very high frequency. In this scenario, the analyst using the K-chart is inundated with kernel distance results at a very high frequency, generally without any recourse for detecting presence of any assignable causes of variation. We propose a new SVDD-based control chart, called the $K_T$ chart, which addresses challenges encountered when using the K-chart for big-data applications. The $K_T$ charts can be used to simultaneously track process variation and central tendency. We illustrate the successful use of the $K_T$ chart using the Tennessee Eastman process data.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
59,018
2010.10932
Deep learning-based citation recommendation system for patents
In this study, we address the challenges in developing a deep learning-based automatic patent citation recommendation system. Although deep learning-based recommendation systems have exhibited outstanding performance in various domains (such as movies, products, and paper citations), their validity in patent citations has not been investigated, owing to the lack of a freely available high-quality dataset and relevant benchmark model. To solve these problems, we present a novel dataset called PatentNet that includes textual information and metadata for approximately 110,000 patents from the Google Big Query service. Further, we propose strong benchmark models considering the similarity of textual information and metadata (such as cooperative patent classification code). Compared with existing recommendation methods, the proposed benchmark method achieved a mean reciprocal rank of 0.2377 on the test set, whereas the existing state-of-the-art recommendation method achieved 0.2073.
false
false
false
false
true
true
false
false
true
false
false
false
false
false
false
false
false
false
202,059
2210.08266
MenuAI: Restaurant Food Recommendation System via a Transformer-based Deep Learning Model
Food recommendation systems have proven to be an effective technology for providing guidance on dietary choices, which is especially important for patients suffering from chronic diseases. Unlike other multimedia recommendations, such as books and movies, the food recommendation task relies heavily on the context at the moment, since users' food preferences can be highly dynamic over time. For example, individuals tend to eat more calories earlier in the day and a little less at dinner. However, there is still limited research on incorporating both current context and nutritional knowledge for food recommendation. Thus, a novel restaurant food recommendation system is proposed in this paper to recommend food dishes to users according to their special nutritional needs. Our proposed system utilises Optical Character Recognition (OCR) technology and a transformer-based deep learning model, a Learning to Rank (LTR) model, to conduct food recommendation. Given a single RGB image of the menu, the system is able to rank the food dishes in terms of the input search key (e.g., calorie, protein level). Due to the properties of the transformer, our system can also rank unseen food dishes. Comprehensive experiments are conducted to validate our method on a self-constructed menu dataset, known as the MenuRank dataset. The promising results, with accuracy ranging from 77.2% to 99.5%, demonstrate the great potential of the LTR model in addressing food recommendation problems.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
324,067
2412.20977
UnrealZoo: Enriching Photo-realistic Virtual Worlds for Embodied AI
We introduce UnrealZoo, a rich collection of photo-realistic 3D virtual worlds built on Unreal Engine, designed to reflect the complexity and variability of the open worlds. Additionally, we offer a variety of playable entities for embodied AI agents. Based on UnrealCV, we provide a suite of easy-to-use Python APIs and tools for various potential applications, such as data collection, environment augmentation, distributed training, and benchmarking. We optimize the rendering and communication efficiency of UnrealCV to support advanced applications, such as multi-agent interaction. Our experiments benchmark agents in various complex scenes, focusing on visual navigation and tracking, which are fundamental capabilities for embodied visual intelligence. The results yield valuable insights into the advantages of diverse training environments for reinforcement learning (RL) agents and the challenges faced by current embodied vision agents, including those based on RL and large vision-language models (VLMs), in open worlds. These challenges involve latency in closed-loop control in dynamic scenes and reasoning about 3D spatial structures in unstructured terrain.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
521,418
2407.03152
Stereo Risk: A Continuous Modeling Approach to Stereo Matching
We introduce Stereo Risk, a new deep-learning approach to solve the classical stereo-matching problem in computer vision. As it is well-known that stereo matching boils down to a per-pixel disparity estimation problem, the popular state-of-the-art stereo-matching approaches widely rely on regressing the scene disparity values, yet via discretization of scene disparity values. Such discretization often fails to capture the nuanced, continuous nature of scene depth. Stereo Risk departs from the conventional discretization approach by formulating the scene disparity as an optimal solution to a continuous risk minimization problem, hence the name "stereo risk". We demonstrate that $L^1$ minimization of the proposed continuous risk function enhances stereo-matching performance for deep networks, particularly for disparities with multi-modal probability distributions. Furthermore, to enable the end-to-end network training of the non-differentiable $L^1$ risk optimization, we exploited the implicit function theorem, ensuring a fully differentiable network. A comprehensive analysis demonstrates our method's theoretical soundness and superior performance over the state-of-the-art methods across various benchmark datasets, including KITTI 2012, KITTI 2015, ETH3D, SceneFlow, and Middlebury 2014.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
470,044
2409.02070
Explicit Differentiable Slicing and Global Deformation for Cardiac Mesh Reconstruction
Mesh reconstruction of the cardiac anatomy from medical images is useful for shape and motion measurements and biophysics simulations to facilitate the assessment of cardiac function and health. However, 3D medical images are often acquired as 2D slices that are sparsely sampled and noisy, and mesh reconstruction on such data is a challenging task. Traditional voxel-based approaches rely on pre- and post-processing that compromises image fidelity, while mesh-level deep learning approaches require mesh annotations that are difficult to get. Therefore, direct cross-domain supervision from 2D images to meshes is a key technique for advancing 3D learning in medical imaging, but it has not been well-developed. While there have been attempts to approximate the optimized meshes' slicing, few existing methods directly use 2D slices to supervise mesh reconstruction in a differentiable manner. Here, we propose a novel explicit differentiable voxelization and slicing (DVS) algorithm that allows gradient backpropagation to a mesh from its slices, facilitating refined mesh optimization directly supervised by the losses defined on 2D images. Further, we propose an innovative framework for extracting patient-specific left ventricle (LV) meshes from medical images by coupling DVS with a graph harmonic deformation (GHD) mesh morphing descriptor of cardiac shape that naturally preserves mesh quality and smoothness during optimization. Experimental results demonstrate that our method achieves state-of-the-art performance in cardiac mesh reconstruction tasks from CT and MRI, with an overall Dice score of 90% on multi-datasets, outperforming existing approaches. The proposed method can further quantify clinically useful parameters such as ejection fraction and global myocardial strains, closely matching the ground truth and surpassing the traditional voxel-based approach in sparse images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
485,562
2312.15599
Preliminary Study on Incremental Learning for Large Language Model-based Recommender Systems
Adapting Large Language Models for Recommendation (LLM4Rec) has shown promising results. However, the challenges of deploying LLM4Rec in real-world scenarios remain largely unexplored. In particular, recommender models need incremental adaptation to evolving user preferences, while the suitability of traditional incremental learning methods within LLM4Rec remains ambiguous due to the unique characteristics of Large Language Models (LLMs). In this study, we empirically evaluate two commonly employed incremental learning strategies (full retraining and fine-tuning) for LLM4Rec. Surprisingly, neither approach shows significant improvements in the performance of LLM4Rec. Instead of dismissing the role of incremental learning, we attribute the lack of anticipated performance enhancement to a mismatch between the LLM4Rec architecture and incremental learning: LLM4Rec employs a single adaptation module for learning recommendations, limiting its ability to simultaneously capture long-term and short-term user preferences in the incremental learning context. To test this speculation, we introduce a Long- and Short-term Adaptation-aware Tuning (LSAT) framework for incremental learning in LLM4Rec. Unlike the single adaptation module approach, LSAT utilizes two distinct adaptation modules to independently learn long-term and short-term user preferences. Empirical results verify that LSAT enhances performance, thereby validating our speculation. We release our code at: https://github.com/TianhaoShi2001/LSAT.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
418,060
2010.05673
Is Plug-in Solver Sample-Efficient for Feature-based Reinforcement Learning?
It is believed that a model-based approach for reinforcement learning (RL) is the key to reduce sample complexity. However, the understanding of the sample optimality of model-based RL is still largely missing, even for the linear case. This work considers sample complexity of finding an $\epsilon$-optimal policy in a Markov decision process (MDP) that admits a linear additive feature representation, given only access to a generative model. We solve this problem via a plug-in solver approach, which builds an empirical model and plans in this empirical model via an arbitrary plug-in solver. We prove that under the anchor-state assumption, which implies implicit non-negativity in the feature space, the minimax sample complexity of finding an $\epsilon$-optimal policy in a $\gamma$-discounted MDP is $O(K/(1-\gamma)^3\epsilon^2)$, which only depends on the dimensionality $K$ of the feature space and has no dependence on the state or action space. We further extend our results to a relaxed setting where anchor-states may not exist and show that a plug-in approach can be sample efficient as well, providing a flexible approach to design model-based algorithms for RL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
200,220
2308.15932
Attention-based CT Scan Interpolation for Lesion Segmentation of Colorectal Liver Metastases
Small liver lesions common to colorectal liver metastases (CRLMs) are challenging for convolutional neural network (CNN) segmentation models, especially when we have a wide range of slice thicknesses in the computed tomography (CT) scans. Slice thickness of CT images may vary by clinical indication. For example, thinner slices are used for presurgical planning when fine anatomic details of small vessels are required. While keeping the effective radiation dose in patients as low as possible, various slice thicknesses are employed in CRLMs due to their limitations. However, differences in slice thickness across CTs lead to significant performance degradation in CT segmentation models based on CNNs. This paper proposes a novel unsupervised attention-based interpolation model to generate intermediate slices from consecutive triplet slices in CT scans. We integrate segmentation loss during the interpolation model's training to leverage segmentation labels in existing slices to generate middle ones. Unlike common interpolation techniques in CT volumes, our model highlights the regions of interest (liver and lesions) inside the abdominal CT scans in the interpolated slice. Moreover, our model's outputs are consistent with the original input slices while increasing the segmentation performance in two cutting-edge 3D segmentation pipelines. We tested the proposed model on the CRLM dataset to upsample subjects with thick slices and create isotropic volume for our segmentation model. The produced isotropic dataset increases the Dice score in the segmentation of lesions and outperforms other interpolation approaches in terms of interpolation metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
388,852
2006.15998
Distortion based Light-weight Security for Cyber-Physical Systems
In Cyber-Physical Systems (CPS), inference based on communicated data is of critical significance as it can be used to manipulate or damage the control operations by adversaries. This calls for efficient mechanisms for secure transmission of data since control systems are becoming increasingly distributed over larger geographical areas. Distortion based security, recently proposed as one candidate for secure transmissions in CPS, is not only more appropriate for these applications but also quite frugal in terms of prior requirements on shared keys. In this paper, we propose distortion-based metrics to protect CPS communication and show that it is possible to confuse adversaries with just a few bits of pre-shared keys. In particular, we will show that a linear dynamical system can communicate its state in a manner that prevents an eavesdropper from accurately learning the state.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
184,685
2301.06986
Structural Analysis by Modified Signature Matrix for Integro-differential-algebraic Equations
Integro-differential-algebraic equations (IDAEs) are widely used in engineering and analysis applications. When there are hidden constraints in an IDAE, structural analysis is necessary. But if derivatives of dependent variables appear in their integrals, the existing definition of the signature matrix for an IDAE cannot be satisfied. Moreover, if an IDAE has a singular Jacobian matrix after structural analysis by the Sigma-method, improved structural analysis methods are proposed to regularize it. However, the optimal value of an IDAE may be negative, which cannot ensure the termination of the regularization. Furthermore, overestimation of the signature matrix may also lead to the failure of its structural analysis. In this paper, we first redefine the signature matrix and introduce a definition of the degree of freedom for IDAEs, so that the termination of the improved structural analysis methods can be guaranteed. Second, a detection method by points is proposed to deal with the problem of overestimation of the signature matrix. Third, the embedding method is proved to be suitable for structurally unamenable IDAEs, including those types that arise from symbolic cancellation and numerical degeneration. Finally, the global numerical method is applied to an example of a two-stage drive system, which can help find all solutions of IDAEs by witness points. We hope that, through the example of the pendulum curtain, the approach for IDAEs proposed in this paper can be applied to integro-partial-differential-algebraic equations (IPDAEs).
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
340,799
1809.04972
Simulation-based Distributed Coordination Maximization over Networks
In various online/offline multi-agent networked environments, it is common that the system can benefit from coordinating the actions of two interacting agents at some cost of coordination. In this paper, we first formulate an optimization problem that captures the amount of coordination gain at the cost of node activation over networks. This problem is challenging to solve in a distributed manner, since the target gain is a function of the long-term time portion of the inter-coupled activations of two adjacent nodes, and thus a standard Lagrange duality theory is hard to apply to obtain a distributed decomposition as in standard Network Utility Maximization. In this paper, we propose three simulation-based distributed algorithms, each having different update rules, all of which require only one-hop message passing and locally-observed information. The key idea behind this distributedness is a stochastic approximation method that runs a Markov chain simulation incompletely over time, yet provably guarantees convergence to the optimal solution. Next, we provide a game-theoretic framework to interpret our proposed algorithms from a different perspective. We artificially select the payoff function, where the game's Nash equilibrium is asymptotically equal to the socially optimal point, i.e., no Price-of-Anarchy. We show that two stochastically-approximated variants of standard game-learning dynamics overlap with two algorithms developed from the optimization perspective. Finally, we demonstrate our theoretical findings on convergence, optimality, and further features such as a trade-off between efficiency and convergence speed through extensive simulations.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
107,687
2108.08497
Monarch: A Durable Polymorphic Memory For Data Intensive Applications
3D die stacking has often been proposed to build large-scale DRAM-based caches. Unfortunately, the power and performance overheads of DRAM limit the efficiency of high-bandwidth memories. Also, DRAM is facing serious scalability challenges that make alternative technologies more appealing. This paper examines Monarch, a resistive 3D stacked memory based on a novel reconfigurable crosspoint array called XAM. The XAM array is capable of switching between random access and content-addressable modes, which enables Monarch (i) to better utilize the in-package bandwidth and (ii) to satisfy both the random access memory and associative search requirements of various applications. Moreover, the Monarch controller ensures a given target lifetime for the resistive stack. Our simulation results on a set of parallel memory-intensive applications indicate that Monarch outperforms an ideal DRAM caching by 1.21x on average. For in-memory hash table and string matching workloads, Monarch improves performance up to 12x over the conventional high bandwidth memories.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
251,274
1904.01775
Multimodal Representation Learning using Deep Multiset Canonical Correlation
We propose Deep Multiset Canonical Correlation Analysis (dMCCA) as an extension to representation learning using CCA when the underlying signal is observed across multiple (more than two) modalities. We use a deep learning framework to learn non-linear transformations from different modalities to a shared subspace such that the representations maximize the ratio of between- and within-modality covariance of the observations. Unlike linear discriminant analysis, we do not need class information to learn these representations, and we show that this model can be trained for complex data using mini-batches. Using synthetic data experiments, we show that dMCCA can effectively recover the common signal across the different modalities corrupted by multiplicative and additive noise. We also analyze the sensitivity of our model to recover the correlated components with respect to mini-batch size and dimension of the embeddings. Performance evaluation on noisy handwritten datasets shows that our model outperforms other CCA-based approaches and is comparable to deep neural network models trained end-to-end on this dataset.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
126,245
1905.09383
An Optimal Private Stochastic-MAB Algorithm Based on an Optimal Private Stopping Rule
We present a provably optimal differentially private algorithm for the stochastic multi-arm bandit problem, as opposed to the private analogue of the UCB-algorithm [Mishra and Thakurta, 2015; Tossou and Dimitrakakis, 2016] which doesn't meet the recently discovered lower-bound of $\Omega \left(\frac{K\log(T)}{\epsilon} \right)$ [Shariff and Sheffet, 2018]. Our construction is based on a different algorithm, Successive Elimination [Even-Dar et al. 2002], that repeatedly pulls all remaining arms until an arm is found to be suboptimal and is then eliminated. In order to devise a private analogue of Successive Elimination we visit the problem of private stopping rule, that takes as input a stream of i.i.d samples from an unknown distribution and returns a multiplicative $(1 \pm \alpha)$-approximation of the distribution's mean, and prove the optimality of our private stopping rule. We then present the private Successive Elimination algorithm which meets both the non-private lower bound [Lai and Robbins, 1985] and the above-mentioned private lower bound. We also compare empirically the performance of our algorithm with the private UCB algorithm.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
131,716
2412.20960
Rise of Generative Artificial Intelligence in Science
Generative Artificial Intelligence (GenAI, generative AI) has rapidly become available as a tool in scientific research. To explore the use of generative AI in science, we conduct an empirical analysis using OpenAlex. Analyzing GenAI publications and other AI publications from 2017 to 2023, we profile growth patterns, the diffusion of GenAI publications across fields of study, and the geographical spread of scientific research on generative AI. We also investigate team size and international collaborations to explore whether GenAI, as an emerging scientific research area, shows different collaboration patterns compared to other AI technologies. The results indicate that generative AI has experienced rapid growth and increasing presence in scientific publications. The use of GenAI now extends beyond computer science to other scientific research domains. Over the study period, U.S. researchers contributed nearly two-fifths of global GenAI publications. The U.S. is followed by China, with several small and medium-sized advanced economies demonstrating relatively high levels of GenAI deployment in their research publications. Although scientific research overall is becoming increasingly specialized and collaborative, our results suggest that GenAI research groups tend to have slightly smaller team sizes than found in other AI fields. Furthermore, notwithstanding recent geopolitical tensions, GenAI research continues to exhibit levels of international collaboration comparable to other AI technologies.
false
false
false
false
true
true
false
false
false
false
false
false
false
true
false
false
false
false
521,412
2403.13825
Deep Generative Models for Ultra-High Granularity Particle Physics Detector Simulation: A Voyage From Emulation to Extrapolation
Simulating ultra-high-granularity detector responses in Particle Physics represents a critical yet computationally demanding task. This thesis aims to overcome this challenge for the Pixel Vertex Detector (PXD) at the Belle II experiment, which features over 7.5M pixel channels, the highest spatial resolution detector simulation dataset ever analysed with generative models. This thesis starts off with a comprehensive and taxonomic review on generative models for simulating detector signatures. Then, it presents the Intra-Event Aware Generative Adversarial Network (IEA-GAN), a new geometry-aware generative model that introduces relational attentive reasoning and Self-Supervised Learning to approximate an "event" in the detector. This study underscores the importance of intra-event correlation for downstream physics analyses. Building upon this, the work drifts towards a more generic approach and presents YonedaVAE, a Category Theory-inspired generative model that tackles the open problem of Out-of-Distribution (OOD) simulation. YonedaVAE introduces a learnable Yoneda embedding to capture the entirety of an event based on its sensor relationships, formulating a Category theoretical language for intra-event relational reasoning. This is complemented by introducing a Self-Supervised learnable prior for VAEs and an Adaptive Top-q sampling mechanism, enabling the model to sample point clouds with variable intra-category cardinality in a zero-shot manner. Variable intra-event cardinality has not been approached before and is vital for simulating irregular detector geometries. Trained on early experiment data, YonedaVAE can reach a reasonable OOD simulation precision of a later experiment with almost double the luminosity. This study introduces, for the first time, the results of using deep generative models for ultra-high granularity detector simulation in Particle Physics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
439,798
1806.09689
Convex LMI optimization for the uncertain power flow analysis
This paper investigates the uncertain power flow analysis in distribution networks within the context of the integration of renewable power resources such as wind and solar power. The analysis aims to bound the worst-case voltage magnitude in any node of the network for a given uncertain power generation scenario. The major difficulty of this problem is the non-linear nature of the power flow equations. The proposed approach does not require the linearization of these equations and formulates the problem as an optimization problem with polynomial constraints. A new tool to investigate the feasibility of such problems is presented; it is obtained as an extension of the $\mathcal{S}$-procedure, a fundamental result in robustness analysis. A solution to the uncertain power flow analysis problem is proposed using this new tool. The various results of this paper are expressed as LMI optimization problems, which guarantees an efficient numerical resolution, as will be demonstrated through an illustrative example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
101,395
1905.10045
Curriculum Loss: Robust Learning and Generalization against Label Corruption
Deep neural networks (DNNs) have great expressive power, which can even memorize samples with wrong labels. It is vitally important to ensure robustness and generalization in DNNs against label corruption. To this end, this paper studies the 0-1 loss, which has a monotonic relationship with an empirical adversarial (reweighted) risk~\citep{hu2016does}. Although the 0-1 loss has some robustness properties, it is difficult to optimize. To efficiently optimize the 0-1 loss while keeping its robustness properties, we propose a very simple and efficient loss, i.e., the curriculum loss (CL). Our CL is a tighter upper bound of the 0-1 loss compared with conventional summation-based surrogate losses. Moreover, CL can adaptively select samples for model training. As a result, our loss can be viewed as a novel curriculum sample selection strategy, which builds a bridge between curriculum learning and robust learning. Experimental results on benchmark datasets validate the robustness of the proposed loss.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
131,933
1605.00855
Improving Image Captioning by Concept-based Sentence Reranking
This paper describes our winning entry in the ImageCLEF 2015 image sentence generation task. We improve Google's CNN-LSTM model by introducing concept-based sentence reranking, a data-driven approach which exploits the large amounts of concept-level annotations on Flickr. Different from previous usage of concept detection that is tailored to specific image captioning models, the proposed approach reranks predicted sentences in terms of their matches with detected concepts, essentially treating the underlying model as a black box. This property makes the approach applicable to a number of existing solutions. We also experiment with fine-tuning the deep language model, which further improves performance. Scoring a METEOR of 0.1875 on the ImageCLEF 2015 test set, our system outperforms the runner-up (METEOR of 0.1687) by a clear margin.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
55,397
2406.19531
Off-policy Evaluation with Deeply-abstracted States
Off-policy evaluation (OPE) is crucial for assessing a target policy's impact offline before its deployment. However, achieving accurate OPE in large state spaces remains challenging. This paper studies state abstractions -- originally designed for policy learning -- in the context of OPE. Our contributions are three-fold: (i) We define a set of irrelevance conditions central to learning state abstractions for OPE, and derive a backward-model-irrelevance condition for achieving irrelevance in sequential and (marginalized) importance sampling ratios by constructing a time-reversed Markov decision process (MDP). (ii) We propose a novel iterative procedure that sequentially projects the original state space into a smaller space, resulting in a deeply-abstracted state, which substantially simplifies the sample complexity of OPE arising from high cardinality. (iii) We prove the Fisher consistencies of various OPE estimators when applied to our proposed abstract state spaces.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,450
2011.12683
GraphHINGE: Learning Interaction Models of Structured Neighborhood on Heterogeneous Information Network
Heterogeneous information network (HIN) has been widely used to characterize entities of various types and their complex relations. Recent attempts either rely on explicit path reachability to leverage path-based semantic relatedness or graph neighborhood to learn heterogeneous network representations before predictions. These weakly coupled manners overlook the rich interactions among neighbor nodes, which introduces an early summarization issue. In this paper, we propose GraphHINGE (Heterogeneous INteract and aggreGatE), which captures and aggregates the interactive patterns between each pair of nodes through their structured neighborhoods. Specifically, we first introduce Neighborhood-based Interaction (NI) module to model the interactive patterns under the same metapaths, and then extend it to Cross Neighborhood-based Interaction (CNI) module to deal with different metapaths. Next, in order to address the complexity issue on large-scale networks, we formulate the interaction modules via a convolutional framework and learn the parameters efficiently with fast Fourier transform. Furthermore, we design a novel neighborhood-based selection (NS) mechanism, a sampling strategy, to filter high-order neighborhood information based on their low-order performance. The extensive experiments on six different types of heterogeneous graphs demonstrate the performance gains by comparing with state-of-the-arts in both click-through rate prediction and top-N recommendation tasks.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
208,237
2302.05098
Confidence-based Reliable Learning under Dual Noises
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks, where massive labeled images are routinely required for model optimization. Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models. Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images. A naive combination of the two lines of works would suffer from the limitations in both sides, and miss the opportunities to handle the two kinds of noise in parallel. This work provides a first, unified framework for reliable learning under the joint (image, label)-noise. Technically, we develop a confidence-based sample filter to progressively filter out noisy data without the need of pre-specifying noise ratio. Then, we penalize the model uncertainty of the detected noisy data instead of letting the model continue over-fitting the misleading information in them. Experimental results on various challenging synthetic and real-world noisy datasets verify that the proposed method can outperform competing baselines in the aspect of classification performance.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
344,933
2007.04649
Learning to Reweight with Deep Interactions
Recently, the concept of teaching has been introduced into machine learning, in which a teacher model is used to guide the training of a student model (which will be used in real tasks) through data selection, loss function design, etc. Learning to reweight, which is a specific kind of teaching that reweights training data using a teacher model, receives much attention due to its simplicity and effectiveness. In existing learning to reweight works, the teacher model only utilizes shallow/surface information such as training iteration number and loss/accuracy of the student model from training/validation sets, but ignores the internal states of the student model, which limits the potential of learning to reweight. In this work, we propose an improved data reweighting algorithm, in which the student model provides its internal states to the teacher model, and the teacher model returns adaptive weights of training samples to enhance the training of the student model. The teacher model is jointly trained with the student model using meta gradients propagated from a validation set. Experiments on image classification with clean/noisy labels and neural machine translation empirically demonstrate that our algorithm makes significant improvement over previous methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
186,431
2403.01747
Towards Self-Contained Answers: Entity-Based Answer Rewriting in Conversational Search
Conversational information-seeking (CIS) is an emerging paradigm for knowledge acquisition and exploratory search. Traditional web search interfaces enable easy exploration of entities, but this is limited in conversational settings due to the limited-bandwidth interface. This paper explores ways to rewrite answers in CIS, so that users can understand them without having to resort to external services or sources. Specifically, we focus on salient entities -- entities that are central to understanding the answer. As our first contribution, we create a dataset of conversations annotated with entities for saliency. Our analysis of the collected data reveals that the majority of answers contain salient entities. As our second contribution, we propose two answer rewriting strategies aimed at improving the overall user experience in CIS. One approach expands answers with inline definitions of salient entities, making the answer self-contained. The other approach complements answers with follow-up questions, offering users the possibility to learn more about specific entities. Results of a crowdsourcing-based study indicate that rewritten answers are clearly preferred over the original ones. We also find that inline definitions tend to be favored over follow-up questions, but this choice is highly subjective, thereby providing a promising future direction for personalization.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
434,554
2409.09143
DomURLs_BERT: Pre-trained BERT-based Model for Malicious Domains and URLs Detection and Classification
Detecting and classifying suspicious or malicious domain names and URLs is a fundamental task in cybersecurity. To leverage such indicators of compromise, cybersecurity vendors and practitioners often maintain and update blacklists of known malicious domains and URLs. However, blacklists frequently fail to identify emerging and obfuscated threats. Over the past few decades, there has been significant interest in developing machine learning models that automatically detect malicious domains and URLs, addressing the limitations of blacklist maintenance and updates. In this paper, we introduce DomURLs_BERT, a pre-trained BERT-based encoder adapted for detecting and classifying suspicious/malicious domains and URLs. DomURLs_BERT is pre-trained using the Masked Language Modeling (MLM) objective on a large multilingual corpus of URLs, domain names, and a Domain Generation Algorithms (DGA) dataset. In order to assess the performance of DomURLs_BERT, we have conducted experiments on several binary and multi-class classification tasks involving domain names and URLs, covering phishing, malware, DGA, and DNS tunneling. The evaluation results show that the proposed encoder outperforms state-of-the-art character-based deep learning models and cybersecurity-focused BERT models across multiple tasks and datasets. The pre-training dataset, the pre-trained DomURLs_BERT encoder, and the experiments' source code are publicly available.
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
488,194
1004.3071
Subspace Methods for Joint Sparse Recovery
We propose robust and efficient algorithms for the joint sparse recovery problem in compressed sensing, which simultaneously recover the supports of jointly sparse signals from their multiple measurement vectors obtained through a common sensing matrix. In a favorable situation, the unknown matrix, which consists of the jointly sparse signals, has linearly independent nonzero rows. In this case, the MUSIC (MUltiple SIgnal Classification) algorithm, originally proposed by Schmidt for the direction-of-arrival problem in sensor array processing and later proposed and analyzed for joint sparse recovery by Feng and Bresler, provides a guarantee with the minimum number of measurements. We focus instead on the unfavorable but practically significant case of rank defect or ill-conditioning. This situation arises with a limited number of measurement vectors, or with highly correlated signal components. In this case, MUSIC fails, and in practice none of the existing methods can consistently approach the fundamental limit. We propose subspace-augmented MUSIC (SA-MUSIC), which improves on MUSIC so that the support is reliably recovered under such unfavorable conditions. Combined with subspace-based greedy algorithms also proposed and analyzed in this paper, SA-MUSIC provides a computationally efficient algorithm with a performance guarantee. The performance guarantees are given in terms of a version of the restricted isometry property. In particular, we also present a non-asymptotic perturbation analysis of the signal subspace estimation that has been missing in previous studies of MUSIC.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
6,194
2305.00567
Scaling Pareto-Efficient Decision Making Via Offline Multi-Objective RL
The goal of multi-objective reinforcement learning (MORL) is to learn policies that simultaneously optimize multiple competing objectives. In practice, an agent's preferences over the objectives may not be known a priori, and hence, we require policies that can generalize to arbitrary preferences at test time. In this work, we propose a new data-driven setup for offline MORL, where we wish to learn a preference-agnostic policy agent using only a finite dataset of offline demonstrations of other agents and their preferences. The key contributions of this work are two-fold. First, we introduce D4MORL, (D)atasets for MORL that are specifically designed for offline settings. It contains 1.8 million annotated demonstrations obtained by rolling out reference policies that optimize for randomly sampled preferences on 6 MuJoCo environments with 2-3 objectives each. Second, we propose Pareto-Efficient Decision Agents (PEDA), a family of offline MORL algorithms that builds on and extends Decision Transformers via a novel preference-and-return-conditioned policy. Empirically, we show that PEDA closely approximates the behavioral policy on the D4MORL benchmark and provides an excellent approximation of the Pareto front with appropriate conditioning, as measured by the hypervolume and sparsity metrics.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
361,379
2306.03055
Analyzing Syntactic Generalization Capacity of Pre-trained Language Models on Japanese Honorific Conversion
Using Japanese honorifics is challenging because it requires not only knowledge of the grammatical rules but also contextual information, such as social relationships. It remains unclear whether pre-trained large language models (LLMs) can flexibly handle Japanese honorifics like humans. To analyze this, we introduce an honorific conversion task that considers social relationships among people mentioned in a conversation. We construct a Japanese honorifics dataset from problem templates of various sentence structures to investigate the syntactic generalization capacity of GPT-3, one of the leading LLMs, on this task under two settings: fine-tuning and prompt learning. Our results showed that the fine-tuned GPT-3 performed better in a context-aware honorific conversion task than the prompt-based one. The fine-tuned model demonstrated overall syntactic generalizability towards compound honorific sentences, except when tested with the data involving direct speech.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
371,173
2011.02506
The dynamic effect of mechanical losses of actuators on the equations of motion of legged robots
Industrial manipulators do not collapse under their own weight when powered off due to the friction in their joints. Although these mechanisms are effective for stiff position control in pick-and-place, they are inappropriate for legged robots, which must rapidly regulate compliant interactions with the environment. However, no metric exists to quantify the robot's performance degradation due to mechanical losses in the actuators. This letter provides a novel formulation which describes how the efficiency of individual actuators propagates to the equations of motion of the whole robot. We quantitatively demonstrate the intuitive fact that the apparent inertia of the robot increases in the presence of joint friction. We also reproduce the empirical result that robots which employ high gearing and low-efficiency actuators can statically sustain more substantial external loads. We expect that the framework presented here will provide the foundations to design the next generation of legged robots which can effectively interact with the world.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
204,936
1203.3482
Formula-Based Probabilistic Inference
Computing the probability of a formula given the probabilities or weights associated with other formulas is a natural extension of logical inference to the probabilistic setting. Surprisingly, this problem has received little attention in the literature to date, particularly considering that it includes many standard inference problems as special cases. In this paper, we propose two algorithms for this problem: formula decomposition and conditioning, which is an exact method, and formula importance sampling, which is an approximate method. The latter is, to our knowledge, the first application of model counting to approximate probabilistic inference. Unlike conventional variable-based algorithms, our algorithms work in the dual realm of logical formulas. Theoretically, we show that our algorithms can greatly improve efficiency by exploiting the structural information in the formulas. Empirically, we show that they are indeed quite powerful, often achieving substantial performance gains over state-of-the-art schemes.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
14,930
2111.11398
Why Do Self-Supervised Models Transfer? Investigating the Impact of Invariance on Downstream Tasks
Self-supervised learning is a powerful paradigm for representation learning on unlabelled images. A wealth of effective new methods based on instance matching rely on data-augmentation to drive learning, and these have reached a rough agreement on an augmentation scheme that optimises popular recognition benchmarks. However, there is strong reason to suspect that different tasks in computer vision require features to encode different (in)variances, and therefore likely require different augmentation strategies. In this paper, we measure the invariances learned by contrastive methods and confirm that they do learn invariance to the augmentations used and further show that this invariance largely transfers to related real-world changes in pose and lighting. We show that learned invariances strongly affect downstream task performance and confirm that different downstream tasks benefit from polar opposite (in)variances, leading to performance loss when the standard augmentation strategy is used. Finally, we demonstrate that a simple fusion of representations with complementary invariances ensures wide transferability to all the diverse downstream tasks considered.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
267,653
2311.13356
Uncertainty Estimation in Multi-Agent Distributed Learning
Traditionally, IoT edge devices have been perceived primarily as low-power components with limited capabilities for autonomous operations. Yet, with emerging advancements in embedded AI hardware design, a foundational shift paves the way for future possibilities. Thus, the aim of the KDT NEUROKIT2E project is to establish a new open-source framework to further facilitate AI applications on edge devices by developing new methods in quantization, pruning-aware training, and sparsification. These innovations hold the potential to expand the functional range of such devices considerably, enabling them to manage complex Machine Learning (ML) tasks utilizing local resources and laying the groundwork for innovative learning approaches. In the context of 6G's transformative potential, distributed learning among independent agents emerges as a pivotal application, attributed to 6G networks' support for ultra-reliable low-latency communication, enhanced data rates, and advanced edge computing capabilities. Our research focuses on the mechanisms and methodologies that allow edge network-enabled agents to engage in collaborative learning in distributed environments. Particularly, one of the key issues within distributed collaborative learning is determining the degree of confidence in the learning results, considering the spatio-temporal locality of data sets perceived by independent agents.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
409,715