id
stringlengths 9
16
| title
stringlengths 4
278
| abstract
stringlengths 3
4.08k
| cs.HC
bool 2
classes | cs.CE
bool 2
classes | cs.SD
bool 2
classes | cs.SI
bool 2
classes | cs.AI
bool 2
classes | cs.IR
bool 2
classes | cs.LG
bool 2
classes | cs.RO
bool 2
classes | cs.CL
bool 2
classes | cs.IT
bool 2
classes | cs.SY
bool 2
classes | cs.CV
bool 2
classes | cs.CR
bool 2
classes | cs.CY
bool 2
classes | cs.MA
bool 2
classes | cs.NE
bool 2
classes | cs.DB
bool 2
classes | Other
bool 2
classes | __index_level_0__
int64 0
541k
|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2305.03981
|
Pre-training Language Model as a Multi-perspective Course Learner
|
ELECTRA, the generator-discriminator pre-training framework, has achieved impressive semantic construction capability among various downstream tasks. Despite the convincing performance, ELECTRA still faces the challenges of monotonous training and deficient interaction. Generator with only masked language modeling (MLM) leads to biased learning and label imbalance for discriminator, decreasing learning efficiency; no explicit feedback loop from discriminator to generator results in the chasm between these two components, underutilizing the course learning. In this study, a multi-perspective course learning (MCL) method is proposed to fetch a many degrees and visual angles for sample-efficient pre-training, and to fully leverage the relationship between generator and discriminator. Concretely, three self-supervision courses are designed to alleviate inherent flaws of MLM and balance the label in a multi-perspective way. Besides, two self-correction courses are proposed to bridge the chasm between the two encoders by creating a "correction notebook" for secondary-supervision. Moreover, a course soups trial is conducted to solve the "tug-of-war" dynamics problem of MCL, evolving a stronger pre-trained model. Experimental results show that our method significantly improves ELECTRA's average performance by 2.8% and 3.2% absolute points respectively on GLUE and SQuAD 2.0 benchmarks, and overshadows recent advanced ELECTRA-style models under the same settings. The pre-trained MCL model is available at https://huggingface.co/McmanusChen/MCL-base.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 362,590
|
2501.15042
|
SCCD: A Session-based Dataset for Chinese Cyberbullying Detection
|
The rampant spread of cyberbullying content poses a growing threat to societal well-being. However, research on cyberbullying detection in Chinese remains underdeveloped, primarily due to the lack of comprehensive and reliable datasets. Notably, no existing Chinese dataset is specifically tailored for cyberbullying detection. Moreover, while comments play a crucial role within sessions, current session-based datasets often lack detailed, fine-grained annotations at the comment level. To address these limitations, we present a novel Chinese cyber-bullying dataset, termed SCCD, which consists of 677 session-level samples sourced from a major social media platform Weibo. Moreover, each comment within the sessions is annotated with fine-grained labels rather than conventional binary class labels. Empirically, we evaluate the performance of various baseline methods on SCCD, highlighting the challenges for effective Chinese cyberbullying detection.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 527,370
|
2105.08601
|
Graph Neural Networks for Decentralized Multi-Robot Submodular Action
Selection
|
The problem of decentralized multi-robot target tracking asks for jointly selecting actions, e.g., motion primitives, for the robots to maximize target tracking performance with local communications. One major challenge for practical implementations is to make target tracking approaches scalable for large-scale problem instances. In this work, we propose a general-purpose learning architecture toward collaborative target tracking at scale, with decentralized communications. Particularly, our learning architecture leverages a graph neural network (GNN) to capture local interactions of the robots and learns decentralized decision-making for the robots. We train the learning model by imitating an expert solution and implement the resulting model for decentralized action selection involving local observations and communications only. We demonstrate the performance of our GNN-based learning approach in a scenario of active target tracking with large networks of robots. The simulation results show our approach nearly matches the tracking performance of the expert algorithm, and yet runs several orders faster with up to 100 robots. Moreover, it slightly outperforms a decentralized greedy algorithm but runs faster (especially with more than 20 robots). The results also exhibit our approach's generalization capability in previously unseen scenarios, e.g., larger environments and larger networks of robots.
| false
| false
| false
| false
| true
| false
| true
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| 235,809
|
2307.05449
|
On the hull and complementarity of one generator quasi-cyclic codes and
four-circulant codes
|
We study one generator quasi-cyclic codes and four-circulant codes, which are also quasi-cyclic but have two generators. We state the hull dimensions for both classes of codes in terms of the polynomials in their generating elements. We prove results such as the hull dimension of a four-circulant code is even and one-dimensional hull for double-circulant codes, which are special one generator codes, is not possible when the alphabet size $q$ is congruent to 3 mod 4. We also characterize linear complementary pairs among both classes of codes. Computational results on the code families in consideration are provided as well.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 378,745
|
2002.12699
|
Automatic Section Recognition in Obituaries
|
Obituaries contain information about people's values across times and cultures, which makes them a useful resource for exploring cultural history. They are typically structured similarly, with sections corresponding to Personal Information, Biographical Sketch, Characteristics, Family, Gratitude, Tribute, Funeral Information and Other aspects of the person. To make this information available for further studies, we propose a statistical model which recognizes these sections. To achieve that, we collect a corpus of 20058 English obituaries from TheDaily Item, Remembering.CA and The London Free Press. The evaluation of our annotation guidelines with three annotators on 1008 obituaries shows a substantial agreement of Fleiss k = 0.87. Formulated as an automatic segmentation task, a convolutional neural network outperforms bag-of-words and embedding-based BiLSTMs and BiLSTM-CRFs with a micro F1 = 0.81.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 166,116
|
1706.05394
|
A Closer Look at Memorization in Deep Networks
|
We examine the role of memorization in deep learning, drawing connections to capacity, generalization, and adversarial robustness. While deep networks are capable of memorizing noise data, our results suggest that they tend to prioritize learning simple patterns first. In our experiments, we expose qualitative differences in gradient-based optimization of deep neural networks (DNNs) on noise vs. real data. We also demonstrate that for appropriately tuned explicit regularization (e.g., dropout) we can degrade DNN training performance on noise datasets without compromising generalization on real data. Our analysis suggests that the notions of effective capacity which are dataset independent are unlikely to explain the generalization performance of deep networks when trained with gradient based methods because training data itself plays an important role in determining the degree of memorization.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 75,505
|
2006.15935
|
Is Japanese gendered language used on Twitter ? A large scale study
|
This study analyzes the usage of Japanese gendered language on Twitter. Starting from a collection of 408 million Japanese tweets from 2015 till 2019 and an additional sample of 2355 manually classified Twitter accounts timelines into gender and categories (politicians, musicians, etc). A large scale textual analysis is performed on this corpus to identify and examine sentence-final particles (SFPs) and first-person pronouns appearing in the texts. It turns out that gendered language is in fact used also on Twitter, in about 6% of the tweets, and that the prescriptive classification into "male" and "female" language does not always meet the expectations, with remarkable exceptions. Further, SFPs and pronouns show increasing or decreasing trends, indicating an evolution of the language used on Twitter.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 184,664
|
2309.12854
|
ThinResNet: A New Baseline for Structured Convolutional Networks Pruning
|
Pruning is a compression method which aims to improve the efficiency of neural networks by reducing their number of parameters while maintaining a good performance, thus enhancing the performance-to-cost ratio in nontrivial ways. Of particular interest are structured pruning techniques, in which whole portions of parameters are removed altogether, resulting in easier to leverage shrunk architectures. Since its growth in popularity in the recent years, pruning gave birth to countless papers and contributions, resulting first in critical inconsistencies in the way results are compared, and then to a collective effort to establish standardized benchmarks. However, said benchmarks are based on training practices that date from several years ago and do not align with current practices. In this work, we verify how results in the recent literature of pruning hold up against networks that underwent both state-of-the-art training methods and trivial model scaling. We find that the latter clearly and utterly outperform all the literature we compared to, proving that updating standard pruning benchmarks and re-evaluating classical methods in their light is an absolute necessity. We thus introduce a new challenging baseline to compare structured pruning to: ThinResNet.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 393,956
|
1606.02809
|
On Uplink User Capacity for Massive MIMO Cellular Networks
|
Under the conditions where performance in a massive MIMO network is limited by pilot contamination, the reverse link signal-to-interference ratio (SIR) exhibits different distributions when using different pilot allocation schemes. By utilising different sets of orthogonal pilot sequences, as opposed to reused sequences amongst adjacent cells, the resulting SIR distribution is more favourable with respect to maximising the number of users on the network while maintaining a given quality of service (QoS) for all users. This paper provides a simple expression for uplink user capacity on such networks and presents uplink user capacity figures for both pilot allocation schemes for a selection of quality of service targets.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 57,017
|
1802.03644
|
Learning to Match via Inverse Optimal Transport
|
We propose a unified data-driven framework based on inverse optimal transport that can learn adaptive, nonlinear interaction cost function from noisy and incomplete empirical matching matrix and predict new matching in various matching contexts. We emphasize that the discrete optimal transport plays the role of a variational principle which gives rise to an optimization-based framework for modeling the observed empirical matching data. Our formulation leads to a non-convex optimization problem which can be solved efficiently by an alternating optimization method. A key novel aspect of our formulation is the incorporation of marginal relaxation via regularized Wasserstein distance, significantly improving the robustness of the method in the face of noisy or missing empirical matching data. Our model falls into the category of prescriptive models, which not only predict potential future matching, but is also able to explain what leads to empirical matching and quantifies the impact of changes in matching factors. The proposed approach has wide applicability including predicting matching in online dating, labor market, college application and crowdsourcing. We back up our claims with numerical experiments on both synthetic data and real world data sets.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 90,029
|
1804.04093
|
SHAPED: Shared-Private Encoder-Decoder for Text Style Adaptation
|
Supervised training of abstractive language generation models results in learning conditional probabilities over language sequences based on the supervised training signal. When the training signal contains a variety of writing styles, such models may end up learning an 'average' style that is directly influenced by the training data make-up and cannot be controlled by the needs of an application. We describe a family of model architectures capable of capturing both generic language characteristics via shared model parameters, as well as particular style characteristics via private model parameters. Such models are able to generate language according to a specific learned style, while still taking advantage of their power to model generic language phenomena. Furthermore, we describe an extension that uses a mixture of output distributions from all learned styles to perform on-the fly style adaptation based on the textual input alone. Experimentally, we find that the proposed models consistently outperform models that encapsulate single-style or average-style language generation capabilities.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 94,757
|
2202.11757
|
Degradation-Reducing Control for Dynamically Reconfigurable Batteries
|
Cascaded circuits such as modular multilevel con-verters (MMC) offer attractive qualities in reconfigurable battery applications. In contrast to conventional hard-wired dc battery packs, the MMC topology loads modules with ac current, which may lead to additional ageing of batteries. As recent studies reveal, such ageing of batteries occurs at low-frequency load ripple, and almost vanishes at high frequencies. State of the art in MMC bat-tery control focuses on state of charge and temperature balancing of individual modules. Previous methods to suppress ripple rely on slow feedback loops and low dynamics, which tends to form low-frequency patterns in the module load that negatively contribute to their ageing. This paper presents a novel module-current-oriented high-bandwidth control technique which minimizes low-frequency components in the module load spectrum. The control method respects limitations related to module data acquisition and enhanc-es the feedback bandwidth using observation techniques. We verify the proposed method experimentally on a laboratory setup and estimate the influence on the battery cells.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 281,983
|
1604.01651
|
Representing model inadequacy: A stochastic operator approach
|
Mathematical models of physical systems are subject to many uncertainties such as measurement errors and uncertain initial and boundary conditions. After accounting for these uncertainties, it is often revealed that discrepancies between the model output and the observations remain; if so, the model is said to be inadequate. In practice, the inadequate model may be the best that is available or tractable, and so despite its inadequacy the model may be used to make predictions of unobserved quantities. In this case, a representation of the inadequacy is necessary, so the impact of the observed discrepancy can be determined. We investigate this problem in the context of chemical kinetics and propose a new technique to account for model inadequacy that is both probabilistic and physically meaningful. A stochastic inadequacy operator $\mathcal{S}$ is introduced which is embedded in the ODEs describing the evolution of chemical species concentrations and which respects certain physical constraints such as conservation laws. The parameters of $\mathcal{S}$ are governed by probability distributions, which in turn are characterized by a set of hyperparameters. The model parameters and hyperparameters are calibrated using high-dimensional hierarchical Bayesian inference. We apply the method to a typical problem in chemical kinetics---the reaction mechanism of hydrogen combustion.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 54,220
|
1104.5391
|
On Optimality of Greedy Policy for a Class of Standard Reward Function
of Restless Multi-armed Bandit Problem
|
In this paper,we consider the restless bandit problem, which is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. However, it is known be PSPACE-Hard to approximate to any non-trivial factor. Thus the optimality is very difficult to obtain due to its high complexity. A natural method is to obtain the greedy policy considering its stability and simplicity. However, the greedy policy will result in the optimality loss for its intrinsic myopic behavior generally. In this paper, by analyzing one class of so-called standard reward function, we establish the closed-form condition about the discounted factor \beta such that the optimality of the greedy policy is guaranteed under the discounted expected reward criterion, especially, the condition \beta = 1 indicating the optimality of the greedy policy under the average accumulative reward criterion. Thus, the standard form of reward function can easily be used to judge the optimality of the greedy policy without any complicated calculation. Some examples in cognitive radio networks are presented to verify the effectiveness of the mathematical result in judging the optimality of the greedy policy.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 10,156
|
1810.05561
|
Optimal Source Codes for Timely Updates
|
A transmitter observing a sequence of independent and identically distributed random variables seeks to keep a receiver updated about its latest observations. The receiver need not be apprised about each symbol seen by the transmitter, but needs to output a symbol at each time instant $t$. If at time $t$ the receiver outputs the symbol seen by the transmitter at time $U(t)\leq t$, the age of information at the receiver at time $t$ is $t-U(t)$. We study the design of lossless source codes that enable transmission with minimum average age at the receiver. We show that the asymptotic minimum average age can be attained up to a constant gap by the Shannon codes for a tilted version of the original pmf generating the symbols, which can be computed easily by solving an optimization problem. Furthermore, we exhibit an example with alphabet $\X$ where Shannon codes for the original pmf incur an asymptotic average age of a factor $O(\sqrt{\log |\X|})$ more than that achieved by our codes. Underlying our prescription for optimal codes is a new variational formula for integer moments of random variables, which may be of independent interest. Also, we discuss possible extensions of our formulation to randomized schemes and to the erasure channel, and include a treatment of the related problem of source coding for minimum average queuing delay.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 110,259
|
1609.02226
|
Fitted Learning: Models with Awareness of their Limits
|
Though deep learning has pushed the boundaries of classification forward, in recent years hints of the limits of standard classification have begun to emerge. Problems such as fooling, adding new classes over time, and the need to retrain learning models only for small changes to the original problem all point to a potential shortcoming in the classic classification regime, where a comprehensive a priori knowledge of the possible classes or concepts is critical. Without such knowledge, classifiers misjudge the limits of their knowledge and overgeneralization therefore becomes a serious obstacle to consistent performance. In response to these challenges, this paper extends the classic regime by reframing classification instead with the assumption that concepts present in the training set are only a sample of the hypothetical final set of concepts. To bring learning models into this new paradigm, a novel elaboration of standard architectures called the competitive overcomplete output layer (COOL) neural network is introduced. Experiments demonstrate the effectiveness of COOL by applying it to fooling, separable concept learning, one-class neural networks, and standard classification benchmarks. The results suggest that, unlike conventional classifiers, the amount of generalization in COOL networks can be tuned to match the problem.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| 60,704
|
2009.09488
|
Consistency, Acyclicity, and Positive Semirings
|
In several different settings, one comes across situations in which the objects of study are locally consistent but globally inconsistent. Earlier work about probability distributions by Vorob'ev (1962) and about database relations by Beeri, Fagin, Maier, Yannakakis (1983) produced characterizations of when local consistency always implies global consistency. Towards a common generalization of these results, we consider K-relations, that is, relations over a set of attributes such that each tuple in the relation is associated with an element from an arbitrary, but fixed, positive semiring K. We introduce the notions of projection of a K-relation, consistency of two K-relations, and global consistency of a collection of K-relations; these notions are natural extensions of the corresponding notions about probability distributions and database relations. We then show that a collection of sets of attributes has the property that every pairwise consistent collection of K-relations over those attributes is globally consistent if and only if the sets of attributes form an acyclic hypergraph. This generalizes the aforementioned results by Vorob'ev and by Beeri et al., and demonstrates that K-relations over positive semirings constitute a natural framework for the study of the interplay between local and global consistency. In the course of the proof, we introduce a notion of join of two K-relations and argue that it is the "right" generalization of the join of two database relations. Furthermore, to show that non-acyclic hypergraphs yield pairwise consistent K-relations that are globally inconsistent, we generalize a construction by Tseitin (1968) in his study of hard-to-prove tautologies in propositional logic.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| 196,598
|
2406.00980
|
Selectively Answering Visual Questions
|
Recently, large multi-modal models (LMMs) have emerged with the capacity to perform vision tasks such as captioning and visual question answering (VQA) with unprecedented accuracy. Applications such as helping the blind or visually impaired have a critical need for precise answers. It is specially important for models to be well calibrated and be able to quantify their uncertainty in order to selectively decide when to answer and when to abstain or ask for clarifications. We perform the first in-depth analysis of calibration methods and metrics for VQA with in-context learning LMMs. Studying VQA on two answerability benchmarks, we show that the likelihood score of visually grounded models is better calibrated than in their text-only counterparts for in-context learning, where sampling based methods are generally superior, but no clear winner arises. We propose Avg BLEU, a calibration score combining the benefits of both sampling and likelihood methods across modalities.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 460,110
|
2303.17568
|
CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual
Benchmarking on HumanEval-X
|
Large pre-trained code generation models, such as OpenAI Codex, can generate syntax- and function-correct code, making the coding of programmers more productive and our pursuit of artificial general intelligence closer. In this paper, we introduce CodeGeeX, a multilingual model with 13 billion parameters for code generation. CodeGeeX is pre-trained on 850 billion tokens of 23 programming languages as of June 2022. Our extensive experiments suggest that CodeGeeX outperforms multilingual code models of similar scale for both the tasks of code generation and translation on HumanEval-X. Building upon HumanEval (Python only), we develop the HumanEval-X benchmark for evaluating multilingual models by hand-writing the solutions in C++, Java, JavaScript, and Go. In addition, we build CodeGeeX-based extensions on Visual Studio Code, JetBrains, and Cloud Studio, generating 4.7 billion tokens for tens of thousands of active users per week. Our user study demonstrates that CodeGeeX can help to increase coding efficiency for 83.4% of its users. Finally, CodeGeeX is publicly accessible and in Sep. 2022, we open-sourced its code, model weights (the version of 850B tokens), API, extensions, and HumanEval-X at https://github.com/THUDM/CodeGeeX.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 355,252
|
2305.17575
|
Mobile Safety Application for Pedestrians
|
Vulnerable Road User (VRU) safety has been an important issue throughout the years as corresponding fatality numbers in traffic have been increasing each year. With the developments in connected vehicle technology, there are new and easier ways of implementing Vehicle to Everything (V2X) communication which can be utilized to provide safety and early warning benefits for VRUs. Mobile phones are one important point of interest with their sensors being increased in quantity and quality and improved in terms of accuracy. Bluetooth and extended Bluetooth technology in mobile phones has enhanced support to carry larger chunks of information to longer distances. The work we discuss in this paper is related to a mobile application that utilizes the mobile phone sensors and Bluetooth communication to implement Personal Safety Message (PSM) broadcast using the SAE J2735 standard to create a Pedestrian to Vehicle (P2V) based safety warning structure. This implementation allows the drivers to receive a warning on their mobile phones and be more careful about the pedestrian intending to cross the street. As a result, the driver has much more time to safely slow down and stop at the intersection. Most importantly, thanks to the wireless nature of Bluetooth connection and long-range mode in Bluetooth 5.0, most dangerous cases such as reduced visibility or No-Line-of-Sight (NLOS) conditions can be remedied.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 368,659
|
1711.09485
|
SkipNet: Learning Dynamic Routing in Convolutional Networks
|
While deeper convolutional networks are needed to achieve maximum accuracy in visual perception tasks, for many inputs shallower networks are sufficient. We exploit this observation by learning to skip convolutional layers on a per-input basis. We introduce SkipNet, a modified residual network, that uses a gating network to selectively skip convolutional blocks based on the activations of the previous layer. We formulate the dynamic skipping problem in the context of sequential decision making and propose a hybrid learning algorithm that combines supervised learning and reinforcement learning to address the challenges of non-differentiable skipping decisions. We show SkipNet reduces computation by 30-90% while preserving the accuracy of the original model on four benchmark datasets and outperforms the state-of-the-art dynamic networks and static compression methods. We also qualitatively evaluate the gating policy to reveal a relationship between image scale and saliency and the number of layers skipped.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 85,413
|
2409.16984
|
AXCEL: Automated eXplainable Consistency Evaluation using LLMs
|
Large Language Models (LLMs) are widely used in both industry and academia for various tasks, yet evaluating the consistency of generated text responses continues to be a challenge. Traditional metrics like ROUGE and BLEU show a weak correlation with human judgment. More sophisticated metrics using Natural Language Inference (NLI) have shown improved correlations but are complex to implement, require domain-specific training due to poor cross-domain generalization, and lack explainability. More recently, prompt-based metrics using LLMs as evaluators have emerged; while they are easier to implement, they still lack explainability and depend on task-specific prompts, which limits their generalizability. This work introduces Automated eXplainable Consistency Evaluation using LLMs (AXCEL), a prompt-based consistency metric which offers explanations for the consistency scores by providing detailed reasoning and pinpointing inconsistent text spans. AXCEL is also a generalizable metric which can be adopted to multiple tasks without changing the prompt. AXCEL outperforms both non-prompt and prompt-based state-of-the-art (SOTA) metrics in detecting inconsistencies across summarization by 8.7%, free text generation by 6.2%, and data-to-text conversion tasks by 29.4%. We also evaluate the influence of underlying LLMs on prompt based metric performance and recalibrate the SOTA prompt-based metrics with the latest LLMs for fair comparison. Further, we show that AXCEL demonstrates strong performance using open source LLMs.
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 491,605
|
2306.09147
|
Probabilistic Learning of Multivariate Time Series with Temporal
Irregularity
|
Probabilistic forecasting of multivariate time series is essential for various downstream tasks. Most existing approaches rely on the sequences being uniformly spaced and aligned across all variables. However, real-world multivariate time series often suffer from temporal irregularities, including nonuniform intervals and misaligned variables, which pose significant challenges for accurate forecasting. To address these challenges, we propose an end-to-end framework that models temporal irregularities while capturing the joint distribution of variables at arbitrary continuous-time points. Specifically, we introduce a dynamic conditional continuous normalizing flow to model data distributions in a non-parametric manner, accommodating the complex, non-Gaussian characteristics commonly found in real-world datasets. Then, by leveraging a carefully factorized log-likelihood objective, our approach captures both temporal and cross-sectional dependencies efficiently. Extensive experiments on a range of real-world datasets demonstrate the superiority and adaptability of our method compared to existing approaches.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 373,696
|
1503.02302
|
DESAT: an SSW tool for SDO/AIA image de-saturation
|
Saturation affects a significant rate of images recorded by the Atmospheric Imaging Assembly on the Solar Dynamics Observatory. This paper describes a computational method and a technological pipeline for the de-saturation of such images, based on several mathematical ingredients like Expectation Maximization, image correlation and interpolation. An analysis of the computational properties and demands of the pipeline, together with an assessment of its reliability are performed against a set of data recorded from the Feburary 25 2014 flaring event.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 40,920
|
2302.14390
|
Your time series is worth a binary image: machine vision assisted deep
framework for time series forecasting
|
Time series forecasting (TSF) has been a challenging research area, and various models have been developed to address this task. However, almost all these models are trained with numerical time series data, which is not as effectively processed by the neural system as visual information. To address this challenge, this paper proposes a novel machine vision assisted deep time series analysis (MV-DTSA) framework. The MV-DTSA framework operates by analyzing time series data in a novel binary machine vision time series metric space, which includes a mapping and an inverse mapping function from the numerical time series space to the binary machine vision space, and a deep machine vision model designed to address the TSF task in the binary space. A comprehensive computational analysis demonstrates that the proposed MV-DTSA framework outperforms state-of-the-art deep TSF models, without requiring sophisticated data decomposition or model customization. The code for our framework is accessible at https://github.com/IkeYang/ machine-vision-assisted-deep-time-series-analysis-MV-DTSA-.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 348,276
|
2402.06733
|
NICE: To Optimize In-Context Examples or Not?
|
Recent work shows that in-context learning and optimization of in-context examples (ICE) can significantly improve the accuracy of large language models (LLMs) on a wide range of tasks, leading to an apparent consensus that ICE optimization is crucial for better performance. However, most of these studies assume a fixed or no instruction provided in the prompt. We challenge this consensus by investigating the necessity of optimizing ICE when task-specific instructions are provided and find that there are many tasks for which it yields diminishing returns. In particular, using a diverse set of tasks and a systematically created instruction set with gradually added details, we find that as the prompt instruction becomes more detailed, the returns on ICE optimization diminish. To characterize this behavior, we introduce a task-specific metric called Normalized Invariability to Choice of Examples (NICE) that quantifies the learnability of tasks from a given instruction, and provides a heuristic to help decide whether to optimize instructions or ICE for a new task. Given a task, the proposed metric can reliably predict the utility of optimizing ICE compared to using random ICE. Our code is available at https://github.com/microsoft/nice-icl.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 428,415
|
2305.14764
|
Detection of Non-uniformity in Parameters for Magnetic Domain Pattern
Generation by Machine Learning
|
We estimate the spatial distribution of heterogeneous physical parameters involved in the formation of magnetic domain patterns of polycrystalline thin films by using convolutional neural networks. We propose a method to obtain a spatial map of physical parameters by estimating the parameters from patterns within a small subregion window of the full magnetic domain and subsequently shifting this window. To enhance the accuracy of parameter estimation in such subregions, we employ large-scale models utilized for natural image classification and exploit the benefits of pretraining. Using a model with high estimation accuracy on these subregions, we conduct inference on simulation data featuring spatially varying parameters and demonstrate the capability to detect such parameter variations.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 367,241
|
2310.18899
|
Multi-task deep learning for large-scale building detail extraction from
high-resolution satellite imagery
|
Understanding urban dynamics and promoting sustainable development requires comprehensive insights about buildings. While geospatial artificial intelligence has advanced the extraction of such details from Earth observational data, existing methods often suffer from computational inefficiencies and inconsistencies when compiling unified building-related datasets for practical applications. To bridge this gap, we introduce the Multi-task Building Refiner (MT-BR), an adaptable neural network tailored for simultaneous extraction of spatial and attributional building details from high-resolution satellite imagery, exemplified by building rooftops, urban functional types, and roof architectural types. Notably, MT-BR can be fine-tuned to incorporate additional building details, extending its applicability. For large-scale applications, we devise a novel spatial sampling scheme that strategically selects limited but representative image samples. This process optimizes both the spatial distribution of samples and the urban environmental characteristics they contain, thus enhancing extraction effectiveness while curtailing data preparation expenditures. We further enhance MT-BR's predictive performance and generalization capabilities through the integration of advanced augmentation techniques. Our quantitative results highlight the efficacy of the proposed methods. Specifically, networks trained with datasets curated via our sampling method demonstrate improved predictive accuracy relative to those using alternative sampling approaches, with no alterations to network architecture. Moreover, MT-BR consistently outperforms other state-of-the-art methods in extracting building details across various metrics. The real-world practicality is also demonstrated in an application across Shanghai, generating a unified dataset that encompasses both the spatial and attributional details of buildings.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 403,755
|
1902.10756
|
Insights into LSTM Fully Convolutional Networks for Time Series
Classification
|
Long Short Term Memory Fully Convolutional Neural Networks (LSTM-FCN) and Attention LSTM-FCN (ALSTM-FCN) have shown to achieve state-of-the-art performance on the task of classifying time series signals on the old University of California-Riverside (UCR) time series repository. However, there has been no study on why LSTM-FCN and ALSTM-FCN perform well. In this paper, we perform a series of ablation tests (3627 experiments) on LSTM-FCN and ALSTM-FCN to provide a better understanding of the model and each of its sub-module. Results from the ablation tests on ALSTM-FCN and LSTM-FCN show that the LSTM and the FCN blocks perform better when applied in a conjoined manner. Two z-normalizing techniques, z-normalizing each sample independently and z-normalizing the whole dataset, are compared using a Wilcoxson signed-rank test to show a statistical difference in performance. In addition, we provide an understanding of the impact dimension shuffle has on LSTM-FCN by comparing its performance with LSTM-FCN when no dimension shuffle is applied. Finally, we demonstrate the performance of the LSTM-FCN when the LSTM block is replaced by a GRU, basic RNN, and Dense Block.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 122,762
|
2401.01288
|
Physics-informed Generalizable Wireless Channel Modeling with
Segmentation and Deep Learning: Fundamentals, Methodologies, and Challenges
|
Channel modeling is fundamental in advancing wireless systems and has thus attracted considerable research focus. Recent trends have seen a growing reliance on data-driven techniques to facilitate the modeling process and yield accurate channel predictions. In this work, we first provide a concise overview of data-driven channel modeling methods, highlighting their limitations. Subsequently, we introduce the concept and advantages of physics-informed neural network (PINN)-based modeling and a summary of recent contributions in this area. Our findings demonstrate that PINN-based approaches in channel modeling exhibit promising attributes such as generalizability, interpretability, and robustness. We offer a comprehensive architecture for PINN methodology, designed to inform and inspire future model development. A case-study of our recent work on precise indoor channel prediction with semantic segmentation and deep learning is presented. The study concludes by addressing the challenges faced and suggesting potential research directions in this field.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| 419,304
|
1907.07556
|
Optimal Secure Control with Linear Temporal Logic Constraints
|
Prior work on automatic control synthesis for cyber-physical systems under logical constraints has primarily focused on environmental disturbances or modeling uncertainties, however, the impact of deliberate and malicious attacks has been less studied. In this paper, we consider a discrete-time dynamical system with a linear temporal logic (LTL) constraint in the presence of an adversary, which is modeled as a stochastic game. We assume that the adversary observes the control policy before choosing an attack strategy. We investigate two problems. In the first problem, we synthesize a robust control policy for the stochastic game that maximizes the probability of satisfying the LTL constraint. A value iteration based algorithm is proposed to compute the optimal control policy. In the second problem, we focus on a subclass of LTL constraints, which consist of an arbitrary LTL formula and an invariant constraint. We then investigate the problem of computing a control policy that minimizes the expected number of invariant constraint violations while maximizing the probability of satisfying the arbitrary LTL constraint. We characterize the optimality condition for the desired control policy. A policy iteration based algorithm is proposed to compute the control policy. We illustrate the proposed approaches using two numerical case studies.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 138,906
|
1612.05441
|
A Message Passing Algorithm for the Minimum Cost Multicut Problem
|
We propose a dual decomposition and linear program relaxation of the NP -hard minimum cost multicut problem. Unlike other polyhedral relaxations of the multicut polytope, it is amenable to efficient optimization by message passing. Like other polyhedral elaxations, it can be tightened efficiently by cutting planes. We define an algorithm that alternates between message passing and efficient separation of cycle- and odd-wheel inequalities. This algorithm is more efficient than state-of-the-art algorithms based on linear programming, including algorithms written in the framework of leading commercial software, as we show in experiments with large instances of the problem from applications in computer vision, biomedical image analysis and data mining.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 65,682
|
2402.09623
|
Conformalized Adaptive Forecasting of Heterogeneous Trajectories
|
This paper presents a new conformal method for generating simultaneous forecasting bands guaranteed to cover the entire path of a new random trajectory with sufficiently high probability. Prompted by the need for dependable uncertainty estimates in motion planning applications where the behavior of diverse objects may be more or less unpredictable, we blend different techniques from online conformal prediction of single and multiple time series, as well as ideas for addressing heteroscedasticity in regression. This solution is both principled, providing precise finite-sample guarantees, and effective, often leading to more informative predictions than prior methods.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 429,603
|
2310.03043
|
A Deep Reinforcement Learning Approach for Interactive Search with
Sentence-level Feedback
|
Interactive search can provide a better experience by incorporating interaction feedback from the users. This can significantly improve search accuracy as it helps avoid irrelevant information and captures the users' search intents. Existing state-of-the-art (SOTA) systems use reinforcement learning (RL) models to incorporate the interactions but focus on item-level feedback, ignoring the fine-grained information found in sentence-level feedback. Yet such feedback requires extensive RL action space exploration and large amounts of annotated data. This work addresses these challenges by proposing a new deep Q-learning (DQ) approach, DQrank. DQrank adapts BERT-based models, the SOTA in natural language processing, to select crucial sentences based on users' engagement and rank the items to obtain more satisfactory responses. We also propose two mechanisms to better explore optimal actions. DQrank further utilizes the experience replay mechanism in DQ to store the feedback sentences to obtain a better initial ranking performance. We validate the effectiveness of DQrank on three search datasets. The results show that DQRank performs at least 12% better than the previous SOTA RL approaches. We also conduct detailed ablation studies. The ablation results demonstrate that each model component can efficiently extract and accumulate long-term engagement effects from the users' sentence-level feedback. This structure offers new technologies with promised performance to construct a search system with sentence-level interaction.
| true
| false
| false
| false
| true
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 397,113
|
2210.13005
|
Towards Out-of-Distribution Sequential Event Prediction: A Causal
Treatment
|
The goal of sequential event prediction is to estimate the next event based on a sequence of historical events, with applications to sequential recommendation, user behavior analysis and clinical treatment. In practice, the next-event prediction models are trained with sequential data collected at one time and need to generalize to newly arrived sequences in remote future, which requires models to handle temporal distribution shift from training to testing. In this paper, we first take a data-generating perspective to reveal a negative result that existing approaches with maximum likelihood estimation would fail for distribution shift due to the latent context confounder, i.e., the common cause for the historical events and the next event. Then we devise a new learning objective based on backdoor adjustment and further harness variational inference to make it tractable for sequence learning problems. On top of that, we propose a framework with hierarchical branching structures for learning context-specific representations. Comprehensive experiments on diverse tasks (e.g., sequential recommendation) demonstrate the effectiveness, applicability and scalability of our method with various off-the-shelf models as backbones.
| false
| false
| false
| false
| false
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 326,007
|
2412.02482
|
What should a neuron aim for? Designing local objective functions based
on information theory
|
In modern deep neural networks, the learning dynamics of the individual neurons is often obscure, as the networks are trained via global optimization. Conversely, biological systems build on self-organized, local learning, achieving robustness and efficiency with limited global information. We here show how self-organization between individual artificial neurons can be achieved by designing abstract bio-inspired local learning goals. These goals are parameterized using a recent extension of information theory, Partial Information Decomposition (PID), which decomposes the information that a set of information sources holds about an outcome into unique, redundant and synergistic contributions. Our framework enables neurons to locally shape the integration of information from various input classes, i.e. feedforward, feedback, and lateral, by selecting which of the three inputs should contribute uniquely, redundantly or synergistically to the output. This selection is expressed as a weighted sum of PID terms, which, for a given problem, can be directly derived from intuitive reasoning or via numerical optimization, offering a window into understanding task-relevant local information processing. Achieving neuron-level interpretability while enabling strong performance using local learning, our work advances a principled information-theoretic foundation for local learning strategies.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| 513,553
|
2301.06796
|
Database Matching Under Noisy Synchronization Errors
|
The re-identification or de-anonymization of users from anonymized data through matching with publicly available correlated user data has raised privacy concerns, leading to the complementary measure of obfuscation in addition to anonymization. Recent research provides a fundamental understanding of the conditions under which privacy attacks, in the form of database matching, are successful in the presence of obfuscation. Motivated by synchronization errors stemming from the sampling of time-indexed databases, this paper presents a unified framework considering both obfuscation and synchronization errors and investigates the matching of databases under noisy entry repetitions. By investigating different structures for the repetition pattern, replica detection and seeded deletion detection algorithms are devised and sufficient and necessary conditions for successful matching are derived. Finally, the impacts of some variations of the underlying assumptions, such as the adversarial deletion model, seedless database matching, and zero-rate regime, on the results are discussed. Overall, our results provide insights into the privacy-preserving publication of anonymized and obfuscated time-indexed data as well as the closely related problem of the capacity of synchronization channels.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 340,743
|
1904.03301
|
Hot Streaks on Social Media
|
Measuring the impact and success of human performance is common in various disciplines, including art, science, and sports. Quantifying impact also plays a key role on social media, where impact is usually defined as the reach of a user's content as captured by metrics such as the number of views, likes, retweets, or shares. In this paper, we study entire careers of Twitter users to understand properties of impact. We show that user impact tends to have certain characteristics: First, impact is clustered in time, such that the most impactful tweets of a user appear close to each other. Second, users commonly have 'hot streaks' of impact, i.e., extended periods of high-impact tweets. Third, impact tends to gradually build up before, and fall off after, a user's most impactful tweet. We attempt to explain these characteristics using various properties measured on social media, including the user's network, content, activity, and experience, and find that changes in impact are associated with significant changes in these properties. Our findings open interesting avenues for future research on virality and influence on social media.
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 126,673
|
2009.13370
|
Replica Analysis of the Linear Model with Markov or Hidden Markov Signal
Priors
|
This paper estimates free energy, average mutual information, and minimum mean square error (MMSE) of a linear model under two assumptions: (1) the source is generated by a Markov chain, (2) the source is generated via a hidden Markov model. Our estimates are based on the replica method in statistical physics. We show that under the posterior mean estimator, the linear model with Markov sources or hidden Markov sources is decoupled into single-input AWGN channels with state information available at both encoder and decoder where the state distribution follows the left Perron-Frobenius eigenvector with unit Manhattan norm of the stochastic matrix of Markov chains. Numerical results show that the free energies and MSEs obtained via the replica method are closely approximate to their counterparts achieved by the Metropolis-Hastings algorithm or some well-known approximate message passing algorithms in the research literature.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 197,728
|
2006.15090
|
Relative gradient optimization of the Jacobian term in unsupervised deep
learning
|
Learning expressive probabilistic models correctly describing the data is a ubiquitous problem in machine learning. A popular approach for solving it is mapping the observations into a representation space with a simple joint distribution, which can typically be written as a product of its marginals -- thus drawing a connection with the field of nonlinear independent component analysis. Deep density models have been widely used for this task, but their maximum likelihood based training requires estimating the log-determinant of the Jacobian and is computationally expensive, thus imposing a trade-off between computation and expressive power. In this work, we propose a new approach for exact training of such neural networks. Based on relative gradients, we exploit the matrix structure of neural network parameters to compute updates efficiently even in high-dimensional spaces; the computational cost of the training is quadratic in the input size, in contrast with the cubic scaling of naive approaches. This allows fast training with objective functions involving the log-determinant of the Jacobian, without imposing constraints on its structure, in stark contrast to autoregressive normalizing flows.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 184,411
|
1909.03750
|
Combining SMT and NMT Back-Translated Data for Efficient NMT
|
Neural Machine Translation (NMT) models achieve their best performance when large sets of parallel data are used for training. Consequently, techniques for augmenting the training set have become popular recently. One of these methods is back-translation (Sennrich et al., 2016), which consists on generating synthetic sentences by translating a set of monolingual, target-language sentences using a Machine Translation (MT) model. Generally, NMT models are used for back-translation. In this work, we analyze the performance of models when the training data is extended with synthetic data using different MT approaches. In particular we investigate back-translated data generated not only by NMT but also by Statistical Machine Translation (SMT) models and combinations of both. The results reveal that the models achieve the best performances when the training set is augmented with back-translated data created by merging different MT approaches.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 144,591
|
2407.15622
|
HyperSurf: Quadruped Robot Leg Capable of Surface Recognition with GRU and Real-to-Sim Transferring
|
This paper introduces a system of data collection acceleration and real-to-sim transferring for surface recognition on a quadruped robot. The system features a mechanical single-leg setup capable of stepping on various easily interchangeable surfaces. Additionally, it incorporates a GRU-based Surface Recognition System, inspired by the system detailed in the Dog-Surf paper. This setup facilitates the expansion of dataset collection for model training, enabling data acquisition from hard-to-reach surfaces in laboratory conditions. Furthermore, it opens avenues for transferring surface properties from reality to simulation, thereby allowing the training of optimal gaits for legged robots in simulation environments using a pre-prepared library of digital twins of surfaces. Moreover, enhancements have been made to the GRU-based Surface Recognition System, allowing for the integration of data from both the quadruped robot and the single-leg setup. The dataset and code have been made publicly available.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 475,261
|
2403.18874
|
Neural Attributed Community Search at Billion Scale
|
Community search has been extensively studied in the past decades. In recent years, there has been a growing interest in attributed community search, which aims to identify a community based on both the query nodes and query attributes. A set of techniques has been investigated. Though the recent methods based on advanced learning models such as graph neural networks (GNNs) can achieve state-of-the-art performance in terms of accuracy, we notice that 1) they suffer from severe efficiency issues; 2) they directly model community search as a node classification problem and thus cannot make good use of interdependence among different entities in the graph. Motivated by these, in this paper, we propose a new neurAL attrIbuted Community sEarch model for large-scale graphs, termed ALICE. ALICE first extracts a candidate subgraph to reduce the search scope and subsequently predicts the community by the Consistency-aware Net, termed ConNet. Specifically, in the extraction phase, we introduce the density sketch modularity that uses a unified form to combine the strengths of two existing powerful modularities, i.e., classical modularity and density modularity. Based on the new modularity metric, we first adaptively obtain the candidate subgraph, formed by the k-hop neighbors of the query nodes, with the maximum modularity. Then, we construct a node-attribute bipartite graph to take attributes into consideration. After that, ConNet adopts a cross-attention encoder to encode the interaction between the query and the graph. The training of the model is guided by the structure-attribute consistency and the local consistency to achieve better performance. Extensive experiments over 11 real-world datasets including one billion-scale graph demonstrate the superiority of ALICE in terms of accuracy, efficiency, and scalability.
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 442,117
|
2112.02057
|
Snake Robot Gait Decomposition and Gait Parameter Optimization
|
This paper proposes Gait Decomposition (G.D), a method for mathematically decomposing snake movements, and Gait Parameter Gradient (GPG), a method for optimizing the decomposed gait parameters. G.D expresses the gait of a snake robot mathematically and concisely, covering the whole pipeline from movement generation with a curve function to the motor control commands issued when the movement is executed. Through this method, the gait of the snake robot can be intuitively encoded as a matrix, and the parameters of the curve function required for gait generation can be adjusted flexibly. This addresses the difficulty of parameter tuning, which is one of the main reasons snake robots are hard to put to practical use. If G.D is applied to snake robots, various gaits can therefore be generated with only a few parameters, allowing snake robots to be used in many fields. We also implement the GPG algorithm to optimize the gait curve function as well as to define the gait of the snake robot through G.D.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 269,714
|
2305.17926
|
Large Language Models are not Fair Evaluators
|
In this paper, we uncover a systematic bias in the evaluation paradigm of adopting large language models~(LLMs), e.g., GPT-4, as a referee to score and compare the quality of responses generated by candidate models. We find that the quality ranking of candidate responses can be easily hacked by simply altering their order of appearance in the context. This manipulation allows us to skew the evaluation result, making one model appear considerably superior to the other, e.g., Vicuna-13B could beat ChatGPT on 66 out of 80 tested queries with ChatGPT as an evaluator. To address this issue, we propose a calibration framework with three simple yet effective strategies: 1) Multiple Evidence Calibration, which requires the evaluator model to generate multiple evaluation evidence before assigning ratings; 2) Balanced Position Calibration, which aggregates results across various orders to determine the final score; 3) Human-in-the-Loop Calibration, which introduces a balanced position diversity entropy to measure the difficulty of each example and seeks human assistance when needed. We also manually annotate the "win/tie/lose" outcomes of responses from ChatGPT and Vicuna-13B on the Vicuna Benchmark's question prompts, and extensive experiments demonstrate that our approach successfully mitigates evaluation bias, resulting in closer alignment with human judgments. We release our code and human annotation at \url{https://github.com/i-Eval/FairEval} to facilitate future research.
| false
| false
| false
| false
| true
| true
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 368,798
|
2401.17191
|
Semantic Belief Behavior Graph: Enabling Autonomous Robot Inspection in Unknown Environments
|
This paper addresses the problem of autonomous robotic inspection in complex and unknown environments. This capability is crucial for efficient and precise inspections in various real-world scenarios, even when faced with perceptual uncertainty and lack of prior knowledge of the environment. Existing methods for real-world autonomous inspections typically rely on predefined targets and waypoints and often fail to adapt to dynamic or unknown settings. In this work, we introduce the Semantic Belief Behavior Graph (SB2G) framework as a novel approach to semantic-aware autonomous robot inspection. SB2G generates a control policy for the robot, featuring behavior nodes that encapsulate various semantic-based policies designed for inspecting different classes of objects. We design an active semantic search behavior to guide the robot in locating objects for inspection while reducing semantic information uncertainty. The edges in the SB2G encode transitions between these behaviors. We validate our approach through simulation and real-world urban inspections using a legged robotic platform. Our results show that SB2G enables a more efficient inspection policy, exhibiting performance comparable to human-operated inspections.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 425,136
|
2110.00615
|
Predicting erectile dysfunction after treatment for localized prostate cancer
|
While the 10-year survival rate for localized prostate cancer patients is very good (>98%), side effects of treatment may limit quality of life significantly. Erectile dysfunction (ED) is a common burden associated with increasing age as well as prostate cancer treatment. Although many studies have investigated the factors affecting ED after prostate cancer treatment, only limited studies have investigated whether ED can be predicted before the start of treatment. The advent of machine learning (ML) based prediction tools in oncology offers a promising approach to improve accuracy of prediction and quality of care. Predicting ED may aid shared decision making by making the advantages and disadvantages of certain treatments clear, so that a tailored treatment for an individual patient can be chosen. This study aimed to predict ED at 1-year and 2-year post-diagnosis based on patient demographics, clinical data and patient-reported outcomes (PROMs) measured at diagnosis.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 258,454
|
2202.00210
|
INPUT Team Description Paper in 2022
|
INPUT is a team participating in the RoboCup Soccer Small League (SSL). It aims to show the world the technological capabilities of the Nagaoka region of Niigata Prefecture, which is where the team members are from. For this purpose, we are working on one of the projects from the Nagaoka Activation Zone of Energy (NAZE). Herein, we introduce two robots, v2019 and v2022, as well as AI systems that will be used in RoboCup 2022. In addition, we describe our efforts to develop robots in collaboration with companies in the Nagaoka area.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 278,068
|
1903.09366
|
Macro Action Reinforcement Learning with Sequence Disentanglement using Variational Autoencoder
|
One problem in the application of reinforcement learning to real-world problems is the curse of dimensionality on the action space. Macro actions, sequences of primitive actions, have been studied to diminish the dimensionality of the action space with regard to the time axis. However, previous studies relied on humans defining macro actions or assumed macro actions to be repetitions of the same primitive actions. We present Factorized Macro Action Reinforcement Learning (FaMARL) which autonomously learns a disentangled factor representation of a sequence of actions to generate macro actions that can be directly applied to general reinforcement learning algorithms. FaMARL exhibits higher scores than other reinforcement learning algorithms on environments that require an extensive amount of search.
| false
| false
| false
| false
| true
| false
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 125,047
|
2304.05055
|
A Comprehensive Survey on Deep Graph Representation Learning
|
Graph representation learning aims to effectively encode high-dimensional sparse graph-structured data into low-dimensional dense vectors, which is a fundamental task that has been widely studied in a range of fields, including machine learning and data mining. Classic graph embedding methods follow the basic idea that the embedding vectors of interconnected nodes in the graph can still maintain a relatively close distance, thereby preserving the structural information between the nodes in the graph. However, this is sub-optimal because: (i) traditional methods have limited model capacity, which limits the learning performance; (ii) existing techniques typically rely on unsupervised learning strategies and fail to couple with the latest learning paradigms; (iii) representation learning and downstream tasks are dependent on each other and should be jointly enhanced. With the remarkable success of deep learning, deep graph representation learning has shown great potential and advantages over shallow (traditional) methods, and a large number of deep graph representation learning techniques, especially graph neural networks, have been proposed in the past decade. In this survey, we conduct a comprehensive survey on current deep graph representation learning algorithms by proposing a new taxonomy of existing state-of-the-art literature. Specifically, we systematically summarize the essential components of graph representation learning and categorize existing approaches by the ways of graph neural network architectures and the most recent advanced learning paradigms. Moreover, this survey also provides the practical and promising applications of deep graph representation learning. Last but not least, we state new perspectives and suggest challenging directions which deserve further investigation in the future.
| false
| false
| false
| false
| true
| true
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 357,466
|
2202.00567
|
Cardiac Disease Diagnosis on Imbalanced Electrocardiography Data Through Optimal Transport Augmentation
|
In this paper, we focus on a new method of data augmentation to solve the data imbalance problem within imbalanced ECG datasets to improve the robustness and accuracy of heart disease detection. By using Optimal Transport, we augment the ECG disease data from normal ECG beats to balance the data among different categories. We build a Multi-Feature Transformer (MF-Transformer) as our classification model, where different features are extracted from both time and frequency domains to diagnose various heart conditions. Learning from 12-lead ECG signals, our model is able to distinguish five categories of cardiac conditions. Our results demonstrate 1) the classification models' ability to make competitive predictions on five ECG categories; 2) improvements in accuracy and robustness reflecting the effectiveness of our data augmentation method.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 278,188
|
2210.00764
|
Relational program synthesis with numerical reasoning
|
Program synthesis approaches struggle to learn programs with numerical values. An especially difficult problem is learning continuous values over multiple examples, such as intervals. To overcome this limitation, we introduce an inductive logic programming approach which combines relational learning with numerical reasoning. Our approach, which we call NUMSYNTH, uses satisfiability modulo theories solvers to efficiently learn programs with numerical values. Our approach can identify numerical values in linear arithmetic fragments, such as real difference logic, and from infinite domains, such as real numbers or integers. Our experiments on four diverse domains, including game playing and program synthesis, show that our approach can (i) learn programs with numerical values from linear arithmetical reasoning, and (ii) outperform existing approaches in terms of predictive accuracies and learning times.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 320,999
|
2403.05829
|
Measuring Robustness in Cyber-Physical Systems under Sensor Attacks
|
This paper contributes a formal framework for quantitative analysis of bounded sensor attacks on cyber-physical systems, using the formalism of differential dynamic logic. Given a precondition and postcondition of a system, we formalize two quantitative safety notions, quantitative forward and backward safety, which respectively express (1) how strong the strongest postcondition of the system is with respect to the specified postcondition, and (2) how strong the specified precondition is with respect to the weakest precondition of the system needed to ensure the specified postcondition holds. We introduce two notions, forward and backward robustness, to characterize the robustness of a system against sensor attacks as the loss of safety. To reason about robustness, we introduce two simulation distances, forward and backward simulation distances, which are defined based on the behavioral distances between the original system and the system with compromised sensors. Forward and backward distances, respectively, characterize upper bounds of the degree of forward and backward safety loss caused by the sensor attacks. We verify the two simulation distances by expressing them as modalities, i.e., formulas of differential dynamic logic, and develop an ad-hoc proof system to reason with such formulas. We showcase our formal notions and reasoning techniques on two non-trivial case studies: an autonomous vehicle that needs to avoid collision and a water tank system.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| 436,180
|
cs/0206026
|
Interleaved semantic interpretation in environment-based parsing
|
This paper extends a polynomial-time parsing algorithm that resolves structural ambiguity in input to a speech-based user interface by calculating and comparing the denotations of rival constituents, given some model of the interfaced application environment (Schuler 2001). The algorithm is extended to incorporate a full set of logical operators, including quantifiers and conjunctions, into this calculation without increasing the complexity of the overall algorithm beyond polynomial time, both in terms of the length of the input and the number of entities in the environment model.
| true
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 537,612
|
2102.04022
|
Towards Hierarchical Task Decomposition using Deep Reinforcement Learning for Pick and Place Subtasks
|
Deep Reinforcement Learning (DRL) is emerging as a promising approach to generate adaptive behaviors for robotic platforms. However, a major drawback of using DRL is the data-hungry training regime that requires millions of trial and error attempts, which is impractical when running experiments on robotic systems. Learning from Demonstrations (LfD) has been introduced to solve this issue by cloning the behavior of expert demonstrations. However, LfD requires a large number of demonstrations that are difficult to acquire since dedicated complex setups are required. To overcome these limitations, we propose a multi-subtask reinforcement learning methodology where complex pick and place tasks can be decomposed into low-level subtasks. These subtasks are parametrized as expert networks and learned via DRL methods. Trained subtasks are then combined by a high-level choreographer to accomplish the intended pick and place task considering different initial configurations. As a testbed, we use a pick and place robotic simulator to demonstrate our methodology and show that our method outperforms a benchmark methodology based on LfD in terms of sample-efficiency. We transfer the learned policy to the real robotic system and demonstrate robust grasping using various geometric-shaped objects.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 218,964
|
2111.03731
|
Frugal Machine Learning
|
Machine learning, already at the core of increasingly many systems and applications, is set to become even more ubiquitous with the rapid rise of wearable devices and the Internet of Things. In most machine learning applications, the main focus is on the quality of the results achieved (e.g., prediction accuracy), and hence vast amounts of data are being collected, requiring significant computational resources to build models. In many scenarios, however, it is infeasible or impractical to set up large centralized data repositories. In personal health, for instance, privacy issues may inhibit the sharing of detailed personal data. In such cases, machine learning should ideally be performed on wearable devices themselves, which raises major computational limitations such as the battery capacity of smartwatches. This paper thus investigates frugal learning, aimed at building the most accurate possible models using the least amount of resources. A wide range of learning algorithms is examined through a frugal lens, analyzing their accuracy/runtime performance on a wide range of data sets. The most promising algorithms are thereafter assessed in a real-world scenario by implementing them in a smartwatch and letting them learn activity recognition models on the watch itself.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 265,259
|
1906.04660
|
Two-step Constructive Approaches for Dungeon Generation
|
This paper presents a two-step generative approach for creating dungeons in the rogue-like puzzle game MiniDungeons 2. Generation is split into two steps, initially producing the architectural layout of the level as its walls and floor tiles, and then furnishing it with game objects representing the player's start and goal position, challenges and rewards. Three layout creators and three furnishers are introduced in this paper, which can be combined in different ways in the two-step generative process for producing diverse dungeon levels. Layout creators generate the floors and walls of a level, while furnishers populate it with monsters, traps, and treasures. We test the generated levels on several expressivity measures, and in simulations with procedural persona agents.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 134,790
|
2501.07766
|
Large Language Models for Knowledge Graph Embedding Techniques, Methods, and Challenges: A Survey
|
Large Language Models (LLMs) have attracted a lot of attention in various fields due to their superior performance; they are trained with hundreds of millions or more parameters on large amounts of text data to understand and generate natural language. As the superior performance of LLMs becomes apparent, they are increasingly being applied to knowledge graph embedding (KGE) related tasks to improve the processing results. As deep learning models in the field of Natural Language Processing (NLP), LLMs learn from large amounts of textual data to predict the next word or generate content related to a given text. LLMs have recently been invoked to varying degrees in different types of KGE related scenarios, such as multi-modal KGE and open KGE, according to their task characteristics. In this paper, we investigate a wide range of approaches for performing LLMs-related tasks in different types of KGE scenarios. To better compare the various approaches, we summarize each KGE scenario in a classification. In addition to the categorization methods, we provide a tabular overview of the methods and their source code links for a more direct comparison. In the article we also discuss the applications in which the methods are mainly used and suggest several forward-looking directions for the development of this new research area.
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 524,504
|
2106.02394
|
On the Strategyproofness of the Geometric Median
|
The geometric median, an instrumental component of the secure machine learning toolbox, is known to be effective when robustly aggregating models (or gradients), gathered from potentially malicious (or strategic) users. What is less known is the extent to which the geometric median incentivizes dishonest behaviors. This paper addresses this fundamental question by quantifying its strategyproofness. While we observe that the geometric median is not even approximately strategyproof, we prove that it is asymptotically $\alpha$-strategyproof: when the number of users is large enough, a user that misbehaves can gain at most a multiplicative factor $\alpha$, which we compute as a function of the distribution followed by the users. We then generalize our results to the case where users actually care more about specific dimensions, determining how this impacts $\alpha$. We also show how the skewed geometric medians can be used to improve strategyproofness.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| true
| 238,858
|
2208.04360
|
SDWPF: A Dataset for Spatial Dynamic Wind Power Forecasting Challenge at KDD Cup 2022
|
The variability of wind power supply can present substantial challenges to incorporating wind power into a grid system. Thus, Wind Power Forecasting (WPF) has been widely recognized as one of the most critical issues in wind power integration and operation. There has been an explosion of studies on wind power forecasting problems in the past decades. Nevertheless, how to well handle the WPF problem is still challenging, since high prediction accuracy is always demanded to ensure grid stability and security of supply. We present a unique Spatial Dynamic Wind Power Forecasting dataset: SDWPF, which includes the spatial distribution of wind turbines, as well as the dynamic context factors. Most of the existing datasets, by contrast, cover only a small number of wind turbines and lack the locations and context information of the turbines at a fine-grained time scale. SDWPF provides the wind power data of 134 wind turbines from a wind farm over half a year with their relative positions and internal statuses. We use this dataset to launch the Baidu KDD Cup 2022 to examine the limit of current WPF solutions. The dataset is released at https://aistudio.baidu.com/aistudio/competition/detail/152/0/datasets.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 312,084
|
2404.06287
|
Counterfactual Reasoning for Multi-Label Image Classification via Patching-Based Training
|
The key to multi-label image classification (MLC) is to improve model performance by leveraging label correlations. Unfortunately, it has been shown that overemphasizing co-occurrence relationships can cause the overfitting issue of the model, ultimately leading to performance degradation. In this paper, we provide a causal inference framework to show that the correlative features caused by the target object and its co-occurring objects can be regarded as a mediator, which has both positive and negative impacts on model predictions. On the positive side, the mediator enhances the recognition performance of the model by capturing co-occurrence relationships; on the negative side, it has the harmful causal effect that causes the model to make an incorrect prediction for the target object, even when only co-occurring objects are present in an image. To address this problem, we propose a counterfactual reasoning method to measure the total direct effect, achieved by enhancing the direct effect caused only by the target object. Due to the unknown location of the target object, we propose patching-based training and inference to accomplish this goal, which divides an image into multiple patches and identifies the pivot patch that contains the target object. Experimental results on multiple benchmark datasets with diverse configurations validate that the proposed method can achieve state-of-the-art performance.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 445,407
|
1406.5979
|
Reinforcement and Imitation Learning via Interactive No-Regret Learning
|
Recent work has demonstrated that problems-- particularly imitation learning and structured prediction-- where a learner's predictions influence the input-distribution it is tested on can be naturally addressed by an interactive approach and analyzed using no-regret online learning. These approaches to imitation learning, however, neither require nor benefit from information about the cost of actions. We extend existing results in two directions: first, we develop an interactive imitation learning approach that leverages cost information; second, we extend the technique to address reinforcement learning. The results provide theoretical support to the commonly observed successes of online approximate policy iteration. Our approach suggests a broad new family of algorithms and provides a unifying view of existing techniques for imitation and reinforcement learning.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 34,080
|
1505.06699
|
Simulation Algorithms with Exponential Integration for Time-Domain Analysis of Large-Scale Power Delivery Networks
|
We design an algorithmic framework using matrix exponentials for time-domain simulation of power delivery networks (PDNs). Our framework can reuse factorized matrices to simulate the large-scale linear PDN system with variable step sizes. In contrast, conventional PDN simulation solvers have to use a fixed step-size approach in order to reuse the factorized matrices generated by the expensive matrix decomposition. Based on the proposed exponential integration framework, we design a PDN solver, R-MATEX, with flexible time-stepping capability. The key operation of matrix exponential and vector product (MEVP) is computed by the rational Krylov subspace method. To further improve the runtime, we also propose a distributed computing framework, DR-MATEX. DR-MATEX reduces Krylov subspace generations caused by frequent breakpoints from a large number of current sources during simulation. By virtue of the superposition property of linear systems and the scaling invariance property of Krylov subspaces, DR-MATEX can divide the whole simulation task into subtasks based on the alignments of breakpoints among those sources. The subtasks are processed in parallel at different computing nodes without any communication during the computation of transient simulation. The final result is obtained by summing up the partial results among all the computing nodes after they finish the assigned subtasks. Therefore, our computation model belongs to the category known as the Embarrassingly Parallel model. Experimental results show R-MATEX and DR-MATEX can achieve up to around 14.4X and 98.0X runtime speedups over a traditional trapezoidal-integration-based solver with a fixed time-step approach.
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 43,463
|
1407.0511
|
Braided Convolutional Codes -- A Class of Spatially Coupled Turbo-Like Codes
|
In this paper, we investigate the impact of spatial coupling on the thresholds of turbo-like codes. Parallel concatenated and serially concatenated convolutional codes as well as braided convolutional codes (BCCs) are compared by means of an exact density evolution (DE) analysis for the binary erasure channel (BEC). We propose two extensions of the original BCC ensemble to improve its threshold and demonstrate that their BP thresholds approach the maximum-a-posteriori (MAP) threshold of the uncoupled ensemble. A comparison of the different ensembles shows that parallel concatenated ensembles can be outperformed by both serially concatenated and BCC ensembles, although they have the best BP thresholds in the uncoupled case.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 34,340
|
cs/0002015
|
Genetic Algorithms for Extension Search in Default Logic
|
A default theory can be characterized by its sets of plausible conclusions, called its extensions. But, due to the theoretical complexity of Default Logic (Sigma_2p-complete), the problem of finding such an extension is very difficult if one wants to deal with non-trivial knowledge bases. Based on the principle of natural selection, Genetic Algorithms have been quite successfully applied to combinatorial problems and seem useful for problems with huge search spaces and when no tractable algorithm is available. The purpose of this paper is to show that techniques derived from Genetic Algorithms can be used in order to build an efficient default reasoning system. After providing a formal description of the components required for an extension search based on Genetic Algorithms principles, we exhibit some experimental results.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 537,014
|
1703.03912
|
Mitigating the Curse of Correlation in Security Games by Entropy Maximization
|
In Stackelberg security games, a defender seeks to randomly allocate limited security resources to protect critical targets from an attack. In this paper, we study a fundamental, yet underexplored, phenomenon in security games, which we term the \emph{Curse of Correlation} (CoC). Specifically, we observe that there are inevitable correlations among the protection status of different targets. Such correlation is a crucial concern, especially in \emph{spatio-temporal} domains like conservation area patrolling, where attackers can surveil patrollers at certain areas and then infer their patrolling routes using such correlations. To mitigate this issue, we propose to design entropy-maximizing defending strategies for spatio-temporal security games, which frequently suffer from CoC. We prove that the problem is \#P-hard in general. However, it admits efficient algorithms in well-motivated special settings. Our experiments show significant advantages of max-entropy algorithms over previous algorithms. A scalable implementation of our algorithm is currently under pre-deployment testing for integration into FAMS software to improve the scheduling of US federal air marshals.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| 69,795
|
2310.10851
|
Tracking China's cross-strait bot networks against Taiwan
|
The cross-strait relationship between China and Taiwan is marked by increasing hostility around potential reunification. We analyze an unattributed bot network and how repeater bots engaged in an influence campaign against Taiwan following US House Speaker Nancy Pelosi's visit to Taiwan in 2022. We examine the message amplification tactics employed by four key bot sub-communities, the widespread dissemination of information across multiple platforms through URLs, and the potential targeted audiences of this bot network. We find that URL link sharing reveals circumvention of YouTube suspensions, in addition to the potential effectiveness of algorithmic bot connectivity to appear less bot-like, and detail a sequence of coordination within a sub-community for message amplification. We additionally find the narratives and targeted audience potentially shifting after account activity discrepancies, demonstrating how dynamically these bot networks can operate.
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 400,408
|
2407.14564
|
APS-USCT: Ultrasound Computed Tomography on Sparse Data via AI-Physic Synergy
|
Ultrasound computed tomography (USCT) is a promising technique that achieves superior medical imaging reconstruction resolution by fully leveraging waveform information, outperforming conventional ultrasound methods. Despite its advantages, high-quality USCT reconstruction relies on extensive data acquisition by a large number of transducers, leading to increased costs, computational demands, extended patient scanning times, and manufacturing complexities. To mitigate these issues, we propose a new USCT method called APS-USCT, which facilitates imaging with sparse data, substantially reducing dependence on high-cost dense data acquisition. Our APS-USCT method consists of two primary components: APS-wave and APS-FWI. The APS-wave component, an encoder-decoder system, preprocesses the waveform data, converting sparse data into dense waveforms to augment sample density prior to reconstruction. The APS-FWI component, utilizing the InversionNet, directly reconstructs the speed of sound (SOS) from the ultrasound waveform data. We further improve the model's performance by incorporating Squeeze-and-Excitation (SE) Blocks and source encoding techniques. Testing our method on a breast cancer dataset yielded promising results. It demonstrated outstanding performance with an average Structural Similarity Index (SSIM) of 0.8431. Notably, over 82% of samples achieved an SSIM above 0.8, with nearly 61% exceeding 0.85, highlighting the significant potential of our approach in improving USCT image reconstruction by efficiently utilizing sparse data.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 474,829
|
2112.07746
|
CEM-GD: Cross-Entropy Method with Gradient Descent Planner for Model-Based Reinforcement Learning
|
Current state-of-the-art model-based reinforcement learning algorithms use trajectory sampling methods, such as the Cross-Entropy Method (CEM), for planning in continuous control settings. These zeroth-order optimizers require sampling a large number of trajectory rollouts to select an optimal action, which scales poorly for large prediction horizons or high dimensional action spaces. First-order methods that use the gradients of the rewards with respect to the actions as an update can mitigate this issue, but suffer from local optima due to the non-convex optimization landscape. To overcome these issues and achieve the best of both worlds, we propose a novel planner, Cross-Entropy Method with Gradient Descent (CEM-GD), that combines first-order methods with CEM. At the beginning of execution, CEM-GD uses CEM to sample a significant amount of trajectory rollouts to explore the optimization landscape and avoid poor local minima. It then uses the top trajectories as initialization for gradient descent and applies gradient updates to each of these trajectories to find the optimal action sequence. At each subsequent time step, however, CEM-GD samples much fewer trajectories from CEM before applying gradient updates. We show that as the dimensionality of the planning problem increases, CEM-GD maintains desirable performance with a constant small number of samples by using the gradient information, while avoiding local optima using initially well-sampled trajectories. Furthermore, CEM-GD achieves better performance than CEM on a variety of continuous control benchmarks in MuJoCo with 100x fewer samples per time step, resulting in around 25% less computation time and 10% less memory usage. The implementation of CEM-GD is available at $\href{https://github.com/KevinHuang8/CEM-GD}{\text{https://github.com/KevinHuang8/CEM-GD}}$.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 271,570
|
2104.06328
|
List Message Passing Decoding of Non-binary Low-Density Parity-Check Codes
|
A decoding algorithm for $q$-ary low-density parity-check codes over the $q$-ary symmetric channel is introduced. The exchanged messages are lists of symbols from $\Fq$. A density evolution analysis for maximum list sizes $1$ and $2$ is developed. Thresholds for selected regular low-density parity-check code ensembles are computed showing gains with respect to a similar algorithm in the literature. Finite-length simulation results confirm the asymptotic analysis.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 230,036
|
1708.03431
|
Iterative Deep Convolutional Encoder-Decoder Network for Medical Image Segmentation
|
In this paper, we propose a novel medical image segmentation method based on an iterative deep learning framework. We combine an iterative learning approach and an encoder-decoder network to improve segmentation results, which makes it possible to precisely localize regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework is able to yield excellent medical image segmentation performance for various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 78,775
|
2404.13821
|
Robotic Blended Sonification: Consequential Robot Sound as Creative Material for Human-Robot Interaction
|
Current research in robotic sounds generally focuses on either masking the consequential sound produced by the robot or on sonifying data about the robot to create a synthetic robot sound. We propose to capture, modify, and utilise rather than mask the sounds that robots are already producing. In short, this approach relies on capturing a robot's sounds, processing them according to contextual information (e.g., collaborators' proximity or particular work sequences), and playing back the modified sound. Previous research indicates the usefulness of non-semantic, and even mechanical, sounds as a communication tool for conveying robotic affect and function. Adding to this, this paper presents a novel approach which makes two key contributions: (1) a technique for real-time capture and processing of consequential robot sounds, and (2) an approach to explore these sounds through direct human-robot interaction. Drawing on methodologies from design, human-robot interaction, and creative practice, the resulting 'Robotic Blended Sonification' is a concept which transforms the consequential robot sounds into a creative material that can be explored artistically and within application-based studies.
| true
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 448,456
|
1710.10932
|
Structure-preserving discrete-time optimal maneuvers of a wheeled inverted pendulum
|
The Wheeled Inverted Pendulum (WIP) is a nonholonomic, underactuated mechanical system, and has been popularized commercially as the {\it Segway}. Designing optimal control laws for point-to-point state-transfer for this autonomous mechanical system, while respecting momentum and torque constraints as well as the underlying manifold, continues to pose challenging problems. In this article we present a successful effort in this direction: We employ geometric mechanics to obtain a discrete-time model of the system, followed by the synthesis of an energy-optimal control based on a discrete-time maximum principle applicable to mechanical systems whose configuration manifold is a Lie group. Moreover, we incorporate state and momentum constraints into the discrete-time control directly at the synthesis stage. The control is implemented on a WIP with parameters obtained from an existing prototype; the results are highly encouraging, as demonstrated by numerical experiments.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 83,495
|
2310.19990
|
Unveiling the Limits of Learned Local Search Heuristics: Are You the Mightiest of the Meek?
|
In recent years, combining neural networks with local search heuristics has become popular in the field of combinatorial optimization. Despite its considerable computational demands, this approach has exhibited promising outcomes with minimal manual engineering. However, we have identified three critical limitations in the empirical evaluation of these integration attempts. Firstly, instances with moderate complexity and weak baselines pose a challenge in accurately evaluating the effectiveness of learning-based approaches. Secondly, the absence of an ablation study makes it difficult to quantify and attribute improvements accurately to the deep learning architecture. Lastly, the generalization of learned heuristics across diverse distributions remains underexplored. In this study, we conduct a comprehensive investigation into these identified limitations. Surprisingly, we demonstrate that a simple learned heuristic based on Tabu Search surpasses state-of-the-art (SOTA) learned heuristics in terms of performance and generalizability. Our findings challenge prevailing assumptions and open up exciting avenues for future research and innovation in combinatorial optimization.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 404,209
|
2306.17162
|
Can Machines Garden? Systematically Comparing the AlphaGarden vs. Professional Horticulturalists
|
The AlphaGarden is an automated testbed for indoor polyculture farming which combines a first-order plant simulator, a gantry robot, a seed planting algorithm, plant phenotyping and tracking algorithms, irrigation sensors and algorithms, and custom pruning tools and algorithms. In this paper, we systematically compare the performance of the AlphaGarden to professional horticulturalists on the staff of the UC Berkeley Oxford Tract Greenhouse. The humans and the machine tend side-by-side polyculture gardens with the same seed arrangement. We compare performance in terms of canopy coverage, plant diversity, and water consumption. Results from two 60-day cycles suggest that the automated AlphaGarden performs comparably to professional horticulturalists in terms of coverage and diversity, and reduces water consumption by as much as 44%. Code, videos, and datasets are available at https://sites.google.com/berkeley.edu/systematiccomparison.
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 376,611
|
1109.5490
|
A General Framework for the Optimization of Energy Harvesting Communication Systems with Battery Imperfections
|
Energy harvesting has emerged as a powerful technology for complementing current battery-powered communication systems in order to extend their lifetime. In this paper a general framework is introduced for the optimization of communication systems in which the transmitter is able to harvest energy from its environment. Assuming that the energy arrival process is known non-causally at the transmitter, the structure of the optimal transmission scheme, which maximizes the amount of transmitted data by a given deadline, is identified. Our framework includes models with continuous energy arrival as well as battery constraints. A battery that suffers from energy leakage is studied further, and the optimal transmission scheme is characterized for a constant leakage rate.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 12,328
|
1309.7910
|
A Simple Proof of Maxwell Saturation for Coupled Scalar Recursions
|
Low-density parity-check (LDPC) convolutional codes (or spatially-coupled codes) were recently shown to approach capacity on the binary erasure channel (BEC) and binary-input memoryless symmetric channels. The mechanism behind this spectacular performance is now called threshold saturation via spatial coupling. This new phenomenon is characterized by the belief-propagation threshold of the spatially-coupled ensemble increasing to an intrinsic noise threshold defined by the uncoupled system. In this paper, we present a simple proof of threshold saturation that applies to a wide class of coupled scalar recursions. Our approach is based on constructing potential functions for both the coupled and uncoupled recursions. Our results actually show that the fixed point of the coupled recursion is essentially determined by the minimum of the uncoupled potential function and we refer to this phenomenon as Maxwell saturation. A variety of examples are considered including the density-evolution equations for: irregular LDPC codes on the BEC, irregular low-density generator matrix codes on the BEC, a class of generalized LDPC codes with BCH component codes, the joint iterative decoding of LDPC codes on intersymbol-interference channels with erasure noise, and the compressed sensing of random vectors with i.i.d. components.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 27,431
|
2403.10642
|
Using Uncertainty Quantification to Characterize and Improve Out-of-Domain Learning for PDEs
|
Existing work in scientific machine learning (SciML) has shown that data-driven learning of solution operators can provide a fast approximate alternative to classical numerical partial differential equation (PDE) solvers. Of these, Neural Operators (NOs) have emerged as particularly promising. We observe that several uncertainty quantification (UQ) methods for NOs fail for test inputs that are even moderately out-of-domain (OOD), even when the model approximates the solution well for in-domain tasks. To address this limitation, we show that ensembling several NOs can identify high-error regions and provide good uncertainty estimates that are well-correlated with prediction errors. Based on this, we propose a cost-effective alternative, DiverseNO, that mimics the properties of the ensemble by encouraging diverse predictions from its multiple heads in the last feed-forward layer. We then introduce Operator-ProbConserv, a method that uses these well-calibrated UQ estimates within the ProbConserv framework to update the model. Our empirical results show that Operator-ProbConserv enhances OOD model performance for a variety of challenging PDE problems and satisfies physical constraints such as conservation laws.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 438,284
|
2412.17532
|
Exploring Dynamic Novel View Synthesis Technologies for Cinematography
|
Novel view synthesis (NVS) has shown significant promise for applications in cinematographic production, particularly through the exploitation of Neural Radiance Fields (NeRF) and Gaussian Splatting (GS). These methods model real 3D scenes, enabling the creation of new shots that are challenging to capture in the real world due to set topology or expensive equipment requirements. This innovation also offers cinematographic advantages such as smooth camera movements, virtual re-shoots, slow-motion effects, etc. This paper explores dynamic NVS with the aim of facilitating the model selection process. We showcase its potential through a short montage filmed using various NVS models.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 520,012
|
2307.03910
|
A Survey of Spiking Neural Network Accelerator on FPGA
|
Due to their ability to implement customized topologies, FPGAs are increasingly used to deploy SNNs in both embedded and high-performance applications. In this paper, we survey state-of-the-art SNN implementations and their applications on FPGA. We collect the recent widely-used spiking neuron models, network structures, and signal encoding formats, followed by an enumeration of related hardware design schemes for FPGA-based SNN implementations. Compared with previous surveys, this manuscript enumerates the application instances that applied the above-mentioned technical schemes in recent research. Based on that, we discuss the actual acceleration potential of implementing SNNs on FPGA. Following this discussion, we outline the upcoming trends and give a guideline for further advancement in related subjects.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| true
| 378,203
|
2007.06968
|
Deep composition of tensor-trains using squared inverse Rosenblatt transports
|
Characterising intractable high-dimensional random variables is one of the fundamental challenges in stochastic computation. The recent surge of transport maps offers a mathematical foundation and new insights for tackling this challenge by coupling intractable random variables with tractable reference random variables. This paper generalises the functional tensor-train approximation of the inverse Rosenblatt transport recently developed by Dolgov et al. (Stat Comput 30:603--625, 2020) to a wide class of high-dimensional non-negative functions, such as unnormalised probability density functions. First, we extend the inverse Rosenblatt transform to enable the transport to general reference measures other than the uniform measure. We develop an efficient procedure to compute this transport from a squared tensor-train decomposition which preserves the monotonicity. More crucially, we integrate the proposed order-preserving functional tensor-train transport into a nested variable transformation framework inspired by the layered structure of deep neural networks. The resulting deep inverse Rosenblatt transport significantly expands the capability of tensor approximations and transport maps to random variables with complicated nonlinear interactions and concentrated density functions. We demonstrate the efficiency of the proposed approach on a range of applications in statistical learning and uncertainty quantification, including parameter estimation for dynamical systems and inverse problems constrained by partial differential equations.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 187,191
|
1901.02662
|
Deep Semantic Multimodal Hashing Network for Scalable Image-Text and Video-Text Retrievals
|
Hashing has been widely applied to multimodal retrieval on large-scale multimedia data due to its efficiency in computation and storage. In this article, we propose a novel deep semantic multimodal hashing network (DSMHN) for scalable image-text and video-text retrieval. The proposed deep hashing framework leverages 2-D convolutional neural networks (CNN) as the backbone network to capture the spatial information for image-text retrieval, while a 3-D CNN serves as the backbone network to capture the spatial and temporal information for video-text retrieval. In the DSMHN, two sets of modality-specific hash functions are jointly learned by explicitly preserving both intermodality similarities and intramodality semantic labels. Specifically, with the assumption that the learned hash codes should be optimal for the classification task, two stream networks are jointly trained to learn the hash functions by embedding the semantic labels on the resultant hash codes. Moreover, a unified deep multimodal hashing framework is proposed to learn compact and high-quality hash codes by exploiting the feature representation learning, intermodality similarity-preserving learning, semantic label-preserving learning, and hash function learning with different types of loss functions simultaneously. The proposed DSMHN method is a generic and scalable deep hashing framework for both image-text and video-text retrievals, which can be flexibly integrated with different types of loss functions. We conduct extensive experiments for both single-modal and cross-modal retrieval tasks on four widely used multimodal-retrieval data sets. Experimental results on both image-text and video-text retrieval tasks demonstrate that the DSMHN significantly outperforms the state-of-the-art methods.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 118,257
|
2411.12876
|
Puppet-CNN: Input-Adaptive Convolutional Neural Networks with Model Compression using Ordinary Differential Equation
|
Convolutional Neural Networks (CNNs) have been applied to more and more scenarios due to their excellent performance in many machine learning tasks, especially with deep and complex structures. However, as the network goes deeper, more parameters need to be stored and optimized. Besides, almost all common CNN models adopt a "train-and-use" strategy where the structure is pre-defined and the kernel parameters are fixed after training, so the same structure and set of parameters are used for all data without considering the content complexity. In this paper, we propose a new CNN framework, named $\textit{Puppet-CNN}$, which contains two modules: a $\textit{puppet module}$ and a $\textit{puppeteer module}$. The puppet module is a CNN model used to actually process the input data just like in other works, but its depth and kernels are generated by the puppeteer module (realized with an Ordinary Differential Equation (ODE)) based on the input complexity each time. By recurrently generating kernel parameters in the puppet module, we can take advantage of the dependence among kernels of different convolutional layers to significantly reduce the size of the CNN model by only storing and training the parameters of the much smaller puppeteer ODE module. Through experiments on several datasets, our method has proven to be superior to traditional CNNs in both performance and efficiency. The model size can be reduced more than 10 times.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 509,579
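As a rough illustration of the Puppet-CNN idea above (kernel parameters produced by integrating an ODE, with depth tied to input complexity), a toy sketch follows. The dynamics function, state size, complexity-to-depth rule, and reshaping into kernels are invented placeholders and are not the paper's puppeteer module.

```python
import numpy as np

def euler_integrate(f, z0, steps, dt=0.1):
    """Integrate dz/dt = f(z) forward with Euler steps; return the state after each step."""
    states, z = [], z0
    for _ in range(steps):
        z = z + dt * f(z)
        states.append(z.copy())
    return states

def generate_kernels(complexity_score, kernel_shape=(3, 3, 8, 8)):
    """Puppeteer-style generation: more complex inputs get more integration steps,
    and each ODE state is reshaped into one convolutional layer's kernel."""
    rng = np.random.default_rng(0)
    dim = int(np.prod(kernel_shape))
    W = rng.normal(scale=0.1, size=(dim, dim))          # assumed toy dynamics matrix
    f = lambda z: np.tanh(W @ z)
    depth = max(1, int(round(complexity_score * 10)))    # placeholder complexity-to-depth rule
    z0 = rng.normal(size=dim)
    return [s.reshape(kernel_shape) for s in euler_integrate(f, z0, depth)]

kernels = generate_kernels(complexity_score=0.4)         # four layers of 3x3x8x8 kernels
```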
|
2203.07853
|
Concentration Properties of Random Codes
|
This paper studies the concentration properties of random codes. Specifically, we show that, for discrete memoryless channels, the error exponent of a randomly generated code with pairwise-independent codewords converges in probability to its expectation -- the typical error exponent. For high rates, the result is a consequence of the fact that the random-coding error exponent and the sphere-packing error exponent coincide. For low rates, instead, the convergence is based on the fact that the union bound accurately characterizes the probability of error. The paper also zooms into the behavior at asymptotically low rates and shows that the error exponent converges in distribution to a Gaussian-like distribution. Finally, we present several results on the convergence of the error probability and error exponent for generic ensembles and channels.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| 285,586
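For background on the exponents the abstract above refers to, the classical random-coding error exponent of a discrete memoryless channel $W(y \mid x)$ takes the Gallager form below; this is standard textbook material rather than a result of the paper.

```latex
E_r(R) = \max_{\rho \in [0,1]} \max_{Q} \bigl[ E_0(\rho, Q) - \rho R \bigr],
\qquad
E_0(\rho, Q) = -\log \sum_{y} \Bigl( \sum_{x} Q(x)\, W(y \mid x)^{\frac{1}{1+\rho}} \Bigr)^{1+\rho}
```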
|
2205.12407
|
Convolutional Neural Processes for Inpainting Satellite Images
|
The widespread availability of satellite images has allowed researchers to model complex systems such as disease dynamics. However, many satellite images have missing values due to measurement defects, which render them unusable without data imputation. For example, the scanline corrector for the LANDSAT 7 satellite broke down in 2003, resulting in a loss of around 20\% of its data. Inpainting involves predicting what is missing based on the known pixels and is an old problem in image processing, classically based on PDEs or interpolation methods, but recent deep learning approaches have shown promise. However, many of these methods do not explicitly take into account the inherent spatiotemporal structure of satellite images. In this work, we cast satellite image inpainting as a natural meta-learning problem, and propose using convolutional neural processes (ConvNPs) where we frame each satellite image as its own task or 2D regression problem. We show ConvNPs can outperform classical methods and state-of-the-art deep learning inpainting models on a scanline inpainting problem for LANDSAT 7 satellite images, assessed on a variety of in and out-of-distribution images.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 298,518
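As a small illustration of the framing described in the ConvNP abstract above (each masked satellite image treated as its own 2D regression task), the sketch below splits a scanline-masked image into observed context points and missing target points. It is a data-preparation toy under assumed array shapes, not the ConvNP model itself.

```python
import numpy as np

def to_regression_task(image, mask):
    """Turn one masked image into a 2D regression task.

    image : (H, W) array of pixel values
    mask  : (H, W) boolean array, True where the pixel was observed
    Returns (context_xy, context_v, target_xy); coordinates are scaled to [0, 1].
    """
    H, W = image.shape
    ys, xs = np.mgrid[0:H, 0:W]
    coords = np.stack([ys / (H - 1), xs / (W - 1)], axis=-1)
    context_xy = coords[mask]
    context_v = image[mask]
    target_xy = coords[~mask]      # the model predicts pixel values at these points
    return context_xy, context_v, target_xy

# Simulated scanline dropout: every 8th row is missing, loosely mimicking LANDSAT 7 gaps.
img = np.random.rand(64, 64)
scanline_mask = np.ones_like(img, dtype=bool)
scanline_mask[::8, :] = False
ctx_xy, ctx_v, tgt_xy = to_regression_task(img, scanline_mask)
```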
|
2401.09555
|
Improving Classification Performance With Human Feedback: Label a few,
we label the rest
|
In the realm of artificial intelligence, where a vast majority of data is unstructured, obtaining substantial amounts of labeled data to train supervised machine learning models poses a significant challenge. To address this, we delve into few-shot and active learning, where our goal is to improve AI models with human feedback on a few labeled examples. This paper focuses on understanding how a continuous feedback loop can refine models, thereby enhancing their accuracy, recall, and precision through incremental human input. By employing Large Language Models (LLMs) such as GPT-3.5, BERT, and SetFit, we aim to analyze the efficacy of using a limited number of labeled examples to substantially improve model accuracy. We benchmark this approach on the Financial Phrasebank, Banking, Craigslist, TREC, and Amazon Reviews datasets to show that with just a few labeled examples, we are able to surpass the accuracy of zero-shot large language models and provide enhanced text classification performance. We demonstrate that rather than needing to manually label millions of rows of data, we just need to label a few and the model can effectively predict the rest.
| false
| false
| false
| false
| true
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 422,303
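The feedback loop described in the abstract above (label a few examples, retrain, ask for a few more) is commonly realized as uncertainty-based active learning. A minimal sketch is given below; the TF-IDF + logistic-regression classifier, the margin-based sampling rule, and the `oracle_label` callable are placeholder assumptions, not the paper's pipeline, and the seed set is assumed to cover at least two classes.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def active_learning_loop(texts, oracle_label, seed_idx, rounds=5, per_round=10):
    """Iteratively ask a human oracle for a few labels and retrain.

    texts        : list of unlabeled documents
    oracle_label : callable(text) -> human-provided label (assumed)
    seed_idx     : indices labeled up front (must cover >= 2 classes)
    """
    vec = TfidfVectorizer().fit(texts)
    X = vec.transform(texts)
    labeled = {i: oracle_label(texts[i]) for i in seed_idx}

    for _ in range(rounds):
        idx = list(labeled)
        clf = LogisticRegression(max_iter=1000).fit(X[idx], [labeled[i] for i in idx])
        # Uncertainty sampling: query the examples the model is least sure about.
        proba = np.sort(clf.predict_proba(X), axis=1)
        margin = proba[:, -1] - proba[:, -2]
        candidates = [i for i in np.argsort(margin) if i not in labeled]
        for i in candidates[:per_round]:
            labeled[i] = oracle_label(texts[i])
    return clf, labeled
```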
|
2009.04253
|
Structured Equilibria for Dynamic Games with Asymmetric Information and
Dependent Types
|
We consider a dynamic game with asymmetric information where each player observes privately a noisy version of a (hidden) state of the world V, resulting in dependent private observations. We study structured perfect Bayesian equilibria that use private beliefs in their strategies as sufficient statistics for summarizing their observation history. The main difficulty in finding the appropriate sufficient statistic (state) for the structured strategies arises from the fact that players need to construct (private) beliefs on other players' private beliefs on V, which in turn would imply that an infinite hierarchy of beliefs on beliefs needs to be constructed, rendering the problem unsolvable. We show that this is not the case: each player's belief on other players' beliefs on V can be characterized by her own belief on V and some appropriately defined public belief. We then specialize this setting to the case of a Linear Quadratic Gaussian (LQG) non-zero-sum game and we characterize linear structured PBE that can be found through a backward/forward algorithm akin to dynamic programming for the standard LQG control problem. Unlike the standard LQG problem, however, some of the required quantities for the Kalman filter are observation-dependent and thus cannot be evaluated off-line through a forward recursion.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| 194,998
|
2405.20721
|
ContextGS: Compact 3D Gaussian Splatting with Anchor Level Context Model
|
Recently, 3D Gaussian Splatting (3DGS) has become a promising framework for novel view synthesis, offering fast rendering speeds and high fidelity. However, the large number of Gaussians and their associated attributes require effective compression techniques. Existing methods primarily compress neural Gaussians individually and independently, i.e., coding all the neural Gaussians at the same time, with little design for their interactions and spatial dependence. Inspired by the effectiveness of the context model in image compression, we propose the first autoregressive model at the anchor level for 3DGS compression in this work. We divide anchors into different levels and the anchors that are not coded yet can be predicted based on the already coded ones in all the coarser levels, leading to more accurate modeling and higher coding efficiency. To further improve the efficiency of entropy coding, e.g., to code the coarsest level with no already coded anchors, we propose to introduce a low-dimensional quantized feature as the hyperprior for each anchor, which can be effectively compressed. Our work pioneers the context model in the anchor level for 3DGS representation, yielding an impressive size reduction of over 100 times compared to vanilla 3DGS and 15 times compared to the most recent state-of-the-art work Scaffold-GS, while achieving comparable or even higher rendering quality.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 459,491
|
2411.10406
|
How to Build a Quantum Supercomputer: Scaling from Hundreds to Millions
of Qubits
|
In the span of four decades, quantum computation has evolved from an intellectual curiosity to a potentially realizable technology. Today, small-scale demonstrations have become possible for quantum algorithmic primitives on hundreds of physical qubits and proof-of-principle error-correction on a single logical qubit. Nevertheless, despite significant progress and excitement, the path toward a full-stack scalable technology is largely unknown. There are significant outstanding quantum hardware, fabrication, software architecture, and algorithmic challenges that are either unresolved or overlooked. These issues could seriously undermine the arrival of utility-scale quantum computers for the foreseeable future. Here, we provide a comprehensive review of these scaling challenges. We show how the road to scaling could be paved by adopting existing semiconductor technology to build much higher-quality qubits, employing system engineering approaches, and performing distributed quantum computation within heterogeneous high-performance computing infrastructures. These opportunities for research and development could unlock certain promising applications, in particular, efficient quantum simulation/learning of quantum data generated by natural or engineered quantum systems. To estimate the true cost of such promises, we provide a detailed resource and sensitivity analysis for classically hard quantum chemistry calculations on surface-code error-corrected quantum computers given current, target, and desired hardware specifications based on superconducting qubits, accounting for a realistic distribution of errors. Furthermore, we argue that, to tackle industry-scale classical optimization and machine learning problems in a cost-effective manner, heterogeneous quantum-probabilistic computing with custom-designed accelerators should be considered as a complementary path toward scalability.
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| 508,620
|
2308.07641
|
Ternary Singular Value Decomposition as a Better Parameterized Form in
Linear Mapping
|
We present a simple yet novel parameterized form of linear mapping that achieves remarkable network compression performance: a pseudo SVD called Ternary SVD (TSVD). Unlike vanilla SVD, TSVD limits the $U$ and $V$ matrices in SVD to ternary matrices with entries in $\{\pm 1, 0\}$. This means that instead of using the expensive multiplication instructions, TSVD only requires addition instructions when computing $U(\cdot)$ and $V(\cdot)$. We provide direct and training transition algorithms for TSVD, analogous to Post-Training Quantization and Quantization-Aware Training, respectively. Additionally, we analyze the convergence of the direct transition algorithms in theory. In experiments, we demonstrate that TSVD can achieve state-of-the-art network compression performance in various types of networks and tasks, including current baseline models such as ConvNeXt, Swin, BERT, and large language models like OPT.
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 385,594
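To make the multiply-free claim in the TSVD abstract concrete: if $U$ and $V$ have entries in $\{\pm 1, 0\}$, applying them needs only additions and subtractions, leaving a single diagonal scaling. The sketch below uses a naive sign-and-threshold ternarization as a stand-in; it is not the paper's direct or training transition algorithm.

```python
import numpy as np

def ternary_apply(T, x):
    """Apply a {-1, 0, +1} matrix to x using only additions and subtractions."""
    out = np.zeros(T.shape[0])
    for i in range(T.shape[0]):
        out[i] = x[T[i] == 1].sum() - x[T[i] == -1].sum()
    return out

def tsvd_like(W, rank, thresh=0.5):
    """Naive stand-in: ordinary SVD followed by per-factor sign/threshold ternarization."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U, s, Vt = U[:, :rank], s[:rank], Vt[:rank]
    Ut = (np.sign(U) * (np.abs(U) > thresh * np.abs(U).max(axis=0))).astype(int)
    Vtt = (np.sign(Vt) * (np.abs(Vt) > thresh * np.abs(Vt).max(axis=1, keepdims=True))).astype(int)
    return Ut, s, Vtt

# Multiply-free application: additions/subtractions plus one diagonal scaling by s.
W = np.random.randn(8, 6)
Ut, s, Vtt = tsvd_like(W, rank=4)
x = np.random.randn(6)
y = ternary_apply(Ut, s * ternary_apply(Vtt, x))
```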
|
2210.05709
|
Shapley Head Pruning: Identifying and Removing Interference in
Multilingual Transformers
|
Multilingual transformer-based models demonstrate remarkable zero and few-shot transfer across languages by learning and reusing language-agnostic features. However, as a fixed-size model acquires more languages, its performance across all languages degrades, a phenomenon termed interference. Often attributed to limited model capacity, interference is commonly addressed by adding additional parameters despite evidence that transformer-based models are overparameterized. In this work, we show that it is possible to reduce interference by instead identifying and pruning language-specific parameters. First, we use Shapley Values, a credit allocation metric from coalitional game theory, to identify attention heads that introduce interference. Then, we show that removing identified attention heads from a fixed model improves performance for a target language on both sentence classification and structural prediction, seeing gains as large as 24.7\%. Finally, we provide insights on language-agnostic and language-specific attention heads using attention visualization.
| false
| false
| false
| false
| false
| false
| true
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 322,965
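The Shapley-value attribution used in the abstract above is typically approximated by Monte Carlo permutation sampling. A generic sketch over a set of attention heads is shown below; `eval_with_heads` is an assumed callable returning task performance with only the given heads active, and none of this is the authors' code.

```python
import random

def shapley_heads(heads, eval_with_heads, n_perm=200):
    """Monte Carlo estimate of each attention head's Shapley value.

    heads           : list of head identifiers
    eval_with_heads : callable(set_of_heads) -> scalar task performance (assumed)
    """
    values = {h: 0.0 for h in heads}
    for _ in range(n_perm):
        perm = random.sample(heads, len(heads))
        active, prev = set(), eval_with_heads(set())
        for h in perm:
            active.add(h)
            score = eval_with_heads(active)
            values[h] += score - prev        # marginal contribution of head h
            prev = score
    return {h: v / n_perm for h, v in values.items()}

# Heads with negative estimated values are natural pruning candidates,
# mirroring the paper's idea of removing heads that introduce interference.
```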
|
2205.10683
|
Scalable and Efficient Training of Large Convolutional Neural Networks
with Differential Privacy
|
Large convolutional neural networks (CNN) can be difficult to train in the differentially private (DP) regime, since the optimization algorithms require a computationally expensive operation, known as the per-sample gradient clipping. We propose an efficient and scalable implementation of this clipping on convolutional layers, termed as the mixed ghost clipping, that significantly eases the private training in terms of both time and space complexities, without affecting the accuracy. The improvement in efficiency is rigorously studied through the first complexity analysis for the mixed ghost clipping and existing DP training algorithms. Extensive experiments on vision classification tasks, with large ResNet, VGG, and Vision Transformers, demonstrate that DP training with mixed ghost clipping adds $1\sim 10\%$ memory overhead and $<2\times$ slowdown to the standard non-private training. Specifically, when training VGG19 on CIFAR10, the mixed ghost clipping is $3\times$ faster than state-of-the-art Opacus library with $18\times$ larger maximum batch size. To emphasize the significance of efficient DP training on convolutional layers, we achieve 96.7\% accuracy on CIFAR10 and 83.0\% on CIFAR100 at $\epsilon=1$ using BEiT, while the previous best results are 94.8\% and 67.4\%, respectively. We open-source a privacy engine (\url{https://github.com/woodyx218/private_vision}) that implements DP training of CNN with a few lines of code.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| 297,814
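For readers unfamiliar with the clipping step the abstract above refers to, a naive per-sample clipping DP-SGD update is sketched below. This is the expensive baseline the paper improves on; mixed ghost clipping avoids materializing per-sample gradients. The `grad_fn` callable and flat-vector parameterization are simplifying assumptions.

```python
import numpy as np

def dp_sgd_step(params, batch, grad_fn, clip_norm=1.0, noise_mult=1.0, lr=0.1):
    """One DP-SGD step with naive per-sample gradient clipping.

    grad_fn(params, example) -> flat gradient vector for one example (assumed).
    """
    grads = np.stack([grad_fn(params, ex) for ex in batch])      # (B, D): the costly part
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    clipped = grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))
    noisy_sum = clipped.sum(axis=0) + np.random.normal(
        scale=noise_mult * clip_norm, size=clipped.shape[1])
    return params - lr * noisy_sum / len(batch)
```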
|
1906.04798
|
Table-Based Neural Units: Fully Quantizing Networks for Multiply-Free
Inference
|
In this work, we propose to quantize all parts of standard classification networks and replace the activation-weight multiply step with a simple table-based lookup. This approach results in networks that are free of floating-point operations and free of multiplications, suitable for direct FPGA and ASIC implementations. It also provides us with two simple measures of per-layer and network-wide compactness as well as insight into the distribution characteristics of activation-output and weight values. We run controlled studies across different quantization schemes, both fixed and adaptive, and, within the set of adaptive approaches, both parametric and model-free. We implement our approach to quantization with minimal, localized changes to the training process, allowing us to benefit from advances in training continuous-valued network architectures. We apply our approach successfully to AlexNet, ResNet, and MobileNet. We show results that are within 1.6% of the reported, non-quantized performance on MobileNet using only 40 entries in our table. This performance gap narrows to zero when we allow tables with 320 entries. Our results give the best accuracies among multiply-free networks.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 134,839
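A toy version of the table-based lookup idea from the abstract above: quantize activations and weights to small index sets, precompute all pairwise products once, and replace each multiply with an indexed read plus additions. The table size and uniform quantizer here are arbitrary placeholders, not the paper's scheme.

```python
import numpy as np

def build_tables(n_levels=16):
    """Quantization levels and the precomputed product table."""
    levels = np.linspace(-1.0, 1.0, n_levels)
    table = np.outer(levels, levels)            # table[i, j] = levels[i] * levels[j]
    return levels, table

def quantize(x, levels):
    """Map each value to the index of its nearest quantization level."""
    return np.abs(x[:, None] - levels[None, :]).argmin(axis=1)

def lookup_dot(a_idx, w_idx, table):
    """Dot product computed with table lookups and additions only."""
    return table[a_idx, w_idx].sum()

levels, table = build_tables()
a = np.clip(np.random.randn(64), -1, 1)
w = np.clip(np.random.randn(64), -1, 1)
approx = lookup_dot(quantize(a, levels), quantize(w, levels), table)
```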
|
2210.17520
|
Fully Adaptive Composition for Gaussian Differential Privacy
|
We show that Gaussian Differential Privacy, a variant of differential privacy tailored to the analysis of Gaussian noise addition, composes gracefully even in the presence of a fully adaptive analyst. Such an analyst selects mechanisms (to be run on a sensitive data set) and their privacy budgets adaptively, that is, based on the answers from other mechanisms run previously on the same data set. In the language of Rogers, Roth, Ullman and Vadhan, this gives a filter for GDP with the same parameters as for nonadaptive composition.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 327,719
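For context on the record above, the composition property of Gaussian differential privacy that the paper extends to the fully adaptive setting takes, in the non-adaptive case, the well-known form due to Dong, Roth, and Su: composing $n$ mechanisms that are $\mu_i$-GDP yields a mechanism that is $\mu_{\mathrm{total}}$-GDP with

```latex
\mu_{\mathrm{total}} = \sqrt{\mu_1^2 + \mu_2^2 + \cdots + \mu_n^2}
```

and the abstract states that the same parameters remain valid when the mechanisms and budgets are chosen adaptively, via a filter in the sense of Rogers, Roth, Ullman and Vadhan.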
|
2407.19271
|
Sewer Image Super-Resolution with Depth Priors and Its Lightweight
Network
|
The Quick-view (QV) technique serves as a primary method for detecting defects within sewerage systems. However, the effectiveness of QV is impeded by the limited visual range of its hardware, resulting in suboptimal image quality for distant portions of the sewer network. Image super-resolution is an effective way to improve image quality and has been applied in a variety of scenes. However, research on super-resolution for sewer images remains largely unexplored. In response, this study leverages the inherent depth relationships present within QV images and introduces a novel Depth-guided, Reference-based Super-Resolution framework denoted as DSRNet. It comprises two core components: a depth extraction module and a depth information matching module (DMM). DSRNet utilizes the adjacent frames of the low-resolution image as reference images and helps it recover texture information based on the correlation between frames. By combining these modules, the integration of depth priors significantly enhances both visual quality and performance benchmarks. Besides, in pursuit of computational efficiency and compactness, a super-resolution knowledge distillation model based on an attention mechanism is introduced. This mechanism facilitates the acquisition of feature similarity between a more complex teacher model and a streamlined student model, with the latter being a lightweight version of DSRNet. Experimental results demonstrate that DSRNet significantly improves PSNR and SSIM compared with other methods. This study also conducts experiments on sewer defect semantic segmentation, object detection, and classification on the Pipe dataset and Sewer-ML dataset. Experiments show that the method can improve the performance of low-resolution sewer images in these tasks.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 476,712
|
2305.09293
|
Out-of-Distribution Detection for Adaptive Computer Vision
|
It is well known that computer vision can be unreliable when faced with previously unseen imaging conditions. This paper proposes a method to adapt camera parameters according to a normalizing flow-based out-of-distribution detector. A small-scale study is conducted which shows that adapting camera parameters according to this out-of-distribution detector leads to an average increase of 3 to 4 percentage points in mAP, mAR and F1 performance metrics of a YOLOv4 object detector. As a secondary result, this paper also shows that it is possible to train a normalizing flow model for out-of-distribution detection on the COCO dataset, which is larger and more diverse than most benchmarks for out-of-distribution detectors.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| 364,589
|
2404.19597
|
TuBA: Cross-Lingual Transferability of Backdoor Attacks in LLMs with
Instruction Tuning
|
The implications of backdoor attacks on English-centric large language models (LLMs) have been widely examined - such attacks can be achieved by embedding malicious behaviors during training and activated under specific conditions that trigger malicious outputs. Despite the increasing support for multilingual capabilities in open-source and proprietary LLMs, the impact of backdoor attacks on these systems remains largely under-explored. Our research focuses on cross-lingual backdoor attacks against multilingual LLMs, particularly investigating how poisoning the instruction-tuning data for one or two languages can affect the outputs for languages whose instruction-tuning data were not poisoned. Despite its simplicity, our empirical analysis reveals that our method exhibits remarkable efficacy in models like mT5 and GPT-4o, with high attack success rates, surpassing 90% in more than 7 out of 12 languages across various scenarios. Our findings also indicate that more powerful models show increased susceptibility to transferable cross-lingual backdoor attacks, which also applies to LLMs predominantly pre-trained on English data, such as Llama2, Llama3, and Gemma. Moreover, our experiments demonstrate 1) High Transferability: the backdoor mechanism operates successfully in cross-lingual response scenarios across 26 languages, achieving an average attack success rate of 99%, and 2) Robustness: the proposed attack remains effective even after defenses are applied. These findings expose critical security vulnerabilities in multilingual LLMs and highlight the urgent need for more robust, targeted defense strategies to address the unique challenges posed by cross-lingual backdoor transfer.
| false
| false
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| true
| false
| false
| false
| false
| false
| 450,705
|
2302.06601
|
Multiple Instance Learning with Trainable Decision Tree Ensembles
|
A new random forest-based model for solving the Multiple Instance Learning (MIL) problem on small tabular data, called Soft Tree Ensemble MIL (STE-MIL), is proposed. A new type of soft decision trees is considered, which is similar to the well-known soft oblique trees, but with a smaller number of trainable parameters. In order to train the trees, it is proposed to convert them into neural networks of a specific form, which approximate the tree functions. It is also proposed to aggregate the instance and bag embeddings (output vectors) by using the attention mechanism. The whole STE-MIL model, including soft decision trees, neural networks, the attention mechanism and a classifier, is trained in an end-to-end manner. Numerical experiments with tabular datasets illustrate STE-MIL. The corresponding code implementing the model is publicly available.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 345,471
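The attention-based aggregation of instance embeddings mentioned in the STE-MIL abstract is commonly implemented along the lines of attention-based MIL pooling (Ilse et al.). A generic NumPy sketch, not the authors' implementation, is shown below; the parameter shapes are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def attention_mil_pool(instance_emb, V, w):
    """Aggregate instance embeddings (n, d) into one bag embedding (d,).

    V : (k, d) and w : (k,) are trainable attention parameters (placeholders here).
    """
    scores = np.tanh(instance_emb @ V.T) @ w      # (n,) unnormalized attention scores
    alpha = softmax(scores)                        # instance weights summing to 1
    return alpha @ instance_emb                    # weighted sum = bag embedding

rng = np.random.default_rng(0)
bag = rng.normal(size=(5, 8))                      # 5 instances, 8-dim embeddings
V, w = rng.normal(size=(4, 8)), rng.normal(size=4)
bag_embedding = attention_mil_pool(bag, V, w)
```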
|
2104.01241
|
TreeToaster: Towards an IVM-Optimized Compiler
|
A compiler's optimizer operates over abstract syntax trees (ASTs), continuously applying rewrite rules to replace subtrees of the AST with more efficient ones. Especially on large source repositories, even simply finding opportunities for a rewrite can be expensive, as the optimizer traverses the AST naively. In this paper, we leverage the need to repeatedly find rewrites, and explore options for making the search faster through indexing and incremental view maintenance (IVM). Concretely, we consider bolt-on approaches that make use of embedded IVM systems like DBToaster, as well as two new approaches: Label-indexing and TreeToaster, an AST-specialized form of IVM. We integrate these approaches into an existing just-in-time data structure compiler and show experimentally that TreeToaster can significantly improve performance with minimal memory overheads.
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| true
| true
| 228,275
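To illustrate what label-indexing of rewrite opportunities could look like in practice for the record above, a minimal sketch follows; the node class, rule interface, and maintenance logic are simplified assumptions, not TreeToaster's design.

```python
from collections import defaultdict

class Node:
    def __init__(self, label, children=()):
        self.label, self.children = label, list(children)

def build_label_index(root):
    """Map node label -> nodes, so rules keyed on a root label can find
    candidate subtrees without re-traversing the whole AST."""
    index = defaultdict(list)
    stack = [root]
    while stack:
        node = stack.pop()
        index[node.label].append(node)
        stack.extend(node.children)
    return index

def find_candidates(index, rule_root_label):
    return index.get(rule_root_label, [])

# After a rewrite replaces a subtree, the index is patched incrementally
# (remove the old subtree's nodes, insert the new ones) rather than rebuilt,
# which is the incremental-view-maintenance flavor of the approach.
ast = Node("Project", [Node("Select", [Node("Scan")])])
idx = build_label_index(ast)
selects = find_candidates(idx, "Select")
```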
|
2502.13777
|
Herglotz-NET: Implicit Neural Representation of Spherical Data with
Harmonic Positional Encoding
|
Representing and processing data in spherical domains presents unique challenges, primarily due to the curvature of the domain, which complicates the application of classical Euclidean techniques. Implicit neural representations (INRs) have emerged as a promising alternative for high-fidelity data representation; however, to effectively handle spherical domains, these methods must be adapted to the inherent geometry of the sphere to maintain both accuracy and stability. In this context, we propose Herglotz-NET (HNET), a novel INR architecture that employs a harmonic positional encoding based on complex Herglotz mappings. This encoding yields a well-posed representation on the sphere with interpretable and robust spectral properties. Moreover, we present a unified expressivity analysis showing that any spherical-based INR satisfying a mild condition exhibits a predictable spectral expansion that scales with network depth. Our results establish HNET as a scalable and flexible framework for accurate modeling of spherical data.
| false
| false
| false
| false
| false
| false
| true
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| false
| 535,514
|