id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2312.12288 | New Qutrit Codes from Pure and Bordered Multidimensional Circulant Construction | We use a multidimensional circulant approach to construct new qutrit stabilizer $[[\ell, 0, d]]$ codes with parameters $(\ell, d) \in \{(51, 16), (52, 16), (54, 17), (55, 17), (57, 17)\}$ through symplectic self-dual additive codes over $\mathbb{F}_9$. In addition to these five new codes, we use a bordered construction to derive two more qutrit codes with parameters $(\ell, d) \in \{(53, 16), (56, 17)\}$ that improve upon the previously best known parameters. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 416,904 |
2209.05251 | Spotting Virus from Satellites: Modeling the Circulation of West Nile Virus Through Graph Neural Networks | West Nile Virus (WNV) is one of the most common mosquito-borne zoonotic viral infections. Its circulation is usually associated with climatic and environmental conditions suitable for vector proliferation and virus replication. Against this backdrop, several statistical models have been developed to model and forecast WNV circulation: in particular, the recent massive availability of Earth Observation (EO) data, coupled with the continuous advances in the field of Artificial Intelligence, offers valuable opportunities. In this paper, we seek to predict WNV circulation by feeding Deep Neural Networks (DNNs) with satellite images, which have been extensively shown to capture environmental and climatic features. Notably, while previous approaches analyze each geographical site independently, we propose a spatial-aware approach that also considers the characteristics of nearby sites. Specifically, we build upon Graph Neural Networks (GNN) to aggregate features from neighbouring places, and further extend these modules to consider multiple relations, such as the difference in temperature and soil moisture between two sites, as well as the geographical distance. Moreover, we inject time-related information directly into the model to take into account the seasonality of virus spread. We design an experimental setting that combines satellite images - from Landsat and Sentinel missions - with ground truth observations of WNV circulation in Italy. We show that our proposed Multi-Adjacency Graph Attention Network (MAGAT) consistently leads to higher performance when paired with an appropriate pre-training stage. Finally, we assess the importance of each component of MAGAT in our ablation studies. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 317,038 |
1402.5380 | Godseed: Benevolent or Malevolent? | It is hypothesized by some thinkers that benign looking AI objectives may result in powerful AI drives that may pose an existential risk to human society. We analyze this scenario and find the underlying assumptions to be unlikely. We examine the alternative scenario of what happens when universal goals that are not human-centric are used for designing AI agents. We follow a design approach that tries to exclude malevolent motivations from AI agents; however, we see that objectives that seem benevolent may pose significant risk. We consider the following meta-rules: preserve and pervade life and culture, maximize the number of free minds, maximize intelligence, maximize wisdom, maximize energy production, behave like human, seek pleasure, accelerate evolution, survive, maximize control, and maximize capital. We also discuss various solution approaches for benevolent behavior including selfless goals, hybrid designs, Darwinism, universal constraints, semi-autonomy, and generalization of robot laws. A "prime directive" for AI may help in formulating an encompassing constraint for avoiding malicious behavior. We hypothesize that social instincts for autonomous robots, such as attachment learning, may be effective. We mention multiple beneficial scenarios for an advanced semi-autonomous AGI agent in the near future including space exploration, automation of industries, state functions, and cities. We conclude that a beneficial AI agent with intelligence beyond human-level is possible and has many practical use cases. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 31,053 |
2409.15349 | Damage detection in an uncertain nonlinear beam based on stochastic Volterra series | The damage detection problem in mechanical systems, using vibration measurements, is commonly called Structural Health Monitoring (SHM). Many tools are able to detect damage from changes in the vibration pattern, mainly when damage induces nonlinear behavior. However, a more difficult problem is to detect structural variation associated with damage when the mechanical system has nonlinear behavior even in the reference condition. In these cases, more sophisticated methods are required to detect whether the changes in the response stem from structural variation or from changes in the vibration regime, because both can generate nonlinearities. Among the many ways to solve this problem, the use of the Volterra series has several favorable points, because it is a generalization of the linear convolution, allowing the separation of linear and nonlinear contributions by input filtering through the Volterra kernels. On the other hand, the presence of uncertainties in mechanical systems, due to noise, geometric imperfections, manufacturing irregularities, environmental conditions, and other factors, can also change the responses, making the damage detection procedure more difficult. An approach based on a stochastic version of the Volterra series is proposed for detecting a breathing crack in a beam vibrating in a nonlinear regime of motion, even in the reference condition (without crack). The system uncertainties are simulated by the variation imposed on the linear stiffness and damping coefficient. The results show that the nonlinear analysis, which considers the higher-order Volterra kernels, allows the approach to detect the crack at a small propagation stage with probabilistic confidence, even in the presence of uncertainties. | false | true | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 490,878 |
2212.04708 | PATO: Policy Assisted TeleOperation for Scalable Robot Data Collection | Large-scale data is an essential component of machine learning as demonstrated in recent advances in natural language processing and computer vision research. However, collecting large-scale robotic data is much more expensive and slower as each operator can control only a single robot at a time. To make this costly data collection process efficient and scalable, we propose Policy Assisted TeleOperation (PATO), a system which automates part of the demonstration collection process using a learned assistive policy. PATO autonomously executes repetitive behaviors in data collection and asks for human input only when it is uncertain about which subtask or behavior to execute. We conduct teleoperation user studies both with a real robot and a simulated robot fleet and demonstrate that our assisted teleoperation system reduces human operators' mental load while improving data collection efficiency. Further, it enables a single operator to control multiple robots in parallel, which is a first step towards scalable robotic data collection. For code and video results, see https://clvrai.com/pato | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 335,549 |
2410.08601 | StraGo: Harnessing Strategic Guidance for Prompt Optimization | Prompt engineering is pivotal for harnessing the capabilities of large language models (LLMs) across diverse applications. While existing prompt optimization methods improve prompt effectiveness, they often lead to prompt drifting, where newly generated prompts can adversely impact previously successful cases while addressing failures. Furthermore, these methods tend to rely heavily on LLMs' intrinsic capabilities for prompt optimization tasks. In this paper, we introduce StraGo (Strategic-Guided Optimization), a novel approach designed to mitigate prompt drifting by leveraging insights from both successful and failed cases to identify critical factors for achieving optimization objectives. StraGo employs a how-to-do methodology, integrating in-context learning to formulate specific, actionable strategies that provide detailed, step-by-step guidance for prompt optimization. Extensive experiments conducted across a range of tasks, including reasoning, natural language understanding, domain-specific knowledge, and industrial applications, demonstrate StraGo's superior performance. It establishes a new state-of-the-art in prompt optimization, showcasing its ability to deliver stable and effective prompt improvements. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 497,188 |
2210.08008 | Inductive Logical Query Answering in Knowledge Graphs | Formulating and answering logical queries is a standard communication interface for knowledge graphs (KGs). Alleviating the notorious incompleteness of real-world KGs, neural methods achieved impressive results in link prediction and complex query answering tasks by learning representations of entities, relations, and queries. Still, most existing query answering methods rely on transductive entity embeddings and cannot generalize to KGs containing new entities without retraining the entity embeddings. In this work, we study the inductive query answering task where inference is performed on a graph containing new entities with queries over both seen and unseen entities. To this end, we devise two mechanisms leveraging inductive node and relational structure representations powered by graph neural networks (GNNs). Experimentally, we show that inductive models are able to perform logical reasoning at inference time over unseen nodes generalizing to graphs up to 500% larger than training ones. Exploring the efficiency--effectiveness trade-off, we find the inductive relational structure representation method generally achieves higher performance, while the inductive node representation method is able to answer complex queries in the inference-only regime without any training on queries and scales to graphs of millions of nodes. Code is available at https://github.com/DeepGraphLearning/InductiveQE. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,946 |
2211.02332 | Once-for-All Sequence Compression for Self-Supervised Speech Models | The sequence length along the time axis is often the dominant factor of the computation in speech processing. Works have been proposed to reduce the sequence length to lower the computational cost in self-supervised speech models. However, different downstream tasks have different tolerances for sequence compression, so a model that produces a fixed compression rate may not fit all tasks. In this work, we introduce a once-for-all (OFA) sequence compression framework for self-supervised speech models that supports a continuous range of operating compression rates. The framework is evaluated on various tasks, showing marginal degradation compared to the fixed-compression-rate variants with a smooth performance-efficiency trade-off. We further explore adaptive compression rate learning, demonstrating the ability to select task-specific preferred frame periods without needing a grid search. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 328,551 |
0705.1390 | Machine and Component Residual Life Estimation through the Application of Neural Networks | This paper concerns the use of neural networks for predicting the residual life of machines and components. In addition, the advantage of using condition-monitoring data to enhance the predictive capability of these neural networks was also investigated. A number of neural network variations were trained and tested with the data of two different reliability-related datasets. The first dataset represents the renewal case where the failed unit is repaired and restored to a good-as-new condition. Data was collected in the laboratory by subjecting a series of similar test pieces to fatigue loading with a hydraulic actuator. The average prediction error of the various neural networks being compared varied from 431 to 841 seconds on this dataset, where test pieces had a characteristic life of 8,971 seconds. The second dataset was collected from a group of pumps used to circulate a water and magnetite solution within a plant. The data therefore originated from a repaired system affected by reliability degradation. When optimized, the multi-layer perceptron neural networks trained with the Levenberg-Marquardt algorithm and the general regression neural network produced a sum-of-squares error within 11.1% of each other. The potential for using neural networks for residual life prediction and the advantage of incorporating condition-based data into the model were proven for both examples. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 211 |
0805.0459 | Phase transition in SONFIS&SORST | In this study, we introduce the general frame of MAny Connected Intelligent Particles Systems (MACIPS). Connections and interconnections between particles give rise to complex behavior in such an otherwise simple system (a system within a system). Contributions of natural computing, under information granulation theory, are the main topics of this spacious skeleton. Following this clue, we organize two algorithms involving a few prominent intelligent computing and approximate reasoning methods: self-organizing feature map (SOM), Neuro-Fuzzy Inference System, and Rough Set Theory (RST). Building on this, we show how our algorithms can be taken as a model of government-society interaction, where the government adopts various modes of behavior: solid (absolute) or flexible. The transition of such a society from order to disorder, driven by changes in connectivity parameters (noise), is then inferred. In addition, one may find an indirect mapping between financial systems and eventual market fluctuations with MACIPS. Keywords: phase transition, SONFIS, SORST, many connected intelligent particles system, society-government interaction | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 1,711 |
0909.3444 | Analyse en dépendances à l'aide des grammaires d'interaction | This article proposes a method to extract dependency structures from phrase-structure level parsing with Interaction Grammars. Interaction Grammars are a formalism which expresses interactions among words using a polarity system. Syntactical composition is led by the saturation of polarities. Interactions take place between constituents, but as grammars are lexicalized, these interactions can be translated at the level of words. Dependency relations are extracted from the parsing process: every dependency is the consequence of a polarity saturation. The dependency relations we obtain can be seen as a refinement of the usual dependency tree. Generally speaking, this work sheds new light on links between phrase structure and dependency parsing. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 4,526 |
2402.00137 | Multimodal Neurodegenerative Disease Subtyping Explained by ChatGPT | Alzheimer's disease (AD) is the most prevalent neurodegenerative disease; yet its currently available treatments are limited to stopping disease progression. Moreover, the effectiveness of these treatments is not guaranteed due to the heterogeneity of the disease. Therefore, it is essential to be able to identify the disease subtypes at a very early stage. Current data-driven approaches are able to classify the subtypes at later stages of AD or related disorders, but struggle when predicting at the asymptomatic or prodromal stage. Moreover, most existing models either lack explainability behind the classification or only use a single modality for the assessment, limiting the scope of the analysis. Thus, we propose a multimodal framework that uses early-stage indicators such as imaging, genetics, and clinical assessments to classify AD patients into subtypes at early stages. In addition, we build prompts and use large language models, such as ChatGPT, to interpret the findings of our model. In our framework, we propose a tri-modal co-attention mechanism (Tri-COAT) to explicitly learn the cross-modal feature associations. Our proposed model outperforms baseline models and provides insight into key cross-modal feature associations supported by known biological mechanisms. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 425,496 |
2407.16847 | SPLAT: A framework for optimised GPU code-generation for SParse reguLar ATtention | Multi-head-self-attention (MHSA) mechanisms achieve state-of-the-art (SOTA) performance across natural language processing and vision tasks. However, their quadratic dependence on sequence lengths has bottlenecked inference speeds. To circumvent this bottleneck, researchers have proposed various sparse-MHSA models, where a subset of full attention is computed. Despite their promise, current sparse libraries and compilers do not support high-performance implementations for diverse sparse-MHSA patterns due to the underlying sparse formats they operate on. These formats, which are typically designed for high-performance & scientific computing applications, are either curated for extreme amounts of random sparsity (<1% non-zero values), or specific sparsity patterns. However, the sparsity patterns in sparse-MHSA are moderately sparse (10-50% non-zero values) and varied, resulting in existing sparse-formats trading off generality for performance. We bridge this gap, achieving both generality and performance, by proposing a novel sparse format: affine-compressed-sparse-row (ACSR) and supporting code-generation scheme, SPLAT, that generates high-performance implementations for diverse sparse-MHSA patterns on GPUs. Core to our proposed format and code generation algorithm is the observation that common sparse-MHSA patterns have uniquely regular geometric properties. These properties, which can be analyzed just-in-time, expose novel optimizations and tiling strategies that SPLAT exploits to generate high-performance implementations for diverse patterns. To demonstrate SPLAT's efficacy, we use it to generate code for various sparse-MHSA models, achieving geomean speedups of 2.05x and 4.05x over hand-written kernels written in triton and TVM respectively on A100 GPUs. Moreover, its interfaces are intuitive and easy to use with existing implementations of MHSA in JAX. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 475,745 |
2309.15286 | Composable Coresets for Determinant Maximization: Greedy is Almost Optimal | Given a set of $n$ vectors in $\mathbb{R}^d$, the goal of the \emph{determinant maximization} problem is to pick $k$ vectors with the maximum volume. Determinant maximization is the MAP-inference task for determinantal point processes (DPP) and has recently received considerable attention for modeling diversity. As most applications for the problem use large amounts of data, this problem has been studied in the relevant \textit{composable coreset} setting. In particular, [Indyk-Mahabadi-OveisGharan-Rezaei--SODA'20, ICML'19] showed that one can get composable coresets with optimal approximation factor of $\tilde O(k)^k$ for the problem, and that a local search algorithm achieves an almost optimal approximation guarantee of $O(k)^{2k}$. In this work, we show that the widely-used Greedy algorithm also provides composable coresets with an almost optimal approximation factor of $O(k)^{3k}$, which improves over the previously known guarantee of $C^{k^2}$, and supports the prior experimental results showing the practicality of the greedy algorithm as a coreset. Our main result follows by showing a local optimality property for Greedy: swapping a single point from the greedy solution with a vector that was not picked by the greedy algorithm can increase the volume by a factor of at most $(1+\sqrt{k})$. This is tight up to the additive constant $1$. Finally, our experiments show that the local optimality of the greedy algorithm is even lower than the theoretical bound on real data sets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 394,913 |
2501.13120 | Multilinguality in LLM-Designed Reward Functions for Restless Bandits: Effects on Task Performance and Fairness | Restless Multi-Armed Bandits (RMABs) have been successfully applied to resource allocation problems in a variety of settings, including public health. With the rapid development of powerful large language models (LLMs), they are increasingly used to design reward functions to better match human preferences. Recent work has shown that LLMs can be used to tailor automated allocation decisions to community needs using language prompts. However, this has been studied primarily for English prompts and with a focus on task performance only. This can be an issue since grassroots workers, especially in developing countries like India, prefer to work in local languages, some of which are low-resource. Further, given the nature of the problem, biases along population groups unintended by the user are also undesirable. In this work, we study the effects on both task performance and fairness when the DLM algorithm, a recent work on using LLMs to design reward functions for RMABs, is prompted with non-English language commands. Specifically, we run the model on a synthetic environment for various prompts translated into multiple languages. The prompts themselves vary in complexity. Our results show that the LLM-proposed reward functions are significantly better when prompted in English compared to other languages. We also find that the exact phrasing of the prompt impacts task performance. Further, as prompt complexity increases, performance worsens for all languages; however, it is more robust with English prompts than with lower-resource languages. On the fairness side, we find that low-resource languages and more complex prompts are both highly likely to create unfairness along unintended dimensions. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | true | false | false | false | 526,566 |
1804.09039 | Robust Decentralized Navigation of Multi-Agent Systems with Collision Avoidance and Connectivity Maintenance Using Model Predictive Controllers | This paper addresses the problem of navigation control of a general class of 2nd order uncertain nonlinear multi-agent systems in a bounded workspace, which is a subset of $\mathbb{R}^3$, with static obstacles. In particular, we propose a decentralized control protocol such that each agent reaches a predefined position at the workspace, while using local information based on a limited sensing radius. The proposed scheme guarantees that the initially connected agents remain always connected. In addition, by introducing certain distance constraints, we guarantee inter-agent collision avoidance as well as collision avoidance with the obstacles and the boundary of the workspace. The proposed controllers employ a class of Decentralized Nonlinear Model Predictive Controllers (DNMPC) under the presence of disturbances and uncertainties. Finally, simulation results verify the validity of the proposed framework. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 95,892 |
1711.04993 | Consistent distributed state estimation with global observability over sensor network | This paper studies the distributed state estimation problem for a class of discrete time-varying systems over sensor networks. Firstly, it is shown that a networked Kalman filter with optimal gain parameter is actually a centralized filter, since it requires each sensor to have global information which is usually forbidden in large networks. Then, a sub-optimal distributed Kalman filter (DKF) is proposed by employing the covariance intersection (CI) fusion strategy. It is proven that the proposed DKF is consistent, that is, an upper bound on the error covariance matrix can be provided by the filter in real time. The consistency also enables the design of adaptive CI weights for better filter precision. Furthermore, the boundedness of the covariance matrix and the convergence of the proposed filter are proven based on the strong connectivity of the directed network topology and the global observability which permits the sub-system with a local sensor's measurements to be unobservable. Meanwhile, to keep the covariance of the estimation error bounded, the proposed DKF does not require the system matrix to be nonsingular at each moment, which seems to be a necessary condition in the main DKF designs under global observability. Finally, simulation results of two examples show the effectiveness of the algorithm in the considered scenarios. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 84,476 |
2308.09046 | Fault Detection and Classification using Wavelet and ANN in DFIG and TCSC Connected Transmission Line | This paper presents fault detection and classification using Wavelet- and ANN-based methods in a DFIG-based series-compensated system. The state-of-the-art methods include Wavelet transform, Fourier transform, and Wavelet-neuro-fuzzy based systems for fault detection and classification. However, the accuracy of these state-of-the-art methods diminishes under variable conditions such as changes in wind speed, high-impedance faults, and changes in the series compensation level. Specifically, in Wavelet transform based methods, the threshold values need to be adapted to the variable field conditions. To solve this problem, this paper proposes a Wavelet-ANN based fault detection method where the Wavelet transform is used as an identifier and the ANN as a classifier for detecting various fault cases. The methodology remains effective under subsynchronous resonance (SSR) conditions. The proposed methodology is evaluated on various fault and non-fault cases generated on an IEEE first benchmark model under compensation levels varying from 20% to 55%, impedance faults, and wind velocities from 6 m/s to 10 m/s, using MATLAB/Simulink and the OPAL-RT (OP4510) real-time digital simulator, with an Arduino board whose I/O ports communicate with an external PC hosting the ANN model via the Arduino support package for MATLAB. The preliminary results are compared with the state-of-the-art fault detection method, and the proposed method shows robust performance under varying field conditions. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 386,132 |
2501.09274 | Large Language Model is Secretly a Protein Sequence Optimizer | We consider the protein sequence engineering problem, which aims to find protein sequences with high fitness levels, starting from a given wild-type sequence. Directed evolution has been a dominating paradigm in this field which has an iterative process to generate variants and select via experimental feedback. We demonstrate that large language models (LLMs), despite being trained on massive texts, are secretly protein sequence optimizers. With a directed evolutionary method, LLMs can perform protein engineering through Pareto and experiment-budget constrained optimization, demonstrating success on both synthetic and experimental fitness landscapes. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 525,074 |
1905.07799 | Adaptive Attention Span in Transformers | We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character-level language modeling, where we achieve state-of-the-art performance on text8 and enwik8 by using a maximum context of 8k characters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 131,329 |
2410.21014 | Informed Deep Abstaining Classifier: Investigating noise-robust training for diagnostic decision support systems | Image-based diagnostic decision support systems (DDSS) utilizing deep learning have the potential to optimize clinical workflows. However, developing DDSS requires extensive datasets with expert annotations and is therefore costly. Leveraging report contents from radiological databases with Natural Language Processing to annotate the corresponding image data promises to replace labor-intensive manual annotation. As mining "real world" databases can introduce label noise, noise-robust training losses are of great interest. However, current noise-robust losses do not consider noise estimations that can, for example, be derived based on the performance of the automatic label generator used. In this study, we expand the noise-robust Deep Abstaining Classifier (DAC) loss to an Informed Deep Abstaining Classifier (IDAC) loss by incorporating noise level estimations during training. Our findings demonstrate that IDAC enhances the noise robustness compared to DAC and several state-of-the-art loss functions. The results are obtained on various simulated noise levels using a public chest X-ray data set. These findings are reproduced on an in-house noisy data set, where labels were extracted from the clinical systems of the University Hospital Bonn by a text-based transformer. The IDAC can therefore be a valuable tool for researchers, companies or clinics aiming to develop accurate and reliable DDSS from routine clinical data. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 503,052 |
2408.14847 | Intraoperative Glioma Segmentation with YOLO + SAM for Improved Accuracy in Tumor Resection | Gliomas, a common type of malignant brain tumor, present significant surgical challenges due to their similarity to healthy tissue. Preoperative Magnetic Resonance Imaging (MRI) images are often ineffective during surgery due to factors such as brain shift, which alters the position of brain structures and tumors. This makes real-time intraoperative MRI (ioMRI) crucial, as it provides updated imaging that accounts for these shifts, ensuring more accurate tumor localization and safer resections. This paper presents a deep learning pipeline combining You Only Look Once Version 8 (YOLOv8) and Segment Anything Model Vision Transformer-base (SAM ViT-b) to enhance glioma detection and segmentation during ioMRI. Our model was trained using the Brain Tumor Segmentation 2021 (BraTS 2021) dataset, which includes standard magnetic resonance imaging (MRI) images, and noise-augmented MRI images that simulate ioMRI images. Noised MRI images are harder for a deep learning pipeline to segment, but they are more representative of surgical conditions. Achieving a Dice Similarity Coefficient (DICE) score of 0.79, our model performs comparably to state-of-the-art segmentation models tested on noiseless data. This performance demonstrates the model's potential to assist surgeons in maximizing tumor resection and improving surgical outcomes. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 483,704 |
2406.06518 | Data Augmentation for Multivariate Time Series Classification: An
Experimental Study | Our study investigates the impact of data augmentation on the performance of multivariate time series models, focusing on datasets from the UCR archive. Despite the limited size of these datasets, we achieved classification accuracy improvements in 10 out of 13 datasets using the Rocket and InceptionTime models. This highlights the essential role of sufficient data in training effective models, paralleling the advancements seen in computer vision. Our work delves into adapting and applying existing methods in innovative ways to the domain of multivariate time series classification. Our comprehensive exploration of these techniques sets a new standard for addressing data scarcity in time series analysis, emphasizing that diverse augmentation strategies are crucial for unlocking the potential of both traditional and deep learning models. Moreover, by meticulously analyzing and applying a variety of augmentation techniques, we demonstrate that strategic data enrichment can enhance model accuracy. This not only establishes a benchmark for future research in time series analysis but also underscores the importance of adopting varied augmentation approaches to improve model performance in the face of limited data availability. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 462,619 |
2310.16334 | Structured Multi-Track Accompaniment Arrangement via Style Prior
Modelling | In the realm of music AI, arranging rich and structured multi-track accompaniments from a simple lead sheet presents significant challenges. Such challenges include maintaining track cohesion, ensuring long-term coherence, and optimizing computational efficiency. In this paper, we introduce a novel system that leverages prior modelling over disentangled style factors to address these challenges. Our method presents a two-stage process: initially, a piano arrangement is derived from the lead sheet by retrieving piano texture styles; subsequently, a multi-track orchestration is generated by infusing orchestral function styles into the piano arrangement. Our key design is the use of vector quantization and a unique multi-stream Transformer to model the long-term flow of the orchestration style, which enables flexible, controllable, and structured music generation. Experiments show that by factorizing the arrangement task into interpretable sub-stages, our approach enhances generative capacity while improving efficiency. Additionally, our system supports a variety of music genres and provides style control at different composition hierarchies. We further show that our system achieves superior coherence, structure, and overall arrangement quality compared to existing baselines. | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 402,682 |
1705.09970 | Probabilistic Program Abstractions | Abstraction is a fundamental tool for reasoning about complex systems. Program abstraction has been utilized to great effect for analyzing deterministic programs. At the heart of program abstraction is the relationship between a concrete program, which is difficult to analyze, and an abstract program, which is more tractable. Program abstractions, however, are typically not probabilistic. We generalize non-deterministic program abstractions to probabilistic program abstractions by explicitly quantifying the non-deterministic choices. Our framework upgrades key definitions and properties of abstractions to the probabilistic context. We also discuss preliminary ideas for performing inference on probabilistic abstractions and general probabilistic programs. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 74,312 |
1701.00623 | Bottom-Up Evaluation of Datalog: Preliminary Report | Bottom-up evaluation of Datalog has been studied for a long time, and is standard material in textbooks. However, if one actually wants to develop a deductive database system, it turns out that there are many implementation options. For instance, the sequence in which rule instances are applied is not given. In this paper, we study a method that immediately uses a derived tuple to derive more tuples (called the Push method). In this way, storage space for intermediate results can be reduced. The main contribution of our method is the way in which we minimize the copying of values at runtime, and do much work already at compile-time. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | true | 66,296 |
1708.06521 | Strider-lsa: Massive RDF Stream Reasoning in the Cloud | Reasoning over semantically annotated data is an emerging trend in stream processing aiming to produce sound and complete answers to a set of continuous queries. It usually comes at the cost of finding a trade-off between data throughput and the cost of expressive inferences. Strider-lsa proposes such a trade-off and combines a scalable RDF stream processing engine with an efficient reasoning system. The main reasoning tasks are based on a query rewriting approach for SPARQL that benefits from an intelligent encoding of RDFS+ (RDFS + owl:sameAs) ontology elements. Strider-lsa runs in production at a major international water management company to detect anomalies from sensor streams. The system is evaluated along different dimensions and over multiple datasets to emphasize its performance. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 79,337 |
2108.07199 | Real-time Human-Centric Segmentation for Complex Video Scenes | Most existing video tasks related to "human" focus on the segmentation of salient humans, ignoring the unspecified others in the video. Few studies have focused on segmenting and tracking all humans in a complex video, including pedestrians and humans of other states (e.g., seated, riding, or occluded). In this paper, we propose a novel framework, abbreviated as HVISNet, that segments and tracks all presented people in given videos based on a one-stage detector. To better evaluate complex scenes, we offer a new benchmark called HVIS (Human Video Instance Segmentation), which comprises 1447 human instance masks in 805 high-resolution videos in diverse scenes. Extensive experiments show that our proposed HVISNet outperforms the state-of-the-art methods in terms of accuracy at a real-time inference speed (30 FPS), especially on complex video scenes. We also notice that using the center of the bounding box to distinguish different individuals severely deteriorates the segmentation accuracy, especially in heavily occluded conditions. This common phenomenon is referred to as the ambiguous positive samples problem. To alleviate this problem, we propose a mechanism named Inner Center Sampling to improve the accuracy of instance segmentation. Such a plug-and-play inner center sampling mechanism can be incorporated in any instance segmentation models based on a one-stage detector to improve the performance. In particular, it gains 4.1 mAP improvement on the state-of-the-art method in the case of occluded humans. Code and data are available at https://github.com/IIGROUP/HVISNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 250,856 |
2006.08465 | Neural Certificates for Safe Control Policies | This paper develops an approach to learn a policy of a dynamical system that is guaranteed to be both provably safe and goal-reaching. Here, the safety means that a policy must not drive the state of the system to any unsafe region, while the goal-reaching requires the trajectory of the controlled system asymptotically converges to a goal region (a generalization of stability). We obtain the safe and goal-reaching policy by jointly learning two additional certificate functions: a barrier function that guarantees the safety and a developed Lyapunov-like function to fulfill the goal-reaching requirement, both of which are represented by neural networks. We show the effectiveness of the method to learn both safe and goal-reaching policies on various systems, including pendulums, cart-poles, and UAVs. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 182,200 |
1810.12778 | Reinforcement Learning and Deep Learning based Lateral Control for
Autonomous Driving | This paper investigates vision-based autonomous driving with deep learning and reinforcement learning methods. Different from the end-to-end learning method, our method breaks the vision-based lateral control system down into a perception module and a control module. The perception module, which is based on a multi-task learning neural network, first takes a driver-view image as its input and predicts the track features. The control module, which is based on reinforcement learning, then makes a control decision based on these features. In order to improve data efficiency, we propose visual TORCS (VTORCS), a deep reinforcement learning environment based on the open racing car simulator (TORCS). By means of the provided functions, one can train an agent with the input of an image or various physical sensor measurements, or evaluate the perception algorithm on this simulator. The trained reinforcement learning controller outperforms the linear quadratic regulator (LQR) controller and model predictive control (MPC) controller on different tracks. The experiments demonstrate that the perception module shows promising performance and the controller is capable of driving the vehicle well along the track center with visual input. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 111,845 |
0909.1011 | Bits About the Channel: Multi-round Protocols for Two-way Fading
Channels | Most communication systems use some form of feedback, often related to channel state information. In this paper, we study diversity multiplexing tradeoff for both FDD and TDD systems, when both receiver and transmitter knowledge about the channel is noisy and potentially mismatched. For FDD systems, we first extend the achievable tradeoff region for 1.5 rounds of message passing to get higher diversity compared to the best known scheme, in the regime of higher multiplexing gains. We then break the mold of all current channel state based protocols by using multiple rounds of conferencing to extract more bits about the actual channel. This iterative refinement of the channel increases the diversity order with every round of communication. The protocols are on-demand in nature, using high powers for training and feedback only when the channel is in poor states. The key result is that the diversity multiplexing tradeoff with perfect training and K levels of perfect feedback can be achieved, even when there are errors in training the receiver and errors in the feedback link, with a multi-round protocol which has K rounds of training and K-1 rounds of binary feedback. The above result can be viewed as a generalization of Zheng and Tse, and Aggarwal and Sabharwal, where the result was shown to hold for K=1 and K=2 respectively. For TDD systems, we also develop new achievable strategies with multiple rounds of communication between the transmitter and the receiver, which use the reciprocity of the forward and the feedback channel. The multi-round TDD protocol achieves a diversity-multiplexing tradeoff which uniformly dominates its FDD counterparts, where no channel reciprocity is available. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,407 |
1301.6491 | SINR-based k-coverage probability in cellular networks with arbitrary
shadowing | We give numerically tractable, explicit integral expressions for the distribution of the signal-to-interference-and-noise-ratio (SINR) experienced by a typical user in the down-link channel from the k-th strongest base stations of a cellular network modelled by a Poisson point process on the plane. Our signal propagation-loss model comprises a power-law path-loss function with arbitrarily distributed shadowing, independent across all base stations, with and without Rayleigh fading. Our results are valid in the whole domain of SINR, in particular for SINR<1, where one observes multiple coverage. In this latter aspect our paper complements previous studies reported in [Dhillon et al. JSAC 2012]. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 21,449 |
2108.09271 | Multi-Server Private Linear Computation with Joint and Individual
Privacy Guarantees | This paper considers the problem of multi-server Private Linear Computation, under the joint and individual privacy guarantees. In this problem, identical copies of a dataset comprised of $K$ messages are stored on $N$ non-colluding servers, and a user wishes to obtain one linear combination of a $D$-subset of messages belonging to the dataset. The goal is to design a scheme for performing the computation such that the total amount of information downloaded from the servers is minimized, while the privacy of the $D$ messages required for the computation is protected. When joint privacy is required, the identities of all of these $D$ messages must be kept private jointly, and when individual privacy is required, the identity of every one of these $D$ messages must be kept private individually. In this work, we characterize the capacity, which is defined as the maximum achievable download rate, under both joint and individual privacy requirements. In particular, we show that when joint privacy is required the capacity is given by ${(1+1/N+\dots+1/N^{K-D})^{-1}}$, and when individual privacy is required the capacity is given by ${(1+1/N+\dots+1/N^{\lceil K/D\rceil-1})^{-1}}$ assuming that $D$ divides $K$, or $K\pmod D$ divides $D$. Our converse proofs are based on reduction from two variants of the multi-server Private Information Retrieval problem in the presence of side information. Our achievability schemes build up on our recently proposed schemes for single-server Private Linear Transformation and the multi-server private computation scheme proposed by Sun and Jafar. Using similar proof techniques, we also establish upper and lower bounds on the capacity for the cases in which the user wants to compute $L$ (potentially more than one) linear combinations. | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | 251,549 |
2307.12463 | Rethinking Data Distillation: Do Not Overlook Calibration | Neural networks trained on distilled data often produce over-confident output and require correction by calibration methods. Existing calibration methods such as temperature scaling and mixup work well for networks trained on original large-scale data. However, we find that these methods fail to calibrate networks trained on data distilled from large source datasets. In this paper, we show that distilled data lead to networks that are not calibratable due to (i) a more concentrated distribution of the maximum logits and (ii) the loss of information that is semantically meaningful but unrelated to classification tasks. To address this problem, we propose Masked Temperature Scaling (MTS) and Masked Distillation Training (MDT) which mitigate the limitations of distilled data and achieve better calibration results while maintaining the efficiency of dataset distillation. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 381,266 |
2308.09931 | TDG: Text-guided Domain Generalization | Domain generalization (DG) attempts to generalize a model trained on single or multiple source domains to the unseen target domain. Benefiting from the success of Visual-and-Language Pre-trained models in recent years, we argue that introducing extra text information is crucial for domain generalization. In this paper, we develop a novel Text-guided Domain Generalization (TDG) paradigm for domain generalization, which includes the following three aspects. Specifically, we first devise an automatic words generation method to extend the description of current domains with novel domain-relevant words. Then, we embed the generated domain information into the text feature space, by the proposed prompt learning-based text feature generation method, which shares a common representation space with the image feature. Finally, we utilize both input image features and generated text features to train a specially designed classifier that generalizes well on unseen target domains, while the image encoder is also updated under the supervision of gradients back propagated from the classifier. Our experimental results show that the techniques incorporated by TDG contribute to the performance in an easy-to-implement manner. Experimental results on several domain generalization benchmarks show that our proposed framework achieves superior performance by effectively leveraging generated text information in domain generalization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 386,491 |
1210.6488 | A New Identification Framework For Off-Line Computation of
Moving-Horizon Observers | In this paper, a new nonlinear identification framework is proposed to address the issue of off-line computation of moving-horizon observer estimate. The proposed structure merges the advantages of nonlinear approximators with the efficient computation of constrained quadratic programming problems. A bound on the estimation error is proposed and the efficiency of the resulting scheme is illustrated using two state estimation examples. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 19,370 |
1112.1143 | Mathematical model for hit phenomena as stochastic process of
interactions of human dynamics | A mathematical model for hit phenomena in entertainment in society is presented as a stochastic process of interactions of human dynamics. The model uses only the time distribution of the advertisement budget as input, and word-of-mouth (WOM) postings in social network systems are used as the data to compare with the calculated results. The unit of time is daily. The WOM distribution in time is found to be very close to the residue distribution in time. The calculations for the Japanese motion picture market due to the mathematical model agree very well with the actual residue distribution in time. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 13,327 |
2006.02816 | The Requirement Gatherers' Approach to the 2019 Multi-Agent Programming
Contest Scenario | The 2019 Multi-Agent Programming Contest (MAPC) scenario poses many challenges for agents participating in the contest. We discuss The Requirement Gatherers' (TRG) approach to handling the various challenges we faced -- including how we designed our system, how we went about debugging our agents, and the strategy we employed to each of our agents. We conclude the paper with remarks about the performance of our agents, and what we should have done differently. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 180,150 |
2411.00119 | Soft Condorcet Optimization for Ranking of General Agents | Driving progress of AI models and agents requires comparing their performance on standardized benchmarks; for general agents, individual performances must be aggregated across a potentially wide variety of different tasks. In this paper, we describe a novel ranking scheme inspired by social choice frameworks, called Soft Condorcet Optimization (SCO), to compute the optimal ranking of agents: the one that makes the fewest mistakes in predicting the agent comparisons in the evaluation data. This optimal ranking is the maximum likelihood estimate when evaluation data (which we view as votes) are interpreted as noisy samples from a ground truth ranking, a solution to Condorcet's original voting system criteria. SCO ratings are maximal for Condorcet winners when they exist, which we show is not necessarily true for the classical rating system Elo. We propose three optimization algorithms to compute SCO ratings and evaluate their empirical performance. When serving as an approximation to the Kemeny-Young voting method, SCO rankings are on average 0 to 0.043 away from the optimal ranking in normalized Kendall-tau distance across 865 preference profiles from the PrefLib open ranking archive. In a simulated noisy tournament setting, SCO achieves accurate approximations to the ground truth ranking and the best among several baselines when 59\% or more of the preference data is missing. Finally, SCO ranking provides the best approximation to the optimal ranking, measured on held-out test sets, in a problem containing 52,958 human players across 31,049 games of the classic seven-player game of Diplomacy. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | 504,439 |
2311.10785 | Text Sanitization Beyond Specific Domains: Zero-Shot Redaction &
Substitution with Large Language Models | In the context of information systems, text sanitization techniques are used to identify and remove sensitive data to comply with security and regulatory requirements. Even though many methods for privacy preservation have been proposed, most of them are focused on the detection of entities from specific domains (e.g., credit card numbers, social security numbers), lacking generality and requiring customization for each desirable domain. Moreover, removing words is, in general, a drastic measure, as it can degrade text coherence and contextual information. Less severe measures include substituting a word for a safe alternative, yet it can be challenging to automatically find meaningful substitutions. We present a zero-shot text sanitization technique that detects and substitutes potentially sensitive information using Large Language Models. Our evaluation shows that our method excels at protecting privacy while maintaining text coherence and contextual information, preserving data utility for downstream tasks. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 408,656 |
2109.08382 | To be Closer: Learning to Link up Aspects with Opinions | Dependency parse trees are helpful for discovering the opinion words in aspect-based sentiment analysis (ABSA). However, the trees obtained from off-the-shelf dependency parsers are static, and could be sub-optimal in ABSA. This is because the syntactic trees are not designed for capturing the interactions between opinion words and aspect words. In this work, we aim to shorten the distance between aspects and corresponding opinion words by learning an aspect-centric tree structure. The aspect and opinion words are expected to be closer along such tree structure compared to the standard dependency parse tree. The learning process allows the tree structure to adaptively correlate the aspect and opinion words, enabling us to better identify the polarity in the ABSA task. We conduct experiments on five aspect-based sentiment datasets, and the proposed model significantly outperforms recent strong baselines. Furthermore, our thorough analysis demonstrates that the average distance between aspect and opinion words is shortened by at least 19% on the standard SemEval Restaurant14 dataset. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 255,891 |
2311.10337 | Scalable Edge Clustering of Dynamic Graphs via Weighted Line Graphs | Timestamped relational datasets consisting of records between pairs of entities are ubiquitous in data and network science. For applications like peer-to-peer communication, email, social network interactions, and computer network security, it makes sense to organize these records into groups based on how and when they are occurring. Weighted line graphs offer a natural way to model how records are related in such datasets but for large real-world graph topologies the complexity of building and utilizing the line graph is prohibitive. We present an algorithm to cluster the edges of a dynamic graph via the associated line graph without forming it explicitly. We outline a novel hierarchical dynamic graph edge clustering approach that efficiently breaks massive relational datasets into small sets of edges containing events at various timescales. This is in stark contrast to traditional graph clustering algorithms that prioritize highly connected community structures. Our approach relies on constructing a sufficient subgraph of a weighted line graph and applying a hierarchical agglomerative clustering. This work draws particular inspiration from HDBSCAN. We present a parallel algorithm and show that it is able to break billion-scale dynamic graphs into small sets that correlate in topology and time. The entire clustering process for a graph with $O(10 \text{ billion})$ edges takes just a few minutes of run time on 256 nodes of a distributed compute environment. We argue how the output of the edge clustering is useful for a multitude of data visualization and powerful machine learning tasks, both involving the original massive dynamic graph data and/or the non-relational metadata. Finally, we demonstrate its use on a real-world large-scale directed dynamic graph and describe how it can be extended to dynamic hypergraphs and graphs with unstructured data living on vertices and edges. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 408,498 |
2402.18150 | Unsupervised Information Refinement Training of Large Language Models
for Retrieval-Augmented Generation | Retrieval-augmented generation (RAG) enhances large language models (LLMs) by incorporating additional information from retrieval. However, studies have shown that LLMs still face challenges in effectively using the retrieved information, even ignoring it or being misled by it. The key reason is that the training of LLMs does not clearly make LLMs learn how to utilize input retrieved texts with varied quality. In this paper, we propose a novel perspective that considers the role of LLMs in RAG as ``Information Refiner'', which means that regardless of correctness, completeness, or usefulness of retrieved texts, LLMs can consistently integrate knowledge within the retrieved texts and model parameters to generate the texts that are more concise, accurate, and complete than the retrieved texts. To this end, we propose an information refinement training method named InFO-RAG that optimizes LLMs for RAG in an unsupervised manner. InFO-RAG is low-cost and general across various tasks. Extensive experiments on zero-shot prediction of 11 datasets in diverse tasks including Question Answering, Slot-Filling, Language Modeling, Dialogue, and Code Generation show that InFO-RAG improves the performance of LLaMA2 by an average of 9.39\% relative points. InFO-RAG also shows advantages in in-context learning and robustness of RAG. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 433,310 |
2104.05135 | ON-OFF Privacy Against Correlation Over Time | We consider the problem of ON-OFF privacy in which a user is interested in the latest message generated by one of n sources available at a server. The user has the choice to turn privacy ON or OFF depending on whether he wants to hide his interest at the time or not. The challenge of allowing the privacy to be toggled between ON and OFF is that the user's online behavior is correlated over time. Therefore, the user cannot simply ignore the privacy requirement when privacy is OFF. We represent the user's correlated requests by an n-state Markov chain. Our goal is to design ON-OFF privacy schemes with optimal download rate that ensure privacy for past and future requests. We devise a polynomial-time algorithm to construct an ON-OFF privacy scheme. Moreover, we present an upper bound on the achievable rate. We show that the proposed scheme is optimal and the upper bound is tight for some special families of Markov chains. We also give an implicit characterization of the optimal achievable rate as a linear programming (LP). | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 229,610 |
2408.02907 | Leveraging Inter-Chunk Interactions for Enhanced Retrieval in Large
Language Model-Based Question Answering | Retrieving external knowledge and prompting large language models with relevant information is an effective paradigm to enhance the performance of question-answering tasks. Previous research typically handles paragraphs from external documents in isolation, resulting in a lack of context and ambiguous references, particularly in multi-document and complex tasks. To overcome these challenges, we propose a new retrieval framework IIER, that leverages Inter-chunk Interactions to Enhance Retrieval. This framework captures the internal connections between document chunks by considering three types of interactions: structural, keyword, and semantic. We then construct a unified Chunk-Interaction Graph to represent all external documents comprehensively. Additionally, we design a graph-based evidence chain retriever that utilizes previous paths and chunk interactions to guide the retrieval process. It identifies multiple seed nodes based on the target question and iteratively searches for relevant chunks to gather supporting evidence. This retrieval process refines the context and reasoning chain, aiding the large language model in reasoning and answer generation. Extensive experiments demonstrate that IIER outperforms strong baselines across four datasets, highlighting its effectiveness in improving retrieval and reasoning capabilities. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 478,812 |
1504.04428 | Optimal Dynamic Multicast Scheduling for Cache-Enabled Content-Centric
Wireless Networks | Caching and multicasting at base stations are two promising approaches to support massive content delivery over wireless networks. However, existing scheduling designs do not make full use of the advantages of the two approaches. In this paper, we consider the optimal dynamic multicast scheduling to jointly minimize the average delay, power, and fetching costs for cache-enabled content-centric wireless networks. We formulate this stochastic optimization problem as an infinite horizon average cost Markov decision process (MDP). It is well-known to be a difficult problem due to the curse of dimensionality, and there generally only exist numerical solutions. By using relative value iteration algorithm and the special structures of the request queue dynamics, we analyze the properties of the value function and the state-action cost function of the MDP for both the uniform and nonuniform channel cases. Based on these properties, we show that the optimal policy, which is adaptive to the request queue state, has a switch structure in the uniform case and a partial switch structure in the nonuniform case. Moreover, in the uniform case with two contents, we show that the switch curve is monotonically non-decreasing. Then, by exploiting these structural properties of the optimal policy, we propose two low-complexity optimal algorithms. Motivated by the switch structures of the optimal policy, to further reduce the complexity, we also propose a low-complexity suboptimal policy, which possesses similar structural properties to the optimal policy, and develop a low-complexity algorithm to compute this policy. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 42,140 |
1812.04831 | Weakly Supervised Instance Segmentation Using Hybrid Network | Weakly-supervised instance segmentation, which could greatly save the labor and time cost of pixel mask annotation, has attracted increasing attention in recent years. The commonly used pipeline first utilizes conventional image segmentation methods to automatically generate initial masks and then uses them to train an off-the-shelf segmentation network in an iterative way. However, the initially generated masks usually contain a notable proportion of invalid masks, mainly caused by small object instances. Directly using these initial masks to train a segmentation model is harmful to performance. To address this problem, we propose a hybrid network in this paper. In our architecture, a principal segmentation network handles the normal samples with valid generated masks. In addition, a complementary branch is added to handle the small and dim objects without valid masks. Experimental results indicate that our method achieves significant performance improvements on both small and large object instances, and outperforms all state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 116,297
1901.09036 | Orthogonal Statistical Learning | We provide non-asymptotic excess risk guarantees for statistical learning in a setting where the population risk with respect to which we evaluate the target parameter depends on an unknown nuisance parameter that must be estimated from data. We analyze a two-stage sample splitting meta-algorithm that takes as input arbitrary estimation algorithms for the target parameter and nuisance parameter. We show that if the population risk satisfies a condition called Neyman orthogonality, the impact of the nuisance estimation error on the excess risk bound achieved by the meta-algorithm is of second order. Our theorem is agnostic to the particular algorithms used for the target and nuisance and only makes an assumption on their individual performance. This enables the use of a plethora of existing results from machine learning to give new guarantees for learning with a nuisance component. Moreover, by focusing on excess risk rather than parameter estimation, we can provide rates under weaker assumptions than in previous works and accommodate settings in which the target parameter belongs to a complex nonparametric class. We provide conditions on the metric entropy of the nuisance and target classes such that oracle rates of the same order as if we knew the nuisance parameter are achieved. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 119,627 |
2209.06346 | Prediction of the outcome of a Twenty-20 Cricket Match: A Machine
Learning Approach | Twenty20 cricket, sometimes written Twenty-20 and often abbreviated to T20, is a short form of cricket. In a Twenty20 game the two teams of 11 players have a single innings each, which is restricted to a maximum of 20 overs. This version of cricket is especially unpredictable, which is one of the reasons it has gained popularity in recent times. In this paper we try four different machine learning approaches for predicting the results of T20 cricket matches. Specifically, we take into account: previous performance statistics of the players involved in the competing teams, ratings of players obtained from reputed cricket statistics websites, clustering of players with similar performance statistics, and a novel method using an ELO-based approach to rate players. We compare the performances of these feature engineering approaches using different ML algorithms, including logistic regression, support vector machines, Bayes networks, decision trees, and random forests. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 317,361
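The ELO-based player rating mentioned in this abstract is not specified further; below is a minimal sketch of the standard Elo update it presumably builds on. The K-factor of 32 and the 400-point logistic scale are conventional assumptions, not values from the paper.

```python
def expected_score(r_a, r_b):
    """Standard Elo expected score of player A against player B."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))

def elo_update(r_a, r_b, score_a, k=32):
    """Return updated ratings after one game.

    score_a is 1.0 for an A win, 0.5 for a draw, 0.0 for a loss.
    k (assumed here) controls how fast ratings move.
    """
    e_a = expected_score(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b
```

For two equally rated players, a win moves the winner up by k/2 and the loser down by the same amount, so total rating is conserved.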
1205.0085 | Spectrum Leasing via Cooperation for Enhanced Physical-Layer Secrecy | Spectrum leasing via cooperation refers to the possibility of primary users leasing a portion of the spectral resources to secondary users in exchange for cooperation. In the presence of an eavesdropper, this correspondence proposes a novel application of this concept in which the secondary cooperation aims at improving secrecy of the primary network by creating more interference to the eavesdropper than to the primary receiver. To generate the interference in a positive way, this work studies an optimal design of a beamformer at the secondary transmitter with multiple antennas that maximizes a secrecy rate of the primary network while satisfying a required rate for the secondary network. Moreover, we investigate two scenarios depending upon the operation of the eavesdropper: i) the eavesdropper treats the interference by the secondary transmission as an additive noise (single-user decoding) and ii) the eavesdropper tries to decode and remove the secondary signal (joint decoding). Numerical results confirm that, for a wide range of required secondary rate constraints, the proposed spectrum-leasing strategy increases the secrecy rate of the primary network compared to the case of no spectrum leasing. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,746 |
2111.06437 | Scalable Operator Allocation for Multi-Robot Assistance: A Restless
Bandit Approach | In this paper, we consider the problem of allocating human operators in a system with multiple semi-autonomous robots. Each robot is required to perform an independent sequence of tasks, subjected to a chance of failing and getting stuck in a fault state at every task. If and when required, a human operator can assist or teleoperate a robot. Conventional MDP techniques used to solve such problems face scalability issues due to exponential growth of state and action spaces with the number of robots and operators. In this paper we derive conditions under which the operator allocation problem is indexable, enabling the use of the Whittle index heuristic. The conditions can be easily checked to verify indexability, and we show that they hold for a wide range of problems of interest. Our key insight is to leverage the structure of the value function of individual robots, resulting in conditions that can be verified separately for each state of each robot. We apply these conditions to two types of transitions commonly seen in remote robot supervision systems. Through numerical simulations, we demonstrate the efficacy of Whittle index policy as a near-optimal and scalable approach that outperforms existing scalable methods. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 266,073 |
2411.10273 | Fill in the blanks: Rethinking Interpretability in vision | Model interpretability is a key challenge that has yet to align with the advancements observed in contemporary state-of-the-art deep learning models. In particular, deep learning aided vision tasks require interpretability in order for their adoption in more specialized domains such as medical imaging. Although the field of explainable AI (XAI) developed methods for interpreting vision models along with early convolutional neural networks, recent XAI research has mainly focused on assigning attributes via saliency maps. As such, these methods are restricted to providing explanations at a sample level, and many explainability methods suffer from low adaptability across a wide range of vision models. In our work, we re-think vision-model explainability from a novel perspective, to probe the general input structure that a model has learnt during its training. To this end, we ask the question: "How would a vision model fill in a masked image?" Experiments on standard vision datasets and pre-trained models reveal consistent patterns that could be integrated as an additional model-agnostic explainability tool in modern machine-learning platforms. The code will be available at \url{https://github.com/BoTZ-TND/FillingTheBlanks.git} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,576
2412.06063 | On Socially Fair Low-Rank Approximation and Column Subset Selection | Low-rank approximation and column subset selection are two fundamental and related problems that are applied across a wealth of machine learning applications. In this paper, we study the question of socially fair low-rank approximation and socially fair column subset selection, where the goal is to minimize the loss over all sub-populations of the data. We show that surprisingly, even constant-factor approximation to fair low-rank approximation requires exponential time under certain standard complexity hypotheses. On the positive side, we give an algorithm for fair low-rank approximation that, for a constant number of groups and constant-factor accuracy, runs in $2^{\text{poly}(k)}$ time rather than the na\"{i}ve $n^{\text{poly}(k)}$, which is a substantial improvement when the dataset has a large number $n$ of observations. We then show that there exist bicriteria approximation algorithms for fair low-rank approximation and fair column subset selection that run in polynomial time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 515,072 |
2409.08020 | Network Anomaly Traffic Detection via Multi-view Feature Fusion | Traditional anomalous traffic detection methods are based on single-view analysis, which has obvious limitations in dealing with complex attacks and encrypted communications. In this regard, we propose a Multi-view Feature Fusion (MuFF) method for network anomaly traffic detection. MuFF models the temporal and interactive relationships of packets in network traffic based on the temporal and interactive viewpoints respectively. It learns temporal and interactive features. These features are then fused from different perspectives for anomaly traffic detection. Extensive experiments on six real traffic datasets show that MuFF has excellent performance in network anomalous traffic detection, which makes up for the shortcomings of detection under a single perspective. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 487,744 |
2310.15933 | Modeling and Contribution of Flexible Heating Systems for Transmission
Grid Congestion Management | The large-scale integration of flexible heating systems in the European electricity market leads to a substantial increase of transportation requirements and consecutively grid congestions in the continental transmission grid. Novel model formulations for the grid-aware operation of both individual small-scale heat pumps and large-scale power-to-heat (PtH) units located in district heating networks are presented. The functionality of the models and the contribution of flexible heating systems for transmission grid congestion management is evaluated by running simulations for the target year 2035 for the German transmission grid. The findings show a decrease in annual conventional redispatch volumes and renewable energy sources (RES) curtailment resulting in cost savings of approximately 6 % through the integration of flexible heating systems in the grid congestion management scheme. The analysis suggests that especially large-scale PtH units in combination with thermal energy storages can contribute significantly to the alleviation of grid congestion and foster RES integration. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 402,505 |
2501.19055 | Towards Physiologically Sensible Predictions via the Rule-based
Reinforcement Learning Layer | This paper adds to the growing literature of reinforcement learning (RL) for healthcare by proposing a novel paradigm: augmenting any predictor with a Rule-based RL Layer (RRLL) that corrects the model's physiologically impossible predictions. Specifically, RRLL takes predicted labels as input states and outputs corrected labels as actions. The reward of the state-action pair is evaluated by a set of general rules. RRLL is efficient, general and lightweight: it does not require heavy expert knowledge like prior work, only a set of impossible transitions. This set is much smaller than the set of all possible transitions, yet it can effectively reduce physiologically impossible mistakes made by state-of-the-art predictor models. We verify the utility of RRLL on a variety of important healthcare classification problems and observe significant improvements using the same setup, with only the domain-specific set of impossible transitions changed. In-depth analysis shows that RRLL indeed improves accuracy by effectively reducing the presence of physiologically impossible predictions. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 528,983
1802.01548 | Regularized Evolution for Image Classifier Architecture Search | The effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover them automatically. Although evolutionary algorithms have been repeatedly applied to neural network topologies, the image classifiers thus discovered have remained inferior to human-crafted ones. Here, we evolve an image classifier---AmoebaNet-A---that surpasses hand-designs for the first time. To do this, we modify the tournament selection evolutionary algorithm by introducing an age property to favor the younger genotypes. Matching size, AmoebaNet-A has comparable accuracy to current state-of-the-art ImageNet models discovered with more complex architecture-search methods. Scaled to larger size, AmoebaNet-A sets a new state-of-the-art 83.9% top-1 / 96.6% top-5 ImageNet accuracy. In a controlled comparison against a well-known reinforcement learning algorithm, we give evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search. This is relevant when fewer compute resources are available. Evolution is, thus, a simple method to effectively discover high-quality architectures. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | true | false | true | 89,629
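The age-based tournament selection described in this abstract (aging evolution: each cycle the oldest genotype is removed, not the worst) can be sketched on a toy problem. Here `fitness`, `random_arch`, and `mutate` are caller-supplied stand-ins, not the paper's architecture encoding.

```python
import collections
import random

def regularized_evolution(fitness, random_arch, mutate,
                          cycles=200, population_size=20,
                          sample_size=5, seed=0):
    """Aging-evolution sketch: the population is a queue; removing the
    oldest member each cycle implicitly favors younger genotypes."""
    rng = random.Random(seed)
    population = collections.deque()
    history = []  # every (arch, fitness) pair ever evaluated
    for _ in range(population_size):
        arch = random_arch(rng)
        population.append((arch, fitness(arch)))
        history.append(population[-1])
    for _ in range(cycles):
        # tournament: sample with replacement, pick the fittest as parent
        sample = [rng.choice(population) for _ in range(sample_size)]
        parent = max(sample, key=lambda af: af[1])
        child = mutate(parent[0], rng)
        population.append((child, fitness(child)))
        history.append(population[-1])
        population.popleft()  # age-based removal, not worst-based
    return max(history, key=lambda af: af[1])
```

With an architecture encoding and validation accuracy as `fitness`, the same loop reproduces the search procedure at a high level.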
2206.10767 | Providers-Clients-Robots: Framework for spatial-semantic planning for
shared understanding in human-robot interaction | This paper develops a novel framework called Providers-Clients-Robots (PCR), applicable to socially assistive robots that support research on shared understanding in human-robot interactions. Providers, Clients, and Robots share an actionable and intuitive representation of the environment to create plans that best satisfy the combined needs of all parties. The plans are formed via interaction between the Client and the Robot based on a previously built multi-modal navigation graph. The explainable environmental representation in the form of a navigation graph is constructed collaboratively between Providers and Robots prior to interaction with Clients. We develop a realization of the proposed framework to create a spatial-semantic representation of an indoor environment autonomously. Moreover, we develop a planner that takes in constraints from Providers and Clients of the establishment and dynamically plans a sequence of visits to each area of interest. Evaluations show that the proposed realization of the PCR framework can successfully make plans while satisfying the specified time budget and sequence constraints and outperforming the greedy baseline. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 304,023 |
2211.14003 | Assistive Teaching of Motor Control Tasks to Humans | Recent works on shared autonomy and assistive-AI technologies, such as assistive robot teleoperation, seek to model and help human users with limited ability in a fixed task. However, these approaches often fail to account for humans' ability to adapt and eventually learn how to execute a control task themselves. Furthermore, in applications where it may be desirable for a human to intervene, these methods may inhibit their ability to learn how to succeed with full self-control. In this paper, we focus on the problem of assistive teaching of motor control tasks such as parking a car or landing an aircraft. Despite their ubiquitous role in humans' daily activities and occupations, motor tasks are rarely taught in a uniform way due to their high complexity and variance. We propose an AI-assisted teaching algorithm that leverages skill discovery methods from reinforcement learning (RL) to (i) break down any motor control task into teachable skills, (ii) construct novel drill sequences, and (iii) individualize curricula to students with different capabilities. Through an extensive mix of synthetic and user studies on two motor control tasks -- parking a car with a joystick and writing characters from the Balinese alphabet -- we show that assisted teaching with skills improves student performance by around 40% compared to practicing full trajectories without skills, and practicing with individualized drills can result in up to 25% further improvement. Our source code is available at https://github.com/Stanford-ILIAD/teaching | true | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 332,683 |
2409.16800 | Programming of Skill-based Robots | Manufacturing is facing ever-changing market demands, with faster innovation cycles resulting in growing agility and flexibility requirements. Industry 4.0 has been transforming the manufacturing world towards digital automation, and the importance of software has increased drastically. Easy and fast task programming and execution in robot-sensor systems become a prerequisite for agile and flexible automation, and in this paper we propose such a system. Our solution relies on a robot skill library, which provides the user with high-level and parametrized operations, i.e., robot skills, for task programming and execution. Programming actions results in a control recipe in a neutral product context and is based on the use of product CAD models or, alternatively, collaborative use of pointers and a tracking sensor with real parts. Practical tests are also reported to show the feasibility of our approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 491,515
2204.00152 | Multi-Rate Planning and Control of Uncertain Nonlinear Systems: Model
Predictive Control and Control Lyapunov Functions | Modern control systems must operate in increasingly complex environments subject to safety constraints and input limits, and are often implemented in a hierarchical fashion with different controllers running at multiple time scales. Yet traditional constructive methods for nonlinear controller synthesis typically "flatten" this hierarchy, focusing on a single time scale, thereby limiting the ability to make rigorous guarantees on constraint satisfaction that hold for the entire system. In this work we seek to address the stabilization of constrained nonlinear systems through a \textit{multi-rate} control architecture. This is accomplished by iteratively planning continuous reference trajectories for a nonlinear system using a linearized model and Model Predictive Control (MPC), and tracking said trajectories using the full-order nonlinear model and Control Lyapunov Functions (CLFs). Connecting these two levels of control design in a way that ensures constraint satisfaction is achieved through the use of \textit{B\'{e}zier curves}, which enable planning continuous trajectories respecting constraints by planning a sequence of discrete points. Our framework is encoded via convex optimization problems which may be efficiently solved, as demonstrated in simulation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 289,156
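The Bézier-curve argument in this abstract rests on the convex-hull property: a Bézier curve stays inside the convex hull of its control points, so constraining finitely many discrete points constrains the entire continuous trajectory. A minimal 1-D illustration (not the paper's planner) via de Casteljau evaluation:

```python
def de_casteljau(control, t):
    """Evaluate a 1-D Bezier curve at t in [0, 1] by repeated linear
    interpolation between adjacent control points. The result is always
    a convex combination of the control points, hence inside their hull."""
    pts = list(control)
    while len(pts) > 1:
        pts = [(1 - t) * p + t * q for p, q in zip(pts, pts[1:])]
    return pts[0]
```

In 1-D the convex hull is simply the interval [min(control), max(control)], which is why bounding the control points bounds the curve everywhere.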
2105.03540 | An Intelligent Model for Solving Manpower Scheduling Problems | The manpower scheduling problem is a critical research field in the resource management area. Based on the existing studies on scheduling problem solutions, this paper transforms the manpower scheduling problem into a combinational optimization problem under multi-constraint conditions from a new perspective. It also uses logical paradigms to build a mathematical model for the problem solution and an improved multi-dimensional evolution algorithm for solving the model. Moreover, the constraints discussed in this paper basically cover all the requirements of human resource coordination in modern society and are supported by our experiment results. In the discussion part, we compare our model with other heuristic algorithms or linear programming methods and show that the model proposed in this paper achieves up to a 25.7% increase in efficiency and a 17% increase in accuracy. In addition to the numerical solution of the manpower scheduling problem, this paper also studies the algorithm for scheduling task list generation and the method of displaying scheduling results. As a result, we not only provide various modifications of the basic algorithm to solve different condition problems but also propose a new algorithm that improves time efficiency by at least 28.91% compared with different baseline models. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 234,172
1908.06316 | Mono-SF: Multi-View Geometry Meets Single-View Depth for Monocular Scene
Flow Estimation of Dynamic Traffic Scenes | Existing 3D scene flow estimation methods provide the 3D geometry and 3D motion of a scene and have gained considerable interest, for example in the context of autonomous driving. These methods are traditionally based on a temporal series of stereo images. In this paper, we propose a novel monocular 3D scene flow estimation method, called Mono-SF. Mono-SF jointly estimates the 3D structure and motion of the scene by combining multi-view geometry and single-view depth information. Mono-SF considers that the scene flow should be consistent in terms of warping the reference image into the consecutive image based on the principles of multi-view geometry. For integrating single-view depth in a statistical manner, a convolutional neural network, called ProbDepthNet, is proposed. ProbDepthNet estimates pixel-wise depth distributions from a single image rather than single depth values. Additionally, as part of ProbDepthNet, a novel recalibration technique for regression problems is proposed to ensure well-calibrated distributions. Our experiments show that Mono-SF outperforms state-of-the-art monocular baselines, and ablation studies support the Mono-SF approach and the ProbDepthNet design. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 141,977
2204.04350 | Hardware Trojan Insertion Using Reinforcement Learning | This paper utilizes Reinforcement Learning (RL) as a means to automate the Hardware Trojan (HT) insertion process to eliminate the inherent human biases that limit the development of robust HT detection methods. An RL agent explores the design space and finds circuit locations that are best for keeping inserted HTs hidden. To achieve this, a digital circuit is converted to an environment in which an RL agent inserts HTs such that the cumulative reward is maximized. Our toolset can insert combinational HTs into the ISCAS-85 benchmark suite with variations in HT size and triggering conditions. Experimental results show that the toolset achieves high input coverage rates (100\% in two benchmark circuits) that confirms its effectiveness. Also, the inserted HTs have shown a minimal footprint and rare activation probability. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 290,619 |
2002.08777 | Do you comply with AI? -- Personalized explanations of learning
algorithms and their impact on employees' compliance behavior | Machine Learning algorithms are technological key enablers for artificial intelligence (AI). Due to the inherent complexity, these learning algorithms represent black boxes and are difficult to comprehend, therefore influencing compliance behavior. Hence, compliance with the recommendations of such artifacts, which can impact employees' task performance significantly, is still subject to research - and personalization of AI explanations seems to be a promising concept in this regard. In our work, we hypothesize that, based on varying backgrounds like training, domain knowledge and demographic characteristics, individuals have different understandings and hence mental models about the learning algorithm. Personalization of AI explanations, related to the individuals' mental models, may thus be an instrument to affect compliance and therefore employee task performance. Our preliminary results already indicate the importance of personalized explanations in industry settings and emphasize the importance of this research endeavor. | true | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 164,859 |
1406.2283 | Depth Map Prediction from a Single Image using a Multi-Scale Deep
Network | Predicting depth is an essential component in understanding the 3D geometry of a scene. While for stereo images local correspondence suffices for estimation, finding depth relations from a single image is less straightforward, requiring integration of both global and local information from various cues. Moreover, the task is inherently ambiguous, with a large source of uncertainty coming from the overall scale. In this paper, we present a new method that addresses this task by employing two deep network stacks: one that makes a coarse global prediction based on the entire image, and another that refines this prediction locally. We also apply a scale-invariant error to help measure depth relations rather than scale. By leveraging the raw datasets as large sources of training data, our method achieves state-of-the-art results on both NYU Depth and KITTI, and matches detailed depth boundaries without the need for superpixelation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 33,733 |
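The scale-invariant error this abstract introduces is a log-space loss: with $d_i = \log y_i - \log y_i^*$, the error is $\frac{1}{n}\sum_i d_i^2 - \frac{\lambda}{n^2}\left(\sum_i d_i\right)^2$. A minimal sketch follows; `lam=1` gives full invariance to a global depth scale, while `lam=0.5` is the weight the paper uses for training.

```python
import math

def scale_invariant_error(pred, target, lam=1.0):
    """Scale-invariant log-depth error (Eigen et al.).

    With lam=1 this equals the variance of the log-ratios, so multiplying
    all predictions by a constant leaves the error unchanged.
    """
    d = [math.log(p) - math.log(t) for p, t in zip(pred, target)]
    n = len(d)
    return sum(di * di for di in d) / n - lam * (sum(d) / n) ** 2
```

Scaling every prediction by a constant c shifts each d_i by log c, which the mean term cancels exactly when lam=1.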
cmp-lg/9407015 | Specifying Intonation from Context for Speech Synthesis | This paper presents a theory and a computational implementation for generating prosodically appropriate synthetic speech in response to database queries. Proper distinctions of contrast and emphasis are expressed in an intonation contour that is synthesized by rule under the control of a grammar, a discourse model, and a knowledge base. The theory is based on Combinatory Categorial Grammar, a formalism which easily integrates the notions of syntactic constituency, semantics, prosodic phrasing and information structure. Results from our current implementation demonstrate the system's ability to generate a variety of intonational possibilities for a given sentence depending on the discourse context. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,134 |
2412.18351 | Multi-Agents Based on Large Language Models for Knowledge-based Visual
Question Answering | Large Language Models (LLMs) have achieved impressive results in knowledge-based Visual Question Answering (VQA). However, existing methods still have challenges: the inability to use external tools autonomously, and the inability to work in teams. Humans tend to know whether they need to use external tools when they encounter a new question; e.g., they tend to give a direct answer to a familiar question, whereas they tend to use tools such as search engines when they encounter an unfamiliar question. In addition, humans also tend to collaborate and discuss with others to get better answers. Inspired by this, we propose a multi-agent voting framework. We design three LLM-based agents that simulate different levels of staff in a team, and assign the available tools according to the levels. Each agent provides the corresponding answer, and finally all the answers provided by the agents are voted on to get the final answer. Experiments on OK-VQA and A-OKVQA show that our approach outperforms other baselines by 2.2 and 1.0 points, respectively. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 520,385
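The final voting step can be sketched as a simple majority vote over agent answers. The tie-breaking rule here (first-given answer wins, a stand-in for the paper's staff-level hierarchy) is an assumption, not the paper's stated rule.

```python
from collections import Counter

def vote(answers):
    """Return the most frequent answer; ties go to the answer that was
    given first (a hypothetical proxy for agent seniority)."""
    counts = Counter(answers)
    best = max(counts.values())
    for a in answers:  # scan in submission order
        if counts[a] == best:
            return a
```

With three agents this reduces to "two out of three wins, otherwise the first agent's answer stands."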
2109.11726 | Morse-STF: Improved Protocols for Privacy-Preserving Machine Learning | Secure multi-party computation enables multiple mutually distrusting parties to perform computations on data without revealing the data itself, and has become one of the core technologies behind privacy-preserving machine learning. In this work, we present several improved privacy-preserving protocols for both linear and non-linear layers in machine learning. For linear layers, we present an extended Beaver triple protocol for bilinear maps that significantly reduces the communication of convolution layers. For non-linear layers, we introduce novel protocols for computing the sigmoid and softmax functions. Both functions are essential building blocks for machine learning training of classification tasks. Our protocols are both more scalable and more robust than prior constructions, and improve runtime performance by 3-17x. Finally, we introduce Morse-STF, an end-to-end privacy-preserving system for machine learning training that leverages all these improved protocols. Our system achieves a 1.8x speedup on logistic regression and a 3.9-4.9x speedup on convolutional neural networks compared to prior state-of-the-art systems. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 257,036
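The extended Beaver triple protocol for bilinear maps is not reproduced here; below is the classic scalar Beaver-triple multiplication over additive secret shares that it generalizes, as a self-contained sketch with all parties' shares simulated in one process.

```python
import random

P = 2**61 - 1  # prime modulus for additive sharing

def share(v, n, rng):
    """Split value v into n additive shares mod P."""
    s = [rng.randrange(P) for _ in range(n - 1)]
    s.append((v - sum(s)) % P)
    return s

def beaver_mul(x_sh, y_sh, a_sh, b_sh, c_sh):
    """Multiply secret-shared x and y using a precomputed triple
    (a, b, c) with c = a*b. Only d = x - a and e = y - b are opened;
    since a and b are uniformly random, d and e reveal nothing about x, y.
    Identity: x*y = c + d*b + e*a + d*e.
    """
    d = sum((xi - ai) % P for xi, ai in zip(x_sh, a_sh)) % P
    e = sum((yi - bi) % P for yi, bi in zip(y_sh, b_sh)) % P
    z = [(ci + d * bi + e * ai) % P
         for ai, bi, ci in zip(a_sh, b_sh, c_sh)]
    z[0] = (z[0] + d * e) % P  # exactly one party adds the public d*e term
    return z
```

Each party computes its z-share locally after the two openings, so one multiplication costs a single round of communication given a preprocessed triple.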
cmp-lg/9605016 | Parsing for Semidirectional Lambek Grammar is NP-Complete | We study the computational complexity of the parsing problem of a variant of Lambek Categorial Grammar that we call {\em semidirectional}. In semidirectional Lambek calculus $\SDL$ there is an additional non-directional abstraction rule allowing the formula abstracted over to appear anywhere in the premise sequent's left-hand side, thus permitting non-peripheral extraction. $\SDL$ grammars are able to generate each context-free language and more than that. We show that the parsing problem for semidirectional Lambek Grammar is NP-complete by a reduction of the 3-Partition problem. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,544 |
2202.07036 | Benchmarking Online Sequence-to-Sequence and Character-based Handwriting
Recognition from IMU-Enhanced Pens | Purpose. Handwriting is one of the most frequently occurring patterns in everyday life and with it come challenging applications such as handwriting recognition (HWR), writer identification, and signature verification. In contrast to offline HWR that only uses spatial information (i.e., images), online HWR (OnHWR) uses richer spatio-temporal information (i.e., trajectory data or inertial data). While there exist many offline HWR datasets, there is only little data available for the development of OnHWR methods on paper as it requires hardware-integrated pens. Methods. This paper presents data and benchmark models for real-time sequence-to-sequence (seq2seq) learning and single character-based recognition. Our data is recorded by a sensor-enhanced ballpoint pen, yielding sensor data streams from triaxial accelerometers, a gyroscope, a magnetometer and a force sensor at 100 Hz. We propose a variety of datasets including equations and words for both the writer-dependent and writer-independent tasks. Our datasets allow a comparison between classical OnHWR on tablets and on paper with sensor-enhanced pens. We provide an evaluation benchmark for seq2seq and single character-based HWR using recurrent and temporal convolutional networks and Transformers combined with a connectionist temporal classification (CTC) loss and cross-entropy (CE) losses. Results. Our convolutional network combined with BiLSTMs outperforms Transformer-based architectures, is on par with InceptionTime for sequence-based classification tasks, and yields better results compared to 28 state-of-the-art techniques. Time-series augmentation methods improve the sequence-based task, and we show that CE variants can improve the single classification task. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,405 |
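The connectionist temporal classification (CTC) objective used in this benchmark maps per-frame label paths to character sequences by merging consecutive repeats and removing blanks; that collapse rule, the core of greedy CTC decoding, is small enough to sketch.

```python
def ctc_greedy_collapse(path, blank=0):
    """Collapse a per-frame label path CTC-style: merge consecutive
    repeats, then drop blanks. Repeats separated by a blank frame are
    kept as distinct output symbols."""
    out, prev = [], None
    for label in path:
        if label != prev and label != blank:
            out.append(label)
        prev = label
    return out
```

Greedy decoding applies this collapse to the per-frame argmax of the network's output distribution; beam-search CTC decoders refine the same idea over multiple paths.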
2207.03718 | Convolutional Neural Networks for Time-dependent Classification of
Variable-length Time Series | Time series data are often obtained only within a limited time range due to interruptions during the observation process. To classify such partial time series, we need to account for 1) the variable-length data drawn from 2) different timestamps. To address the first problem, existing convolutional neural networks use global pooling after convolutional layers to cancel the length differences. This architecture suffers from the trade-off between incorporating entire temporal correlations in long data and avoiding feature collapse for short data. To resolve this trade-off, we propose Adaptive Multi-scale Pooling, which aggregates features from an adaptive number of layers, i.e., only the first few layers for short data and more layers for long data. Furthermore, to address the second problem, we introduce Temporal Encoding, which embeds the observation timestamps into the intermediate features. Experiments on our private dataset and the UCR/UEA time series archive show that our modules improve classification accuracy, especially on short data obtained as partial time series. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 306,958 |
2410.10819 | DuoAttention: Efficient Long-Context LLM Inference with Retrieval and
Streaming Heads | Deploying long-context large language models (LLMs) is essential but poses significant computational and memory challenges. Caching all Key and Value (KV) states across all attention heads consumes substantial memory. Existing KV cache pruning methods either damage the long-context capabilities of LLMs or offer only limited efficiency improvements. In this paper, we identify that only a fraction of attention heads, a.k.a, Retrieval Heads, are critical for processing long contexts and require full attention across all tokens. In contrast, all other heads, which primarily focus on recent tokens and attention sinks--referred to as Streaming Heads--do not require full attention. Based on this insight, we introduce DuoAttention, a framework that only applies a full KV cache to retrieval heads while using a light-weight, constant-length KV cache for streaming heads, which reduces both LLM's decoding and pre-filling memory and latency without compromising its long-context abilities. DuoAttention uses a lightweight, optimization-based algorithm with synthetic data to identify retrieval heads accurately. Our method significantly reduces long-context inference memory by up to 2.55x for MHA and 1.67x for GQA models while speeding up decoding by up to 2.18x and 1.50x and accelerating pre-filling by up to 1.73x and 1.63x for MHA and GQA models, respectively, with minimal accuracy loss compared to full attention. Notably, combined with quantization, DuoAttention enables Llama-3-8B decoding with 3.3 million context length on a single A100 GPU. Code is provided in https://github.com/mit-han-lab/duo-attention. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,252 |
2411.14269 | Guided MRI Reconstruction via Schr\"odinger Bridge | Magnetic Resonance Imaging (MRI) is a multi-contrast imaging technique in which different contrast images share similar structural information. However, conventional diffusion models struggle to effectively leverage this structural similarity. Recently, the Schr\"odinger Bridge (SB), a nonlinear extension of the diffusion model, has been proposed to establish diffusion paths between any distributions, allowing the incorporation of guided priors. This study proposes an SB-based, multi-contrast image-guided reconstruction framework that establishes a diffusion bridge between the guiding and target image distributions. By using the guiding image along with data consistency during sampling, the target image is reconstructed more accurately. To better address structural differences between images, we introduce an inversion strategy from the field of image editing, termed $\mathbf{I}^2$SB-inversion. Experiments on paired T1 and T2-FLAIR datasets demonstrate that $\mathbf{I}^2$SB-inversion achieves a high acceleration of up to 14.4 and outperforms existing methods in terms of both reconstruction accuracy and stability. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 510,093 |
2305.04101 | SRTK: A Toolkit for Semantic-relevant Subgraph Retrieval | Information retrieval based knowledge base question answering (KBQA) first retrieves a subgraph to reduce search space, then reasons on the subgraph to select answer entities. Existing approaches have three issues that impede the retrieval of such subgraphs. Firstly, there is no off-the-shelf toolkit for semantic-relevant subgraph retrieval. Secondly, existing methods are knowledge-graph-dependent, resulting in outdated knowledge graphs used even in recent studies. Thirdly, previous solutions fail to incorporate the best available techniques for entity linking or path expansion. In this paper, we present SRTK, a user-friendly toolkit for semantic-relevant subgraph retrieval from large-scale knowledge graphs. SRTK is the first toolkit that streamlines the entire lifecycle of subgraph retrieval across multiple knowledge graphs. Additionally, it comes with state-of-the-art subgraph retrieval algorithms, guaranteeing an up-to-date solution set out of the box. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 362,637 |
2107.14698 | Strategically Efficient Exploration in Competitive Multi-agent
Reinforcement Learning | High sample complexity remains a barrier to the application of reinforcement learning (RL), particularly in multi-agent systems. A large body of work has demonstrated that exploration mechanisms based on the principle of optimism under uncertainty can significantly improve the sample efficiency of RL in single agent tasks. This work seeks to understand the role of optimistic exploration in non-cooperative multi-agent settings. We will show that, in zero-sum games, optimistic exploration can cause the learner to waste time sampling parts of the state space that are irrelevant to strategic play, as they can only be reached through cooperation between both players. To address this issue, we introduce a formal notion of strategically efficient exploration in Markov games, and use this to develop two strategically efficient learning algorithms for finite Markov games. We demonstrate that these methods can be significantly more sample efficient than their optimistic counterparts. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | true | false | false | false | 248,546 |
2410.04161 | Overcoming False Illusions in Real-World Face Restoration with
Multi-Modal Guided Diffusion Model | We introduce a novel Multi-modal Guided Real-World Face Restoration (MGFR) technique designed to improve the quality of facial image restoration from low-quality inputs. Leveraging a blend of attribute text prompts, high-quality reference images, and identity information, MGFR can mitigate the generation of false facial attributes and identities often associated with generative face restoration methods. By incorporating a dual-control adapter and a two-stage training strategy, our method effectively utilizes multi-modal prior information for targeted restoration tasks. We also present the Reface-HQ dataset, comprising over 23,000 high-resolution facial images across 5,000 identities, to address the need for reference face training images. Our approach achieves superior visual quality in restoring facial details under severe degradation and allows for controlled restoration processes, enhancing the accuracy of identity preservation and attribute correction. Including negative quality samples and attribute prompts in the training further refines the model's ability to generate detailed and perceptually accurate images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 495,152 |
2410.06997 | Feasibility Study of a Diffusion-Based Model for Cross-Modal Generation
of Knee MRI from X-ray: Integrating Radiographic Feature Information | Knee osteoarthritis (KOA) is a prevalent musculoskeletal disorder, often diagnosed using X-rays due to its cost-effectiveness. While Magnetic Resonance Imaging (MRI) provides superior soft tissue visualization and serves as a valuable supplementary diagnostic tool, its high cost and limited accessibility significantly restrict its widespread use. To explore the possibility of bridging this imaging gap, we conducted a feasibility study leveraging a diffusion-based model that uses an X-ray image as conditional input, alongside target depth and additional patient-specific feature information, to generate corresponding MRI sequences. Our findings demonstrate that the MRI volumes generated by our approach are visually closer to real MRI scans. Moreover, increasing inference steps enhances the continuity and smoothness of the synthesized MRI sequences. Through ablation studies, we further validate that integrating supplementary patient-specific information, beyond what X-rays alone can provide, enhances the accuracy and clinical relevance of the generated MRI, which underscores the potential of leveraging external patient-specific information to improve the MRI generation. This study is available at https://zwang78.github.io/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 496,426 |
2211.12644 | Scalable Predictive Beamforming for IRS-Assisted Multi-User
Communications: A Deep Learning Approach | Beamforming design for intelligent reflecting surface (IRS)-assisted multi-user communication (IRS-MUC) systems critically depends on the acquisition of accurate channel state information (CSI). However, channel estimation (CE) in IRS-MUC systems causes a large signaling overhead for training due to the large number of IRS elements. In this paper, taking into account user mobility, we adopt a deep learning (DL) approach to implicitly learn the historical line-of-sight (LoS) channel features and predict the IRS phase shifts to be adopted for the next time slot for maximization of the weighted sum-rate (WSR) of the IRS-MUC system. With the proposed predictive approach, we can avoid full-scale CSI estimation and facilitate low-dimensional CE for transmit beamforming design such that the signaling overhead is reduced by a scale of $\frac{1}{N}$, where $N$ is the number of IRS elements. To this end, we first develop a universal DL-based predictive beamforming (DLPB) framework featuring a two-stage predictive-instantaneous beamforming mechanism. As a realization of the developed framework, a location-aware convolutional long short-term memory (CLSTM) graph neural network (GNN) is developed to facilitate effective predictive beamforming at the IRS, where a CLSTM module is first adopted to exploit the spatial and temporal features of the considered channels and a GNN is then applied to empower the designed neural network with high scalability and generalizability. Furthermore, in the second stage, based on the predicted IRS phase shifts, an instantaneous CSI-aware fully-connected neural network is designed to optimize the transmit beamforming at the access point. Simulation results demonstrate that the proposed framework not only achieves a better WSR performance and requires a lower CE overhead compared with state-of-the-art benchmarks, but also is highly scalable in the number of users. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 332,181 |
2004.11676 | Automated diagnosis of COVID-19 with limited posteroanterior chest X-ray
images using fine-tuned deep neural networks | The novel coronavirus 2019 (COVID-19) is a respiratory syndrome that resembles pneumonia. The current diagnostic procedure of COVID-19 follows a reverse-transcriptase polymerase chain reaction (RT-PCR) based approach, which, however, is less sensitive in identifying the virus at the initial stage. Hence, a more robust and alternate diagnosis technique is desirable. Recently, with the release of publicly available datasets of corona positive patients comprising computed tomography (CT) and chest X-ray (CXR) imaging; scientists, researchers and healthcare experts are contributing to faster and automated diagnosis of COVID-19 by identifying pulmonary infections using deep learning approaches to achieve better cure and treatment. These datasets have limited samples concerned with the positive COVID-19 cases, which raises the challenge of unbiased learning. Following from this context, this article presents the random oversampling and weighted class loss function approach for unbiased fine-tuned learning (transfer learning) in various state-of-the-art deep learning approaches such as baseline ResNet, Inception-v3, Inception ResNet-v2, DenseNet169, and NASNetLarge to perform binary classification (as normal and COVID-19 cases) and also multi-class classification (as COVID-19, pneumonia, and normal case) of posteroanterior CXR images. Accuracy, precision, recall, loss, and area under the curve (AUC) are utilized to evaluate the performance of the models. Considering the experimental results, the performance of each model is scenario dependent; however, NASNetLarge displayed better scores in contrast to other architectures, which is further compared with other recently proposed approaches. This article also adds a visual explanation to illustrate the basis of model classification and perception of COVID-19 in CXR images. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 173,977 |
1904.11578 | Asynchronous "Events" are Better For Motion Estimation | Event-based camera is a bio-inspired vision sensor that records intensity changes (called event) asynchronously in each pixel. As an instance of event-based camera, Dynamic and Active-pixel Vision Sensor (DAVIS) combines a standard camera and an event-based camera. However, traditional models could not deal with the event stream asynchronously. To analyze the event stream asynchronously, most existing approaches accumulate events within a certain time interval and treat the accumulated events as a synchronous frame, which wastes the intensity change information and weakens the advantages of DAVIS. Therefore, in this paper, we present the first neural asynchronous approach to process event stream for event-based camera. Our method asynchronously extracts dynamic information from events by leveraging previous motion and critical features of gray-scale frames. To our best knowledge, this is the first neural asynchronous method to analyze event stream through a novel deep neural network. Extensive experiments demonstrate that our proposed model achieves remarkable improvements against the state-of-the-art baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 128,891 |
2305.17393 | Answering Unanswered Questions through Semantic Reformulations in Spoken
QA | Spoken Question Answering (QA) is a key feature of voice assistants, usually backed by multiple QA systems. Users ask questions via spontaneous speech which can contain disfluencies, errors, and informal syntax or phrasing. This is a major challenge in QA, causing unanswered questions or irrelevant answers, and leading to bad user experiences. We analyze failed QA requests to identify core challenges: lexical gaps, proposition types, complex syntactic structure, and high specificity. We propose a Semantic Question Reformulation (SURF) model offering three linguistically-grounded operations (repair, syntactic reshaping, generalization) to rewrite questions to facilitate answering. Offline evaluation on 1M unanswered questions from a leading voice assistant shows that SURF significantly improves answer rates: up to 24% of previously unanswered questions obtain relevant answers (75%). Live deployment shows positive impact for millions of customers with unanswered questions; explicit relevance feedback shows high user satisfaction. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 368,567 |
1603.02371 | Interactive Browsing and Navigation in Relational Databases | Although researchers have devoted considerable attention to helping database users formulate queries, many users still find it challenging to specify queries that involve joining tables. To help users construct join queries for exploring relational databases, we propose ETable, a novel presentation data model that provides users with a presentation-level interactive view. This view compactly presents one-to-many and many-to-many relationships within a single enriched table by allowing a cell to contain a set of entity references. Users can directly interact with this enriched table to incrementally construct complex queries and navigate databases on a conceptual entity-relationship level. In a user study, participants performed a range of database querying tasks faster with ETable than with a commercial graphical query builder. Subjective feedback about ETable was also positive. All participants found that ETable was easier to learn and helpful for exploring databases. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | 53,005 |
2008.00742 | Collaborative Learning in the Jungle (Decentralized, Byzantine,
Heterogeneous, Asynchronous and Nonconvex Learning) | We study Byzantine collaborative learning, where $n$ nodes seek to collectively learn from each others' local data. The data distribution may vary from one node to another. No node is trusted, and $f < n$ nodes can behave arbitrarily. We prove that collaborative learning is equivalent to a new form of agreement, which we call averaging agreement. In this problem, nodes start each with an initial vector and seek to approximately agree on a common vector, which is close to the average of honest nodes' initial vectors. We present two asynchronous solutions to averaging agreement, each we prove optimal according to some dimension. The first, based on the minimum-diameter averaging, requires $ n \geq 6f+1$, but achieves asymptotically the best-possible averaging constant up to a multiplicative constant. The second, based on reliable broadcast and coordinate-wise trimmed mean, achieves optimal Byzantine resilience, i.e., $n \geq 3f+1$. Each of these algorithms induces an optimal Byzantine collaborative learning protocol. In particular, our equivalence yields new impossibility theorems on what any collaborative learning algorithm can achieve in adversarial and heterogeneous environments. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 190,096 |
2404.07985 | WaveMo: Learning Wavefront Modulations to See Through Scattering | Imaging through scattering media is a fundamental and pervasive challenge in fields ranging from medical diagnostics to astronomy. A promising strategy to overcome this challenge is wavefront modulation, which induces measurement diversity during image acquisition. Despite its importance, designing optimal wavefront modulations to image through scattering remains under-explored. This paper introduces a novel learning-based framework to address the gap. Our approach jointly optimizes wavefront modulations and a computationally lightweight feedforward "proxy" reconstruction network. This network is trained to recover scenes obscured by scattering, using measurements that are modified by these modulations. The learned modulations produced by our framework generalize effectively to unseen scattering scenarios and exhibit remarkable versatility. During deployment, the learned modulations can be decoupled from the proxy network to augment other more computationally expensive restoration algorithms. Through extensive experiments, we demonstrate our approach significantly advances the state of the art in imaging through scattering media. Our project webpage is at https://wavemo-2024.github.io/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 446,060 |
2310.00309 | An Adaptation of the AAA-Interpolation Algorithm for Model Reduction of
MIMO Systems | We consider the Adaptive Antoulas-Anderson (AAA) rational interpolation algorithm recently developed by Trefethen and co-authors, which can be viewed as a type of moment-matching technique for system realization and approximation. We consider variations on this algorithm that are suitable for model reduction of linear time invariant systems while addressing some of the shortcomings of the block-AAA variant of the algorithm for MIMO systems. In particular, we develop state-space formulas and keep track of the state-space dimension at every step of the adaptive block-AAA algorithm, showing an unfavorable increase of the state dimension. We propose a new low-rank adaptive interpolation algorithm that addresses this shortcoming. Comparative computational results are included for the algorithms above, together with comparisons to balanced reduction. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 395,923 |
2106.00090 | Deep learning for prediction of hepatocellular carcinoma recurrence
after resection or liver transplantation: a discovery and validation study | This study aimed to develop a classifier of prognosis after resection or liver transplantation (LT) for HCC by directly analysing the ubiquitously available histological images using deep learning based neural networks. A nucleus map set was used to train U-net to capture the nuclear architectural information. The train set included patients with HCC treated by resection who had a distinct outcome. The LT set contained patients with HCC treated by LT. The train set and its nuclear architectural information extracted by U-net were used to train a MobileNet V2 based classifier (MobileNetV2_HCC_Class), purpose-built for classifying supersized heterogeneous images. The MobileNetV2_HCC_Class maintained relatively higher discriminatory power than the other factors after HCC resection or LT in the independent validation set. Pathological review showed that the tumoral areas most predictive of recurrence were characterized by presence of stroma, high degree of cytological atypia, nuclear hyperchromasia, and a lack of immune infiltration. A clinically useful prognostic classifier was developed using deep learning allied to histological slides. The classifier has been extensively evaluated in independent patient populations with different treatments, and gives consistently excellent results across the classical clinical, biological and pathological features. The classifier assists in refining the prognostic prediction of HCC patients and identifying patients who would benefit from more intensive management. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 237,977 |
2207.08141 | ELECTRA is a Zero-Shot Learner, Too | Recently, for few-shot or even zero-shot learning, the new paradigm "pre-train, prompt, and predict" has achieved remarkable achievements compared with the "pre-train, fine-tune" paradigm. After the success of prompt-based GPT-3, a series of masked language model (MLM)-based (e.g., BERT, RoBERTa) prompt learning methods became popular and widely used. However, another efficient pre-trained discriminative model, ELECTRA, has probably been neglected. In this paper, we attempt to accomplish several NLP tasks in the zero-shot scenario using a novel our proposed replaced token detection (RTD)-based prompt learning method. Experimental results show that ELECTRA model based on RTD-prompt learning achieves surprisingly state-of-the-art zero-shot performance. Numerically, compared to MLM-RoBERTa-large and MLM-BERT-large, our RTD-ELECTRA-large has an average of about 8.4% and 13.7% improvement on all 15 tasks. Especially on the SST-2 task, our RTD-ELECTRA-large achieves an astonishing 90.1% accuracy without any training data. Overall, compared to the pre-trained masked language models, the pre-trained replaced token detection model performs better in zero-shot learning. The source code is available at: https://github.com/nishiwen1214/RTD-ELECTRA. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 308,470 |
1703.03476 | Enhancing sensitivity in quantum metrology by Hamiltonian extensions | A well-studied scenario in quantum parameter estimation theory arises when the parameter to be estimated is imprinted on the initial state by a Hamiltonian of the form $\theta G$. For such "phase shift Hamiltonians" it has been shown that one cannot improve the channel quantum Fisher information by adding ancillas and letting the system interact with them. Here we investigate the general case, where the Hamiltonian is not necessarily a phase shift, and show that in this case in general it \emph{is} possible to increase the quantum channel information and to reach an upper bound. This can be done by adding a term proportional to the derivative of the Hamiltonian, or by subtracting a term to the original Hamiltonian. Both methods do not make use of any ancillas and show therefore that for quantum channel estimation with arbitrary parameter-dependent Hamiltonian, entanglement with an ancillary system is not necessary to reach the best possible sensitivity. By adding an operator to the Hamiltonian we can also modify the time scaling of the channel quantum Fisher information. We illustrate our techniques with NV-center magnetometry and the estimation of the direction of a magnetic field in a given plane using a single spin-1 as probe. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 69,736 |
2301.10153 | Sequential Graph Attention Learning for Predicting Dynamic Stock Trends
(Student Abstract) | The stock market is characterized by a complex relationship between companies and the market. This study combines a sequential graph structure with attention mechanisms to learn global and local information over time. Specifically, our proposed "GAT-AGNN" module compares model performance across multiple industries as well as within single industries. The results show that the proposed framework outperforms the state-of-the-art methods in predicting stock trends across multiple industries on Taiwan Stock datasets. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 341,716 |
2108.12251 | Changes in Twitter geolocations: Insights and suggestions for future
usage | Twitter data has become established as a valuable source of data for various application scenarios in the past years. For many such applications, it is necessary to know where Twitter posts (tweets) were sent from or what location they refer to. Researchers have frequently used exact coordinates provided in a small percentage of tweets, but Twitter removed the option to share these coordinates in mid-2019. Moreover, there is reason to suspect that a large share of the provided coordinates did not correspond to GPS coordinates of the user even before that. In this paper, we explain the situation and the 2019 policy change and shed light on the various options of still obtaining location information from tweets. We provide usage statistics including changes over time, and analyze what the removal of exact coordinates means for various common research tasks performed with Twitter data. Finally, we make suggestions for future research requiring geolocated tweets. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 252,446 |
2206.07588 | Characteristic kernels on Hilbert spaces, Banach spaces, and on sets of
measures | We present new classes of positive definite kernels on non-standard spaces that are integrally strictly positive definite or characteristic. In particular, we discuss radial kernels on separable Hilbert spaces, and introduce broad classes of kernels on Banach spaces and on metric spaces of strong negative type. The general results are used to give explicit classes of kernels on separable $L^p$ spaces and on sets of measures. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,798 |
2103.09716 | Quantitative Performance Assessment of CNN Units via Topological Entropy
Calculation | Identifying the status of individual network units is critical for understanding the mechanism of convolutional neural networks (CNNs). However, it is still challenging to reliably give a general indication of unit status, especially for units in different network models. To this end, we propose a novel method for quantitatively clarifying the status of single unit in CNN using algebraic topological tools. Unit status is indicated via the calculation of a defined topological-based entropy, called feature entropy, which measures the degree of chaos of the global spatial pattern hidden in the unit for a category. In this way, feature entropy could provide an accurate indication of status for units in different networks with diverse situations like weight-rescaling operation. Further, we show that feature entropy decreases as the layer goes deeper and shares almost simultaneous trend with loss during training. We show that by investigating the feature entropy of units on only training data, it could give discrimination between networks with different generalization ability from the view of the effectiveness of feature representations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 225,243 |
2311.16714 | Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld | While large language models (LLMs) excel in a simulated world of texts, they struggle to interact with the more realistic world without perceptions of other modalities such as visual or audio signals. Although vision-language models (VLMs) integrate LLM modules (1) aligned with static image features, and (2) may possess prior knowledge of world dynamics (as demonstrated in the text world), they have not been trained in an embodied visual world and thus cannot align with its dynamics. On the other hand, training an embodied agent in a noisy visual world without expert guidance is often challenging and inefficient. In this paper, we train a VLM agent living in a visual world using an LLM agent excelling in a parallel text world. Specifically, we distill LLM's reflection outcomes (improved actions by analyzing mistakes) in a text world's tasks to finetune the VLM on the same tasks of the visual world, resulting in an Embodied Multi-Modal Agent (EMMA) quickly adapting to the visual world dynamics. Such cross-modality imitation learning between the two parallel worlds is achieved by a novel DAgger-DPO algorithm, enabling EMMA to generalize to a broad scope of new tasks without any further guidance from the LLM expert. Extensive evaluations on the ALFWorld benchmark's diverse tasks highlight EMMA's superior performance to SOTA VLM-based agents, e.g., 20%-70% improvement in the success rate. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,024 |
1808.04151 | Multi-Task Learning for Sequence Tagging: An Empirical Study | We study three general multi-task learning (MTL) approaches on 11 sequence tagging tasks. Our extensive empirical results show that in about 50% of the cases, jointly learning all 11 tasks improves upon either independent or pairwise learning of the tasks. We also show that pairwise MTL can inform us what tasks can benefit others or what tasks can be benefited if they are learned jointly. In particular, we identify tasks that can always benefit others as well as tasks that can always be harmed by others. Interestingly, one of our MTL approaches yields embeddings of the tasks that reveal the natural clustering of semantic and syntactic tasks. Our inquiries have opened the doors to further utilization of MTL in NLP. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 105,078 |
2009.07769 | TadGAN: Time Series Anomaly Detection Using Generative Adversarial Networks | Time series anomalies can offer information relevant to critical situations facing various fields, from finance and aerospace to the IT, security, and medical domains. However, detecting anomalies in time series data is particularly challenging due to the vague definition of anomalies and said data's frequent lack of labels and highly complex temporal correlations. Current state-of-the-art unsupervised machine learning methods for anomaly detection suffer from scalability and portability issues, and may have high false positive rates. In this paper, we propose TadGAN, an unsupervised anomaly detection approach built on Generative Adversarial Networks (GANs). To capture the temporal correlations of time series distributions, we use LSTM Recurrent Neural Networks as base models for Generators and Critics. TadGAN is trained with cycle consistency loss to allow for effective time-series data reconstruction. We further propose several novel methods to compute reconstruction errors, as well as different approaches to combine reconstruction errors and Critic outputs to compute anomaly scores. To demonstrate the performance and generalizability of our approach, we test several anomaly scoring techniques and report the best-suited one. We compare our approach to 8 baseline anomaly detection methods on 11 datasets from multiple reputable sources such as NASA, Yahoo, Numenta, Amazon, and Twitter. The results show that our approach can effectively detect anomalies and outperform baseline methods in most cases (6 out of 11). Notably, our method has the highest averaged F1 score across all the datasets. Our code is open source and is available as a benchmarking tool. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 196,039 |
2502.10713 | Improving action segmentation via explicit similarity measurement | Existing supervised action segmentation methods depend on the quality of frame-wise classification using attention mechanisms or temporal convolutions to capture temporal dependencies. Even boundary detection-based methods primarily depend on the accuracy of an initial frame-wise classification, which can overlook precise identification of segments and boundaries in case of low-quality prediction. To address this problem, this paper proposes ASESM (Action Segmentation via Explicit Similarity Measurement) to enhance the segmentation accuracy by incorporating explicit similarity evaluation across frames and predictions. Our supervised learning architecture uses frame-level multi-resolution features as input to multiple Transformer encoders. The resulting multiple frame-wise predictions are used for similarity voting to obtain high quality initial prediction. We apply a newly proposed boundary correction algorithm that operates based on feature similarity between consecutive frames to adjust the boundary locations iteratively through the learning process. The corrected prediction is then further refined through multiple stages of temporal convolutions. As post-processing, we optionally apply boundary correction again followed by a segment smoothing method that removes outlier classes within segments using similarity measurement between consecutive predictions. Additionally, we propose a fully unsupervised boundary detection-correction algorithm that identifies segment boundaries based solely on feature similarity without any training. Experiments on 50Salads, GTEA, and Breakfast datasets show the effectiveness of both the supervised and unsupervised algorithms. Code and models are made available on Github. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 534,016 |
2206.01515 | Understanding Deep Learning via Decision Boundary | This paper discovers that the neural network with lower decision boundary (DB) variability has better generalizability. Two new notions, algorithm DB variability and $(\epsilon, \eta)$-data DB variability, are proposed to measure the decision boundary variability from the algorithm and data perspectives. Extensive experiments show significant negative correlations between the decision boundary variability and the generalizability. From the theoretical view, two lower bounds based on algorithm DB variability are proposed and do not explicitly depend on the sample size. We also prove an upper bound of order $\mathcal{O}\left(\frac{1}{\sqrt{m}}+\epsilon+\eta\log\frac{1}{\eta}\right)$ based on data DB variability. The bound is convenient to estimate without the requirement of labels, and does not explicitly depend on the network size which is usually prohibitively large in deep learning. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 300,503 |
1406.6818 | Face Identification with Second-Order Pooling | Automatic face recognition has received significant performance improvement by developing specialised facial image representations. On the other hand, generic object recognition has rarely been applied to the face recognition. Spatial pyramid pooling of features encoded by an over-complete dictionary has been the key component of many state-of-the-art image classification systems. Inspired by its success, in this work we develop a new face image representation method inspired by the second-order pooling in Carreira et al. [1], which was originally proposed for image segmentation. The proposed method differs from the previous methods in that, we encode the densely extracted local patches by a small-size dictionary; and the facial image signatures are obtained by pooling the second-order statistics of the encoded features. We show the importance of pooling on encoded features, which is bypassed by the original second-order pooling method to avoid the high computational cost. Equipped with a simple linear classifier, the proposed method outperforms the state-of-the-art face identification performance by large margins. For example, on the LFW databases, the proposed method performs better than the previous best by around 13% accuracy. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 34,156 |
1904.00634 | CFSNet: Toward a Controllable Feature Space for Image Restoration | Deep learning methods have witnessed the great progress in image restoration with specific metrics (e.g., PSNR, SSIM). However, the perceptual quality of the restored image is relatively subjective, and it is necessary for users to control the reconstruction result according to personal preferences or image characteristics, which cannot be done using existing deterministic networks. This motivates us to exquisitely design a unified interactive framework for general image restoration tasks. Under this framework, users can control continuous transition of different objectives, e.g., the perception-distortion trade-off of image super-resolution, the trade-off between noise reduction and detail preservation. We achieve this goal by controlling the latent features of the designed network. To be specific, our proposed framework, named Controllable Feature Space Network (CFSNet), is entangled by two branches based on different objectives. Our framework can adaptively learn the coupling coefficients of different layers and channels, which provides finer control of the restored image quality. Experiments on several typical image restoration tasks fully validate the effective benefits of the proposed method. Code is available at https://github.com/qibao77/CFSNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,919 |