Dataset schema (each record below lists id, title, abstract, the 18 boolean label columns, and a row index):
- id: string, 9 to 16 characters
- title: string, 4 to 278 characters
- abstract: string, 3 to 4.08k characters
- label columns (bool, 2 classes each, in this order): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k
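A minimal sketch of how one row's 18 boolean label columns map back to arXiv category names, using the column order from the schema above. The helper name `active_labels` is hypothetical, not part of any dataset API:

```python
# Hypothetical helper: decode one row's 18 boolean label columns into
# arXiv category names, using the column order given in the schema.

LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(flags):
    """Return the category names whose flag is True.

    `flags` holds the row's 18 label values in schema order.
    """
    return [name for name, flag in zip(LABEL_COLUMNS, flags) if flag]

# First record in this preview: cs.HC (1st column) and cs.IR (6th) are True.
flags = [True, False, False, False, False, True] + [False] * 12
print(active_labels(flags))  # ['cs.HC', 'cs.IR']
```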
arXiv:2409.11629 | Designing Interfaces for Multimodal Vector Search Applications
Multimodal vector search offers a new paradigm for information retrieval by exposing numerous pieces of functionality which are not possible in traditional lexical search engines. While multimodal vector search can be treated as a drop-in replacement for these traditional systems, the experience can be significantly enhanced by leveraging the unique capabilities of multimodal search. Central to any information retrieval system is a user who expresses an information need. Traditional user interfaces with a single search bar allow users to interact with lexical search systems effectively; however, they are not necessarily optimal for multimodal vector search. In this paper we explore novel capabilities of multimodal vector search applications utilising CLIP models and present implementations and design patterns which better allow users to express their information needs and effectively interact with these systems in an information retrieval context.
Labels: cs.HC, cs.IR | __index_level_0__: 489,241
arXiv:2412.01189 | MiningGPT -- A Domain-Specific Large Language Model for the Mining Industry
Recent advancements of generative LLMs (Large Language Models) have exhibited human-like language capabilities but have shown a lack of domain-specific understanding. Therefore, the research community has started the development of domain-specific LLMs for many domains. In this work we focus on discussing how to build mining domain-specific LLMs, as the global mining industry contributes significantly to the worldwide economy. We report on MiningGPT, a mining domain-specific instruction-following 7B parameter LLM model which showed a 14% higher mining domain knowledge test score as compared to its parent model Mistral 7B instruct.
Labels: cs.CL | __index_level_0__: 512,998
arXiv:2211.01478 | A machine learning model to identify corruption in México's public procurement contracts
The costs and impacts of government corruption range from impairing a country's economic growth to affecting its citizens' well-being and safety. Public contracting between government dependencies and private sector instances, referred to as public procurement, is a fertile land of opportunity for corrupt practices, generating substantial monetary losses worldwide. Thus, identifying and deterring corrupt activities between the government and the private sector is paramount. However, due to several factors, corruption in public procurement is challenging to identify and track, leading to corrupt practices going unnoticed. This paper proposes a machine learning model based on an ensemble of random forest classifiers, which we call hyper-forest, to identify and predict corrupt contracts in México's public procurement data. This method's results correctly detect most of the corrupt and non-corrupt contracts evaluated in the dataset. Furthermore, we found that the most critical predictors considered in the model are those related to the relationship between buyers and suppliers rather than those related to features of individual contracts. Also, the method proposed here is general enough to be trained with data from other countries. Overall, our work presents a tool that can help in the decision-making process to identify, predict and analyze corruption in public procurement contracts.
Labels: cs.LG, cs.CY | __index_level_0__: 328,237
arXiv:2311.01852 | Optimisation of Active Space Debris Removal Missions With Multiple Targets Using Quantum Annealing
A strategy for the analysis of active debris removal missions targeting multiple objects from a set of objects in near-circular orbit with similar inclination is presented. Algebraic techniques successfully reduce the orbital mechanics regarding specific inter-debris transfer and disposal methods to simple computations, which can be used as the coefficients of a quadratic unconstrained binary optimisation (QUBO) problem formulation which minimises the total propellant used in the mission whilst allowing for servicing time and meeting the mission deadline. The QUBO is validated by solving artificial small problems (from 2 to 11 debris) using classical computational methods and the weaknesses in using these methods are examined prior to solution using quantum annealing hardware. The quantum processing unit (QPU) and quantum-classical hybrid solvers provided by D-Wave are then used to solve the same small problems, with attention paid to evident strengths and weaknesses of each approach. Hybrid solvers are found to be significantly more effective at solving larger problems. Finally, the hybrid method is used to solve a large problem using a real dataset. From a set of 79 debris objects resulting from the destruction of the Kosmos-1408 satellite, an active debris removal mission starting on 30 September 2023 targeting 5 debris objects for disposal within a year with 20 days servicing time per object is successfully planned. This plan calculates the total propellant cost of transfer and disposal to be 0.87km/s and would be complete well within the deadline at 241 days from the start date. This problem uses 6,478 binary variables in total and is solved using around 25s of QPU access time.
Labels: cs.CE | __index_level_0__: 405,203
arXiv:1705.10245 | Deep Learning for Patient-Specific Kidney Graft Survival Analysis
An accurate model of patient-specific kidney graft survival distributions can help to improve shared decision-making in the treatment and care of patients. In this paper, we propose a deep learning method that directly models the survival function instead of estimating the hazard function to predict survival times for graft patients based on the principle of multi-task learning. By learning to jointly predict the time of the event and its rank in the Cox partial log-likelihood framework, our deep learning approach outperforms, in terms of survival time prediction quality and concordance index, other common methods for survival analysis, including the Cox Proportional Hazards model and a network trained on the Cox partial log-likelihood.
Labels: cs.LG | __index_level_0__: 74,358
arXiv:2108.13046 | Data-driven Small-signal Modeling for Converter-based Power Systems
This article details a complete procedure to derive a data-driven small-signal-based model useful to perform converter-based power system related studies. To compute the model, Decision Tree (DT) regression, both using single DT and ensemble DT, and Spline regression have been employed and their performances have been compared, in terms of accuracy, training and computing time. The methodology includes a comprehensive step-by-step procedure to develop the model: data generation by conventional simulation and mathematical models, databases (DBs) arrangement, regression training and testing, realizing prediction for new instances. The methodology has been developed using an essential network and then tested on a more complex system, to show the validity and usefulness of the suggested approach. Both power systems test cases have the essential characteristics of converter-based power systems, simulating high penetration of converter interfaced generation and the presence of HVDC links. Moreover, it is proposed how to represent in a visual manner the results of the small-signal stability analysis for a wide range of system operating conditions, exploiting DT regressions. Finally, the possible applications of the model are discussed, highlighting the potential of the developed model in further power system small-signal related studies.
Labels: cs.LG, cs.SY | __index_level_0__: 252,694
arXiv:1803.09010 | Datasheets for Datasets
The machine learning community currently has no standardized process for documenting datasets, which can lead to severe consequences in high-stakes domains. To address this gap, we propose datasheets for datasets. In the electronics industry, every component, no matter how simple or complex, is accompanied with a datasheet that describes its operating characteristics, test results, recommended uses, and other information. By analogy, we propose that every dataset be accompanied with a datasheet that documents its motivation, composition, collection process, recommended uses, and so on. Datasheets for datasets will facilitate better communication between dataset creators and dataset consumers, and encourage the machine learning community to prioritize transparency and accountability.
Labels: cs.AI, cs.LG, cs.DB | __index_level_0__: 93,396
arXiv:2203.15758 | A Sparsity-promoting Dictionary Model for Variational Autoencoders
Structuring the latent space in probabilistic deep generative models, e.g., variational autoencoders (VAEs), is important to yield more expressive models and interpretable representations, and to avoid overfitting. One way to achieve this objective is to impose a sparsity constraint on the latent variables, e.g., via a Laplace prior. However, such approaches usually complicate the training phase, and they sacrifice the reconstruction quality to promote sparsity. In this paper, we propose a simple yet effective methodology to structure the latent space via a sparsity-promoting dictionary model, which assumes that each latent code can be written as a sparse linear combination of a dictionary's columns. In particular, we leverage a computationally efficient and tuning-free method, which relies on a zero-mean Gaussian latent prior with learnable variances. We derive a variational inference scheme to train the model. Experiments on speech generative modeling demonstrate the advantage of the proposed approach over competing techniques, since it promotes sparsity while not deteriorating the output speech quality.
Labels: cs.SD, cs.LG | __index_level_0__: 288,524
arXiv:2304.03031 | Evidentiality-aware Retrieval for Overcoming Abstractiveness in Open-Domain Question Answering
The long-standing goal of dense retrievers in abstractive open-domain question answering (ODQA) tasks is to learn to capture evidence passages among relevant passages for any given query, such that the reader produces factually correct outputs from evidence passages. One of the key challenges is the insufficient amount of training data with supervision of the answerability of the passages. Recent studies rely on iterative pipelines to annotate answerability using signals from the reader, but their high computational costs hamper practical applications. In this paper, we instead focus on a data-centric approach and propose Evidentiality-Aware Dense Passage Retrieval (EADPR), which leverages synthetic distractor samples to learn to discriminate evidence passages from distractors. We conduct extensive experiments to validate the effectiveness of our proposed method on multiple abstractive ODQA tasks.
Labels: cs.AI | __index_level_0__: 356,645
arXiv:2212.09885 | Python Code Generation by Asking Clarification Questions
Code generation from text requires understanding the user's intent from a natural language description and generating an executable code snippet that satisfies this intent. While recent pretrained language models demonstrate remarkable performance for this task, these models fail when the given natural language description is under-specified. In this work, we introduce a novel and more realistic setup for this task. We hypothesize that the under-specification of a natural language description can be resolved by asking clarification questions. Therefore, we collect and introduce a new dataset named CodeClarQA containing pairs of natural language descriptions and code with created synthetic clarification questions and answers. The empirical results of our evaluation of pretrained language model performance on code generation show that clarifications result in more precisely generated code, as shown by the substantial improvement of model performance in all evaluation metrics. Alongside this, our task and dataset introduce new challenges to the community, including when and what clarification questions should be asked. Our code and dataset are available on GitHub.
Labels: cs.CL | __index_level_0__: 337,244
arXiv:2310.12036 | A General Theoretical Paradigm to Understand Learning from Human Preferences
The prevalent deployment of learning from human preferences through reinforcement learning (RLHF) relies on two important approximations: the first assumes that pairwise preferences can be substituted with pointwise rewards; the second assumes that a reward model trained on these pointwise rewards can generalize from collected data to out-of-distribution data sampled by the policy. Recently, Direct Preference Optimisation (DPO) has been proposed as an approach that bypasses the second approximation and learns a policy directly from collected data without the reward modelling stage. However, this method still heavily relies on the first approximation. In this paper we try to gain a deeper theoretical understanding of these practical algorithms. In particular we derive a new general objective called $\Psi$PO for learning from human preferences that is expressed in terms of pairwise preferences and therefore bypasses both approximations. This new general objective allows us to perform an in-depth analysis of the behavior of RLHF and DPO (as special cases of $\Psi$PO) and to identify their potential pitfalls. We then consider another special case for $\Psi$PO by setting $\Psi$ simply to Identity, for which we can derive an efficient optimisation procedure, prove performance guarantees and demonstrate its empirical superiority to DPO on some illustrative examples.
Labels: cs.AI, cs.LG | __index_level_0__: 400,879
arXiv:2411.15396 | The Decoy Dilemma in Online Medical Information Evaluation: A Comparative Study of Credibility Assessments by LLM and Human Judges
Can AI be cognitively biased in automated information judgment tasks? Despite recent progress in measuring and mitigating social and algorithmic biases in AI and large language models (LLMs), it is not clear to what extent LLMs behave "rationally", or whether they are also vulnerable to human cognitive bias triggers. To address this open problem, our study, consisting of a crowdsourcing user experiment and an LLM-enabled simulation experiment, compared the credibility assessments by LLM and human judges under potential decoy effects in an information retrieval (IR) setting, and empirically examined the extent to which LLMs are cognitively biased in COVID-19 medical (mis)information assessment tasks compared to traditional human assessors as a baseline. The results, collected from a between-subject user experiment and an LLM-enabled replication experiment, demonstrate that 1) larger and more recent LLMs tend to show a higher level of consistency and accuracy in distinguishing credible information from misinformation; however, they are more likely to give higher ratings to misinformation due to the presence of a more salient, decoy misinformation result; 2) while the decoy effect occurred in both human and LLM assessments, the effect is more prevalent across different conditions and topics in LLM judgments compared to human credibility ratings. In contrast to the generally assumed "rationality" of AI tools, our study empirically confirms the cognitive bias risks embedded in LLM agents, evaluates the decoy impact on LLMs against human credibility assessments, and thereby highlights the complexity and importance of debiasing AI agents and developing psychology-informed AI audit techniques and policies for automated judgment tasks and beyond.
Labels: cs.HC, cs.AI, cs.IR | __index_level_0__: 510,592
arXiv:2307.11476 | Data-driven dual-loop control for platooning mixed human-driven and automated vehicles
This paper considers controlling automated vehicles (AVs) to form a platoon with human-driven vehicles (HVs) under consideration of unknown HV model parameters and propulsion time constants. The proposed design is a data-driven dual-loop control strategy for the ego AVs, where the inner loop controller ensures platoon stability and the outer loop controller keeps a safe inter-vehicular spacing under control input limits. The inner loop controller is a constant-gain state feedback controller solved from a semidefinite program (SDP) using the online collected data of platooning errors. The outer loop is a model predictive control (MPC) that embeds a data-driven internal model to predict the future platooning error evolution. The proposed design is evaluated on a mixed platoon with a representative aggressive reference velocity profile, the SFTP-US06 Drive Cycle. The results confirm efficacy of the design and its advantages over the existing single loop data-driven MPC in terms of platoon stability and computational cost.
Labels: cs.RO, cs.SY | __index_level_0__: 380,922
arXiv:2409.16456 | Communication and Energy Efficient Federated Learning using Zero-Order Optimization Technique
Federated learning (FL) is a popular machine learning technique that enables multiple users to collaboratively train a model while maintaining user data privacy. A significant challenge in FL is the communication bottleneck in the upload direction, and thus the corresponding energy consumption of the devices, attributed to the increasing size of the model/gradient. In this paper, we address this issue by proposing a zero-order (ZO) optimization method that requires each device to upload a quantized single scalar per iteration instead of the whole gradient vector. We prove its theoretical convergence and find an upper bound on its convergence rate in the non-convex setting, and we discuss its implementation in practical scenarios. Our FL method and the corresponding convergence analysis take into account the impact of quantization and packet dropping due to wireless errors. We also show the superiority of our method, in terms of communication overhead and energy consumption, as compared to standard gradient-based FL methods.
Labels: cs.LG, Other | __index_level_0__: 491,349
arXiv:2210.04987 | Loop Unrolled Shallow Equilibrium Regularizer (LUSER) -- A Memory-Efficient Inverse Problem Solver
In inverse problems we aim to reconstruct some underlying signal of interest from potentially corrupted and often ill-posed measurements. Classical optimization-based techniques proceed by optimizing a data consistency metric together with a regularizer. Current state-of-the-art machine learning approaches draw inspiration from such techniques by unrolling the iterative updates for an optimization-based solver and then learning a regularizer from data. This loop unrolling (LU) method has shown tremendous success, but often requires a deep model for the best performance leading to high memory costs during training. Thus, to address the balance between computation cost and network expressiveness, we propose an LU algorithm with shallow equilibrium regularizers (LUSER). These implicit models are as expressive as deeper convolutional networks, but far more memory efficient during training. The proposed method is evaluated on image deblurring, computed tomography (CT), as well as single-coil Magnetic Resonance Imaging (MRI) tasks and shows similar, or even better, performance while requiring up to 8 times less computational resources during training when compared against a more typical LU architecture with feedforward convolutional regularizers.
Labels: cs.CV | __index_level_0__: 322,649
arXiv:2008.09706 | Detecting and Classifying Malevolent Dialogue Responses: Taxonomy, Data and Methodology
Conversational interfaces are increasingly popular as a way of connecting people to information. Corpus-based conversational interfaces are able to generate more diverse and natural responses than template-based or retrieval-based agents. With the increased generative capacity of corpus-based conversational agents comes the need to classify and filter out malevolent responses that are inappropriate in terms of content and dialogue acts. Previous studies on the topic of recognizing and classifying inappropriate content are mostly focused on a certain category of malevolence or on single sentences instead of an entire dialogue. In this paper, we define the task of Malevolent Dialogue Response Detection and Classification (MDRDC). We make three contributions to advance research on this task. First, we present a Hierarchical Malevolent Dialogue Taxonomy (HMDT). Second, we create a labelled multi-turn dialogue dataset and formulate the MDRDC task as a hierarchical classification task over this taxonomy. Third, we apply state-of-the-art text classification methods to the MDRDC task and report on extensive experiments aimed at assessing the performance of these approaches.
Labels: cs.IR, cs.CL | __index_level_0__: 192,798
arXiv:2404.06389 | Raster Forge: Interactive Raster Manipulation Library and GUI for Python
Raster Forge is a Python library and graphical user interface for raster data manipulation and analysis. The tool is focused on remote sensing applications, particularly in wildfire management. It allows users to import, visualize, and process raster layers for tasks such as image compositing or topographical analysis. For wildfire management, it generates fuel maps using predefined models. Its impact extends from disaster management to hydrological modeling, agriculture, and environmental monitoring. Raster Forge can be a valuable asset for geoscientists and researchers who rely on raster data analysis, enhancing geospatial data processing and visualization across various disciplines.
Labels: cs.CV, cs.CY, Other | __index_level_0__: 445,446
arXiv:2111.05576 | Topic-aware latent models for representation learning on networks
Network representation learning (NRL) methods have received significant attention over the last years thanks to their success in several graph analysis problems, including node classification, link prediction, and clustering. Such methods aim to map each vertex of the network into a low-dimensional space in a way that the structural information of the network is preserved. Of particular interest are methods based on random walks; such methods transform the network into a collection of node sequences, aiming to learn node representations by predicting the context of each node within the sequence. In this paper, we introduce TNE, a generic framework to enhance the embeddings of nodes acquired by means of random walk-based approaches with topic-based information. Similar to the concept of topical word embeddings in Natural Language Processing, the proposed model first assigns each node to a latent community with the favor of various statistical graph models and community detection methods and then learns the enhanced topic-aware representations. We evaluate our methodology in two downstream tasks: node classification and link prediction. The experimental results demonstrate that by incorporating node and community embeddings, we are able to outperform widely-known baseline NRL models.
Labels: cs.SI, cs.LG | __index_level_0__: 265,835
arXiv:2105.08336 | Exemplar-Based Open-Set Panoptic Segmentation Network
We extend panoptic segmentation to the open-world and introduce an open-set panoptic segmentation (OPS) task. This task requires performing panoptic segmentation for not only known classes but also unknown ones that have not been acknowledged during training. We investigate the practical challenges of the task and construct a benchmark on top of an existing dataset, COCO. In addition, we propose a novel exemplar-based open-set panoptic segmentation network (EOPSN) inspired by exemplar theory. Our approach identifies a new class based on exemplars, which are identified by clustering and employed as pseudo-ground-truths. The size of each class increases by mining new exemplars based on the similarities to the existing ones associated with the class. We evaluate EOPSN on the proposed benchmark and demonstrate the effectiveness of our proposals. The primary goal of our work is to draw the attention of the community to the recognition in the open-world scenarios. The implementation of our algorithm is available on the project webpage: https://cv.snu.ac.kr/research/EOPSN.
Labels: cs.CV | __index_level_0__: 235,735
arXiv:1703.10701 | Concurrent Segmentation and Localization for Tracking of Surgical Instruments
Real-time instrument tracking is a crucial requirement for various computer-assisted interventions. In order to overcome problems such as specular reflections and motion blur, we propose a novel method that takes advantage of the interdependency between localization and segmentation of the surgical tool. In particular, we reformulate the 2D instrument pose estimation as heatmap regression and thereby enable a concurrent, robust and near real-time regression of both tasks via deep learning. As demonstrated by our experimental results, this modeling leads to a significantly improved performance than directly regressing the tool position and allows our method to outperform the state of the art on a Retinal Microsurgery benchmark and the MICCAI EndoVis Challenge 2015.
Labels: cs.CV | __index_level_0__: 70,960
arXiv:2102.12365 | Modelling SARS-CoV-2 coevolution with genetic algorithms
At the end of 2020, policy responses to the SARS-CoV-2 outbreak have been shaken by the emergence of virus variants, impacting public health and policy measures worldwide. The emergence of these strains, suspected to be more contagious, more severe, or even resistant to antibodies and vaccines, seems to have taken health services and policymakers by surprise, leaving them struggling to adapt to the new variants' constraints. Anticipating the emergence of these mutations in order to plan adequate policies ahead, and understanding how human behaviors may affect the evolution of viruses by coevolution, are key challenges. In this article, we propose coevolution with genetic algorithms (GAs) as a credible approach to model this relationship, highlighting its implications, potential and challenges. Because of their qualities of exploration of large spaces of possible solutions, capacity to generate novelty, and natural genetic focus, GAs are relevant for this issue. We present a dual GA model in which both viruses aiming for survival and policy measures aiming at minimising infection rates in the population competitively evolve. This artificial coevolution system may offer us a laboratory to "debug" our current policy measures, identify the weaknesses of our current strategies, and anticipate the evolution of the virus to plan ahead relevant policies. It also constitutes a decisive opportunity to develop new genetic algorithms capable of simulating much more complex objects. We highlight some structural innovations for GAs in that virus evolution context that may carry promising developments in evolutionary computation, artificial life and AI.
Labels: cs.MA, cs.NE | __index_level_0__: 221,712
arXiv:2309.11512 | Multidimensional well-being of US households at a fine spatial scale using fused household surveys: fusionACS
Social science often relies on surveys of households and individuals. Dozens of such surveys are regularly administered by the U.S. government. However, they field independent, unconnected samples with specialized questions, limiting research questions to those that can be answered by a single survey. The fusionACS project seeks to integrate data from multiple U.S. household surveys by statistically "fusing" variables from "donor" surveys onto American Community Survey (ACS) microdata. This results in an integrated microdataset of household attributes and well-being dimensions that can be analyzed to address research questions in ways that are not currently possible. The presented data comprise the fusion onto the ACS of select donor variables from the Residential Energy Consumption Survey (RECS) of 2015, the National Household Transportation Survey (NHTS) of 2017, the American Housing Survey (AHS) of 2019, and the Consumer Expenditure Survey - Interview (CEI) for the years 2015-2019. The underlying statistical techniques are included in an open-source $R$ package, fusionModel, that provides generic tools for the creation, analysis, and validation of fused microdata.
Labels: cs.LG | __index_level_0__: 393,436
arXiv:2011.08819 | Spatio-Temporal Analysis of Facial Actions using Lifecycle-Aware Capsule Networks
Most state-of-the-art approaches for Facial Action Unit (AU) detection rely upon evaluating facial expressions from static frames, encoding a snapshot of heightened facial activity. In real-world interactions, however, facial expressions are usually more subtle and evolve in a temporal manner requiring AU detection models to learn spatial as well as temporal information. In this paper, we focus on both spatial and spatio-temporal features encoding the temporal evolution of facial AU activation. For this purpose, we propose the Action Unit Lifecycle-Aware Capsule Network (AULA-Caps) that performs AU detection using both frame and sequence-level features. While at the frame-level the capsule layers of AULA-Caps learn spatial feature primitives to determine AU activations, at the sequence-level, it learns temporal dependencies between contiguous frames by focusing on relevant spatio-temporal segments in the sequence. The learnt feature capsules are routed together such that the model learns to selectively focus more on spatial or spatio-temporal information depending upon the AU lifecycle. The proposed model is evaluated on the commonly used BP4D and GFT benchmark datasets obtaining state-of-the-art results on both the datasets.
Labels: cs.LG, cs.CV | __index_level_0__: 207,004
arXiv:2402.05445 | Accurate LoRA-Finetuning Quantization of LLMs via Information Retention
The LoRA-finetuning quantization of LLMs has been extensively studied to obtain accurate yet compact LLMs for deployment on resource-constrained hardware. However, existing methods cause the quantized LLM to severely degrade and even fail to benefit from the finetuning of LoRA. This paper proposes a novel IR-QLoRA for pushing quantized LLMs with LoRA to be highly accurate through information retention. The proposed IR-QLoRA mainly relies on two technologies derived from the perspective of unified information: (1) statistics-based Information Calibration Quantization allows the quantized parameters of the LLM to retain original information accurately; (2) finetuning-based Information Elastic Connection lets LoRA utilize elastic representation transformation with diverse information. Comprehensive experiments show that IR-QLoRA can significantly improve accuracy across LLaMA and LLaMA2 families under 2-4 bit-widths, e.g., 4-bit LLaMA-7B achieves a 1.4% improvement on MMLU compared with the state-of-the-art methods. The significant performance gain requires only a tiny 0.31% additional time consumption, revealing the satisfactory efficiency of our IR-QLoRA. We highlight that IR-QLoRA enjoys excellent versatility, compatible with various frameworks (e.g., NormalFloat and Integer quantization) and brings general accuracy gains. The code is available at https://github.com/htqin/ir-qlora.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
427,867
2108.07886
Modulating Language Models with Emotions
Generating context-aware language that embodies diverse emotions is an important step towards building empathetic NLP systems. In this paper, we propose a formulation of modulated layer normalization -- a technique inspired by computer vision -- that allows us to use large-scale language models for emotional response generation. In automatic and human evaluation on the MojiTalk dataset, our proposed modulated layer normalization method outperforms prior baseline methods while maintaining diversity, fluency, and coherence. Our method also obtains competitive performance even when using only 10% of the available training data.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
251,047
1905.11015
Unsupervised Euclidean Distance Attack on Network Embedding
Considering the wide application of network embedding methods in graph data mining, inspired by the adversarial attack in deep learning, this paper proposes a Genetic Algorithm (GA) based Euclidean Distance Attack strategy (EDA) to attack the network embedding, so as to prevent certain structural information from being discovered. EDA focuses on disturbing the Euclidean distance between a pair of nodes in the embedding space as much as possible through minimal modifications of the network structure. Since a large number of downstream network algorithms, such as community detection and node classification, rely on the Euclidean distance between nodes to evaluate the similarity between them in the embedding space, EDA can be considered as a universal attack on a variety of network algorithms. Different from traditional supervised attack strategies, EDA does not need labeling information, and, in other words, is an unsupervised network embedding attack method.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
132,305
2008.05583
Impact of Disturbances on Mixed Traffic Control with Autonomous Vehicles
This paper investigates the impact of disturbances on controlling an autonomous vehicle to smooth mixed traffic flow in a ring road setup. By exploiting the ring structure of this system, it is shown that velocity perturbations impacting any vehicle on the ring enter an uncontrollable and marginally stable mode defined by the sum of relative vehicle spacings. These disturbances are then integrated up by the system and cannot be unwound via controlling the autonomous vehicle. In particular, if the velocity disturbances are zero-mean Gaussians, then the traffic flow on the ring will undergo a random walk with the variance growing indefinitely and independently of the control policy applied. In contrast, the impact of acceleration disturbances is benign as these disturbances do not enter the uncontrollable mode, meaning that they can be easily regulated using the autonomous vehicle. Our results support and complement the existing theoretic analysis and field experiments.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
191,544
2402.10510
Human Goal Recognition as Bayesian Inference: Investigating the Impact of Actions, Timing, and Goal Solvability
Goal recognition is a fundamental cognitive process that enables individuals to infer intentions based on available cues. Current goal recognition algorithms often take only observed actions as input, but here we use a Bayesian framework to explore the role of actions, timing, and goal solvability in goal recognition. We analyze human responses to goal-recognition problems in the Sokoban domain, and find that actions are assigned most importance, but that timing and solvability also influence goal recognition in some cases, especially when actions are uninformative. We leverage these findings to develop a goal recognition model that matches human inferences more closely than do existing algorithms. Our work provides new insight into human goal recognition and takes a step towards more human-like AI models.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
429,996
2212.09900
Policy learning "without" overlap: Pessimism and generalized empirical Bernstein's inequality
This paper studies offline policy learning, which aims at utilizing observations collected a priori (from either fixed or adaptively evolving behavior policies) to learn an optimal individualized decision rule that achieves the best overall outcomes for a given population. Existing policy learning methods rely on a uniform overlap assumption, i.e., the propensities of exploring all actions for all individual characteristics must be lower bounded. As one has no control over the data collection process, this assumption can be unrealistic in many situations, especially when the behavior policies are allowed to evolve over time with diminishing propensities for certain actions. In this paper, we propose Pessimistic Policy Learning (PPL), a new algorithm that optimizes lower confidence bounds (LCBs) -- instead of point estimates -- of the policy values. The LCBs are constructed using knowledge of the behavior policies for collecting the offline data. Without assuming any uniform overlap condition, we establish a data-dependent upper bound for the suboptimality of our algorithm, which only depends on (i) the overlap for the optimal policy, and (ii) the complexity of the policy class we optimize over. As an implication, for adaptively collected data, we ensure efficient policy learning as long as the propensities for optimal actions are lower bounded over time, while those for suboptimal ones are allowed to diminish arbitrarily fast. In our theoretical analysis, we develop a new self-normalized type concentration inequality for inverse-propensity-weighting estimators, generalizing the well-known empirical Bernstein's inequality to unbounded and non-i.i.d. data. We complement our theory with an efficient optimization algorithm via Majorization-Minimization and policy tree search, as well as extensive simulation studies and real-world applications that demonstrate the efficacy of PPL.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
337,249
2102.10315
EXTRA: Explanation Ranking Datasets for Explainable Recommendation
Recently, research on explainable recommender systems has drawn much attention from both academia and industry, resulting in a variety of explainable models. As a consequence, their evaluation approaches vary from model to model, which makes it quite difficult to compare the explainability of different models. To achieve a standard way of evaluating recommendation explanations, we provide three benchmark datasets for EXplanaTion RAnking (denoted as EXTRA), on which explainability can be measured by ranking-oriented metrics. Constructing such datasets, however, poses great challenges. First, user-item-explanation triplet interactions are rare in existing recommender systems, so how to find alternatives becomes a challenge. Our solution is to identify nearly identical sentences from user reviews. This idea then leads to the second challenge, i.e., how to efficiently categorize the sentences in a dataset into different groups, since it has quadratic runtime complexity to estimate the similarity between any two sentences. To mitigate this issue, we provide a more efficient method based on Locality Sensitive Hashing (LSH) that can detect near-duplicates in sub-linear time for a given query. Moreover, we make our code publicly available to allow researchers in the community to create their own datasets.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
221,048
2411.14816
Unsupervised Multi-view UAV Image Geo-localization via Iterative Rendering
Unmanned Aerial Vehicle (UAV) Cross-View Geo-Localization (CVGL) presents significant challenges due to the view discrepancy between oblique UAV images and overhead satellite images. Existing methods heavily rely on the supervision of labeled datasets to extract viewpoint-invariant features for cross-view retrieval. However, these methods have expensive training costs and tend to overfit the region-specific cues, showing limited generalizability to new regions. To overcome this issue, we propose an unsupervised solution that lifts the scene representation to 3d space from UAV observations for satellite image generation, providing robust representation against view distortion. By generating orthogonal images that closely resemble satellite views, our method reduces view discrepancies in feature representation and mitigates shortcuts in region-specific image pairing. To further align the rendered image's perspective with the real one, we design an iterative camera pose updating mechanism that progressively modulates the rendered query image with potential satellite targets, eliminating spatial offsets relative to the reference images. Additionally, this iterative refinement strategy enhances cross-view feature invariance through view-consistent fusion across iterations. As such, our unsupervised paradigm naturally avoids the problem of region-specific overfitting, enabling generic CVGL for UAV images without feature fine-tuning or data-driven training. Experiments on the University-1652 and SUES-200 datasets demonstrate that our approach significantly improves geo-localization accuracy while maintaining robustness across diverse regions. Notably, without model fine-tuning or paired training, our method achieves competitive performance with recent supervised methods.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
510,337
2405.11014
The Arabic Noun System Generation
In this paper, we show that the multiple-stem approach to nouns with a broken plural pattern allows for greater generalizations to be stated in the morphological system. Such an approach dispenses with truncating/deleting rules and other complex rules that are required to account for the highly allomorphic broken plural system. The generation of inflected sound nouns necessitates a pre-specification of the affixes denoting the sound plural masculine and the sound plural feminine, namely uwna and aAt, in the lexicon. The first subsection of section one provides an evaluation of some of the previous analyses of the Arabic broken plural. We provide both linguistic and statistical evidence against deriving broken plurals from the singular or the root. In subsection two, we propose a multiple stem approach to the Arabic Noun Plural System within the Lexeme-based Morphology framework. In section two, we look at the noun inflection of Arabic. Section three provides an implementation of the Arabic Noun system in MORPHE. In this context, we show how the generalizations discussed in the linguistic analysis section are captured in MORPHE using the equivalencing nodes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
454,976
2401.14931
Do LLMs Dream of Ontologies?
Large Language Models (LLMs) have demonstrated remarkable performance across diverse natural language processing tasks, yet their ability to memorize structured knowledge remains underexplored. In this paper, we investigate the extent to which general-purpose pre-trained LLMs retain and correctly reproduce concept identifier (ID)-label associations from publicly available ontologies. We conduct a systematic evaluation across multiple ontological resources, including the Gene Ontology, Uberon, Wikidata, and ICD-10, using LLMs such as Pythia-12B, Gemini-1.5-Flash, GPT-3.5, and GPT-4. Our findings reveal that only a small fraction of ontological concepts is accurately memorized, with GPT-4 demonstrating the highest performance. To understand why certain concepts are memorized more effectively than others, we analyze the relationship between memorization accuracy and concept popularity on the Web. Our results indicate a strong correlation between the frequency of a concept's occurrence online and the likelihood of accurately retrieving its ID from the label. This suggests that LLMs primarily acquire such knowledge through indirect textual exposure rather than directly from structured ontological resources. Furthermore, we introduce new metrics to quantify prediction invariance, demonstrating that the stability of model responses across variations in prompt language and temperature settings can serve as a proxy for estimating memorization robustness.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
424,265
2407.15455
Score matching for bridges without time-reversals
We propose a new algorithm for learning a bridged diffusion process using score-matching methods. Our method relies on reversing the dynamics of the forward process and using this to learn a score function, which, via Doob's $h$-transform, gives us a bridged diffusion process; that is, a process conditioned on an endpoint. In contrast to prior methods, ours learns the score term $\nabla_x \log p(t, x; T, y)$, for given $t, y$ directly, completely avoiding the need for first learning a time reversal. We compare the performance of our algorithm with existing methods and see that it outperforms using the (learned) time-reversals to learn the score term. The code can be found at https://github.com/libbylbaker/forward_bridge.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
475,194
2003.06686
Perception of prosodic variation for speech synthesis using an unsupervised discrete representation of F0
In English, prosody adds a broad range of information to segment sequences, from information structure (e.g. contrast) to stylistic variation (e.g. expression of emotion). However, when learning to control prosody in text-to-speech voices, it is not clear what exactly the control is modifying. Existing research on discrete representation learning for prosody has demonstrated high naturalness, but no analysis has been performed on what these representations capture, or if they can generate meaningfully-distinct variants of an utterance. We present a phrase-level variational autoencoder with a multi-modal prior, using the mode centres as "intonation codes". Our evaluation establishes which intonation codes are perceptually distinct, finding that the intonation codes from our multi-modal latent model were significantly more distinct than a baseline using k-means clustering. We carry out a follow-up qualitative study to determine what information the codes are carrying. Most commonly, listeners commented on the intonation codes having a statement or question style. However, many other affect-related styles were also reported, including: emotional, uncertain, surprised, sarcastic, passive aggressive, and upset.
false
false
true
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
168,192
2401.04210
FunnyNet-W: Multimodal Learning of Funny Moments in Videos in the Wild
Automatically understanding funny moments (i.e., the moments that make people laugh) when watching comedy is challenging, as they relate to various features, such as body language, dialogues and culture. In this paper, we propose FunnyNet-W, a model that relies on cross- and self-attention for visual, audio and text data to predict funny moments in videos. Unlike most methods that rely on ground truth data in the form of subtitles, in this work we exploit modalities that come naturally with videos: (a) video frames as they contain visual information indispensable for scene understanding, (b) audio as it contains higher-level cues associated with funny moments, such as intonation, pitch and pauses and (c) text automatically extracted with a speech-to-text model as it can provide rich information when processed by a Large Language Model. To acquire labels for training, we propose an unsupervised approach that spots and labels funny audio moments. We provide experiments on five datasets: the sitcoms TBBT, MHD, MUStARD, Friends, and the TED talk UR-Funny. Extensive experiments and analysis show that FunnyNet-W successfully exploits visual, auditory and textual cues to identify funny moments, while our findings reveal FunnyNet-W's ability to predict funny moments in the wild. FunnyNet-W sets the new state of the art for funny moment detection with multimodal cues on all datasets with and without using ground truth information.
false
false
true
false
true
false
false
false
true
false
false
true
false
false
false
false
false
true
420,360
1910.06444
Building Damage Detection in Satellite Imagery Using Convolutional Neural Networks
In all types of disasters, from earthquakes to armed conflicts, aid workers need accurate and timely data such as damage to buildings and population displacement to mount an effective response. Remote sensing provides this data at an unprecedented scale, but extracting operationalizable information from satellite images is slow and labor-intensive. In this work, we use machine learning to automate the detection of building damage in satellite imagery. We compare the performance of four different convolutional neural network models in detecting damaged buildings in the 2010 Haiti earthquake. We also quantify how well the models will generalize to future disasters by training and testing models on different disaster events.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
149,335
2308.12198
Hierarchical Beam Alignment for Millimeter-Wave Communication Systems: A Deep Learning Approach
Fast and precise beam alignment is crucial for high-quality data transmission in millimeter-wave (mmWave) communication systems, where large-scale antenna arrays are utilized to overcome the severe propagation loss. To tackle the challenging problem, we propose a novel deep learning-based hierarchical beam alignment method for both multiple-input single-output (MISO) and multiple-input multiple-output (MIMO) systems, which learns two tiers of probing codebooks (PCs) and uses their measurements to predict the optimal beam in a coarse-to-fine search manner. Specifically, a hierarchical beam alignment network (HBAN) is developed for MISO systems, which first performs coarse channel measurement using a tier-1 PC, then selects a tier-2 PC for fine channel measurement, and finally predicts the optimal beam based on both coarse and fine measurements. The propounded HBAN is trained in two steps: the tier-1 PC and the tier-2 PC selector are first trained jointly, followed by the joint training of all the tier-2 PCs and beam predictors. Furthermore, an HBAN for MIMO systems is proposed to directly predict the optimal beam pair without performing beam alignment individually at the transmitter and receiver. Numerical results demonstrate that the proposed HBANs are superior to the state-of-the-art methods in both alignment accuracy and signaling overhead reduction.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
387,450
2001.03750
SympNets: Intrinsic structure-preserving symplectic networks for identifying Hamiltonian systems
We propose new symplectic networks (SympNets) for identifying Hamiltonian systems from data based on a composition of linear, activation and gradient modules. In particular, we define two classes of SympNets: the LA-SympNets composed of linear and activation modules, and the G-SympNets composed of gradient modules. Correspondingly, we prove two new universal approximation theorems that demonstrate that SympNets can approximate arbitrary symplectic maps based on appropriate activation functions. We then perform several experiments including the pendulum, double pendulum and three-body problems to investigate the expressivity and the generalization ability of SympNets. The simulation results show that even very small size SympNets can generalize well, and are able to handle both separable and non-separable Hamiltonian systems with data points resulting from short or long time steps. In all the test cases, SympNets outperform the baseline models, and are much faster in training and prediction. We also develop an extended version of SympNets to learn the dynamics from irregularly sampled data. This extended version of SympNets can be thought of as a universal model representing the solution to an arbitrary Hamiltonian system.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
160,048
0908.0163
An Improvement of Cover/El Gamal's Compress-and-Forward Relay Scheme
The compress-and-forward relay scheme developed by Cover and El Gamal (1979) is improved with a modification on the decoding process. The improvement follows as a result of realizing that it is not necessary for the destination to decode the compressed observation of the relay; and even if the compressed observation is to be decoded, it can be more easily done by joint decoding with the original message, rather than in a successive way. An extension to multiple relays is also discussed.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
4,199
2005.00728
RMM: A Recursive Mental Model for Dialog Navigation
Language-guided robots must be able to both ask humans questions and understand answers. Much existing work focuses only on the latter. In this paper, we go beyond instruction following and introduce a two-agent task where one agent navigates and asks questions that a second, guiding agent answers. Inspired by theory of mind, we propose the Recursive Mental Model (RMM). The navigating agent models the guiding agent to simulate answers given candidate generated questions. The guiding agent in turn models the navigating agent to simulate navigation steps it would take to generate answers. We use the progress agents make towards the goal as a reinforcement learning reward signal to directly inform not only navigation actions, but also both question and answer generation. We demonstrate that RMM enables better generalization to novel environments. Interlocutor modelling may be a way forward for human-agent dialogue where robots need to both ask and answer questions.
false
false
false
false
true
false
true
true
true
false
false
true
false
false
false
false
false
false
175,355
2111.00514
Visualization: the missing factor in Simultaneous Speech Translation
Simultaneous speech translation (SimulST) is the task in which output generation has to be performed on partial, incremental speech input. In recent years, SimulST has become popular due to the spread of cross-lingual application scenarios, like international live conferences and streaming lectures, in which on-the-fly speech translation can facilitate users' access to audio-visual content. In this paper, we analyze the characteristics of the SimulST systems developed so far, discussing their strengths and weaknesses. We then concentrate on the evaluation framework required to properly assess systems' effectiveness. To this end, we raise the need for a broader performance analysis, also including the user experience standpoint. SimulST systems, indeed, should be evaluated not only in terms of quality/latency measures, but also via task-oriented metrics accounting, for instance, for the visualization strategy adopted. In light of this, we highlight which are the goals achieved by the community and what is still missing.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
264,243
2202.04219
Improving Computational Complexity in Statistical Models with Second-Order Information
It is known that when the statistical models are singular, i.e., the Fisher information matrix at the true parameter is degenerate, the fixed step-size gradient descent algorithm takes polynomial number of steps in terms of the sample size $n$ to converge to a final statistical radius around the true parameter, which can be unsatisfactory for the application. To further improve that computational complexity, we consider the utilization of the second-order information in the design of optimization algorithms. Specifically, we study the normalized gradient descent (NormGD) algorithm for solving parameter estimation in parametric statistical models, which is a variant of gradient descent algorithm whose step size is scaled by the maximum eigenvalue of the Hessian matrix of the empirical loss function of statistical models. When the population loss function, i.e., the limit of the empirical loss function when $n$ goes to infinity, is homogeneous in all directions, we demonstrate that the NormGD iterates reach a final statistical radius around the true parameter after a logarithmic number of iterations in terms of $n$. Therefore, for fixed dimension $d$, the NormGD algorithm achieves the optimal overall computational complexity $\mathcal{O}(n)$ to reach the final statistical radius. This computational complexity is cheaper than that of the fixed step-size gradient descent algorithm, which is of the order $\mathcal{O}(n^{\tau})$ for some $\tau > 1$, to reach the same statistical radius. We illustrate our general theory under two statistical models: generalized linear models and mixture models, and experimental results support our prediction with general theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
279,491
2110.12765
"So You Think You're Funny?": Rating the Humour Quotient in Standup Comedy
Computational Humour (CH) has attracted the interest of Natural Language Processing and Computational Linguistics communities. Creating datasets for automatic measurement of humour quotient is difficult due to multiple possible interpretations of the content. In this work, we create a multi-modal humour-annotated dataset ($\sim$40 hours) using stand-up comedy clips. We devise a novel scoring mechanism to annotate the training data with a humour quotient score using the audience's laughter. The normalized duration (laughter duration divided by the clip duration) of laughter in each clip is used to compute this humour coefficient score on a five-point scale (0-4). This method of scoring is validated by comparing with manually annotated scores, wherein a quadratic weighted kappa of 0.6 is obtained. We use this dataset to train a model that provides a "funniness" score, on a five-point scale, given the audio and its corresponding text. We compare various neural language models for the task of humour-rating and achieve an accuracy of $0.813$ in terms of Quadratic Weighted Kappa (QWK). Our "Open Mic" dataset is released for further research along with the code.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
262,958
2501.11881
Channel Resolvability Using Multiplicative Weight Update Algorithm
We study the channel resolvability problem, which is used to prove strong converse of identification via channel. Channel resolvability has been solved by only random coding in the literature. We prove channel resolvability using the multiplicative weight update algorithm. This is the first approach to channel resolvability using non-random coding.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
526,076
2210.16107
SeaDroneSim: Simulation of Aerial Images for Detection of Objects Above Water
Unmanned Aerial Vehicles (UAVs) are known for their fast and versatile applicability. With UAVs' growth in availability and applications, they are now of vital importance in serving as technological support in search-and-rescue (SAR) operations in marine environments. High-resolution cameras and GPUs can be equipped on the UAVs to provide effective and efficient aid to emergency rescue operations. With modern computer vision algorithms, we can detect objects for aiming such rescue missions. However, these modern computer vision algorithms are dependent on large amounts of training data from UAVs, which is time-consuming and labor-intensive for maritime environments. To this end, we present a new benchmark suite, SeaDroneSim, that can be used to create photo-realistic aerial image datasets with the ground truth for segmentation masks of any given object. Utilizing only the synthetic data generated from SeaDroneSim, we obtain 71 mAP on real aerial images for detecting BlueROV as a feasibility study. This result from the new simulation suite also serves as a baseline for the detection of BlueROV.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
327,227
2102.12896
Predicting times of waiting on red signals using BERT
We present a method for approximating outcomes of road traffic simulations using BERT-based models, which may find applications in, e.g., optimizing traffic signal settings, especially with the presence of autonomous and connected vehicles. The experiments were conducted on a dataset generated using the Traffic Simulation Framework software runs on a realistic road network. The BERT-based models were compared with 4 other types of machine learning models (LightGBM, fully connected neural networks and 2 types of graph neural networks) and gave the best results in terms of all the considered metrics.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
221,876
1808.06022
Characterizing Transgender Health Issues in Twitter
Although there are millions of transgender people in the world, a lack of information exists about their health issues. This issue has consequences for the medical field, which only has a nascent understanding of how to identify and meet this population's health-related needs. Social media sites like Twitter provide new opportunities for transgender people to overcome these barriers by sharing their personal health experiences. Our research employs a computational framework to collect tweets from self-identified transgender users, detect those that are health-related, and identify their information needs. This framework is significant because it provides a macro-scale perspective on an issue that lacks investigation at national or demographic levels. Our findings identified 54 distinct health-related topics that we grouped into 7 broader categories. Further, we found both linguistic and topical differences in the health-related information shared by transgender men (TM) as compared to transgender women (TW). These findings can help inform medical and policy-based strategies for health interventions within transgender communities. Also, our proposed approach can inform the development of computational strategies to identify the health-related information needs of other marginalized populations.
false
false
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
105,456
2004.04069
Convolutional neural net face recognition works in non-human-like ways
Convolutional neural networks (CNNs) give state of the art performance in many pattern recognition problems but can be fooled by carefully crafted patterns of noise. We report that CNN face recognition systems also make surprising "errors". We tested six commercial face recognition CNNs and found that they outperform typical human participants on standard face matching tasks. However, they also declare matches that humans would not, where one image from the pair has been transformed to look a different sex or race. This is not due to poor performance; the best CNNs perform almost perfectly on the human face matching tasks, but also declare the most matches for faces of a different apparent race or sex. Although differing on the salience of sex and race, humans and computer systems are not working in completely different ways. They tend to find the same pairs of images difficult, suggesting some agreement about the underlying similarity space.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
171,768
2012.04428
A General Computational Framework to Measure the Expressiveness of Complex Networks Using a Tighter Upper Bound of Linear Regions
The expressiveness of deep neural network (DNN) is a perspective to understand the surprising performance of DNN. The number of linear regions, i.e. pieces that a piece-wise-linear function represented by a DNN, is generally used to measure the expressiveness. And the upper bound of regions number partitioned by a rectifier network, instead of the number itself, is a more practical measurement of expressiveness of a rectifier DNN. In this work, we propose a new and tighter upper bound of regions number. Inspired by the proof of this upper bound and the framework of matrix computation in Hinz & Van de Geer (2019), we propose a general computational approach to compute a tight upper bound of regions number for theoretically any network structures (e.g. DNN with all kind of skip connections and residual structures). Our experiments show our upper bound is tighter than existing ones, and explain why skip connections and residual structures can improve network performance.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
210,453
2206.09182
Coin Flipping Neural Networks
We show that neural networks with access to randomness can outperform deterministic networks by using amplification. We call such networks Coin-Flipping Neural Networks, or CFNNs. We show that a CFNN can approximate the indicator of a $d$-dimensional ball to arbitrary accuracy with only 2 layers and $\mathcal{O}(1)$ neurons, where a 2-layer deterministic network was shown to require $\Omega(e^d)$ neurons, an exponential improvement (arXiv:1610.09887). We prove a highly non-trivial result, that for almost any classification problem, there exists a trivially simple network that solves it given a sufficiently powerful generator for the network's weights. Combining these results we conjecture that for most classification problems, there is a CFNN which solves them with higher accuracy or fewer neurons than any deterministic network. Finally, we verify our proofs experimentally using novel CFNN architectures on CIFAR10 and CIFAR100, reaching an improvement of 9.25\% from the baseline.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
303,461
2111.07507
Convergence and Equilibria Analysis of a Networked Bivirus Epidemic Model
This paper studies a networked bivirus model, in which two competing viruses spread across a network of interconnected populations; each node represents a population with a large number of individuals. The viruses may spread through possibly different network structures, and an individual cannot be simultaneously infected with both viruses. Focusing on convergence and equilibria analysis, a number of new results are provided. First, we show that for networks with generic system parameters, there exist a finite number of equilibria. Exploiting monotone systems theory, we further prove that for bivirus networks with generic system parameters, then convergence to an equilibrium occurs for all initial conditions, except possibly for a set of measure zero. Given the network structure of one virus, a method is presented to construct an infinite family of network structures for the other virus that results in an infinite number of equilibria in which both viruses coexist. Necessary and sufficient conditions are derived for the local stability/instability of boundary equilibria, in which one virus is present and the other is extinct. A sufficient condition for a boundary equilibrium to be almost globally stable is presented. Then, we show how to use monotone systems theory to generate conclusions on the ordering of stable and unstable equilibria, and in some instances identify the number of equilibria via rapid simulation testing. Last, we provide an analytical method for computing equilibria in networks with only two nodes, and show that it is possible for a bivirus network to have an unstable coexistence equilibrium and two locally stable boundary equilibria.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
266,398
2405.03929
Unicorn: U-Net for Sea Ice Forecasting with Convolutional Neural Ordinary Differential Equations
Sea ice at the North Pole is vital to global climate dynamics. However, accurately forecasting sea ice poses a significant challenge due to the intricate interaction among multiple variables. Leveraging the capability to integrate multiple inputs and powerful performances seamlessly, many studies have turned to neural networks for sea ice forecasting. This paper introduces a novel deep architecture named Unicorn, designed to forecast weekly sea ice. Our model integrates multiple time series images within its architecture to enhance its forecasting performance. Moreover, we incorporate a bottleneck layer within the U-Net architecture, serving as neural ordinary differential equations with convolution operations, to capture the spatiotemporal dynamics of latent variables. Through real data analysis with datasets spanning from 1998 to 2021, our proposed model demonstrates significant improvements over state-of-the-art models in the sea ice concentration forecasting task. It achieves an average MAE improvement of 12% compared to benchmark models. Additionally, our method outperforms existing approaches in sea ice extent forecasting, achieving a classification performance improvement of approximately 18%. These experimental results show the superiority of our proposed model.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
452,365
2007.03204
Learning Branching Heuristics for Propositional Model Counting
Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
185,986
2303.03211
Using a Variational Autoencoder to Learn Valid Search Spaces of Safely Monitored Autonomous Robots for Last-Mile Delivery
The use of autonomous robots for delivery of goods to customers is an exciting new way to provide a reliable and sustainable service. However, in the real world, autonomous robots still require human supervision for safety reasons. We tackle the real-world problem of optimizing autonomous robot timings to maximize deliveries, while ensuring that there are never too many robots running simultaneously so that they can be monitored safely. We assess the use of a recent hybrid machine-learning optimization approach COIL (constrained optimization in learned latent space) and compare it with a baseline genetic algorithm for the purposes of exploring variations of this problem. We also investigate new methods for improving the speed and efficiency of COIL. We show that only COIL can find valid solutions where appropriate numbers of robots run simultaneously for all problem variations tested. We also show that when COIL has learned its latent representation, it can optimize 10% faster than the GA, making it a good choice for daily re-optimization of robots where delivery requests for each day are allocated to robots while maintaining safe numbers of robots running at once.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
false
false
349,648
2005.07656
A Deep Q-learning/genetic Algorithms Based Novel Methodology For Optimizing Covid-19 Pandemic Government Actions
Whenever countries are threatened by a pandemic, as is the case with the COVID-19 virus, governments should take the right actions to safeguard public health as well as to mitigate the negative effects on the economy. In this regard, there are two completely different approaches governments can take: a restrictive one, in which drastic measures such as self-isolation can seriously damage the economy, and a more liberal one, where more relaxed restrictions may put at risk a high percentage of the population. The optimal approach could be somewhere in between, and, in order to make the right decisions, it is necessary to accurately estimate the future effects of taking one or other measures. In this paper, we use the SEIR epidemiological model (Susceptible - Exposed - Infected - Recovered) for infectious diseases to represent the evolution of the virus COVID-19 over time in the population. To optimize the best sequences of actions governments can take, we propose a methodology with two approaches, one based on Deep Q-Learning and another one based on Genetic Algorithms. The sequences of actions (confinement, self-isolation, two-meter distance or not taking restrictions) are evaluated according to a reward system focused on meeting two objectives: firstly, getting few people infected so that hospitals are not overwhelmed with critical patients, and secondly, avoiding taking drastic measures for too long which can potentially cause serious damage to the economy. The conducted experiments prove that our methodology is a valid tool to discover actions governments can take to reduce the negative effects of a pandemic in both senses. We also prove that the approach based on Deep Q-Learning overcomes the one based on Genetic Algorithms for optimizing the sequences of actions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
177,341
2303.01869
Intrinsic Physical Concepts Discovery with Object-Centric Predictive Models
The ability to discover abstract physical concepts and understand how they work in the world through observing lies at the core of human intelligence. The acquisition of this ability is based on compositionally perceiving the environment in terms of objects and relations in an unsupervised manner. Recent approaches learn object-centric representations and capture visually observable concepts of objects, e.g., shape, size, and location. In this paper, we take a step forward and try to discover and represent intrinsic physical concepts such as mass and charge. We introduce the PHYsical Concepts Inference NEtwork (PHYCINE), a system that infers physical concepts in different abstract levels without supervision. The key insights underlining PHYCINE are two-fold, commonsense knowledge emerges with prediction, and physical concepts of different abstract levels should be reasoned in a bottom-up fashion. Empirical evaluation demonstrates that variables inferred by our system work in accordance with the properties of the corresponding physical concepts. We also show that object representations containing the discovered physical concepts variables could help achieve better performance in causal reasoning tasks, i.e., ComPhy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
349,139
2303.07489
MRET: Multi-resolution Transformer for Video Quality Assessment
No-reference video quality assessment (NR-VQA) for user generated content (UGC) is crucial for understanding and improving visual experience. Unlike video recognition tasks, VQA tasks are sensitive to changes in input resolution. Since large amounts of UGC videos nowadays are 720p or above, the fixed and relatively small input used in conventional NR-VQA methods results in missing high-frequency details for many videos. In this paper, we propose a novel Transformer-based NR-VQA framework that preserves the high-resolution quality information. With the multi-resolution input representation and a novel multi-resolution patch sampling mechanism, our method enables a comprehensive view of both the global video composition and local high-resolution details. The proposed approach can effectively aggregate quality information across different granularities in spatial and temporal dimensions, making the model robust to input resolution variations. Our method achieves state-of-the-art performance on large-scale UGC VQA datasets LSVQ and LSVQ-1080p, and on KoNViD-1k and LIVE-VQC without fine-tuning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
351,266
1603.02772
Unscented External Force and Torque Estimation for Quadrotors
In this paper, we describe an algorithm, based on the well-known Unscented Quaternion Estimator, to estimate external forces and torques acting on a quadrotor. This formulation uses a non-linear model for the quadrotor dynamics, naturally incorporates process and measurement noise, requires only a few parameters to be tuned manually, and uses singularity-free unit quaternions to represent attitude. We demonstrate in simulation that the proposed algorithm can outperform existing methods. We then highlight how our approach can be used to generate force and torque profiles from experimental data, and how this information can later be used for controller design. Finally, we show how the resulting controllers enable a quadrotor to stay in the wind field of a moving fan.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
53,055
1506.04209
A Flexible and Efficient Algorithmic Framework for Constrained Matrix and Tensor Factorization
We propose a general algorithmic framework for constrained matrix and tensor factorization, which is widely used in signal processing and machine learning. The new framework is a hybrid between alternating optimization (AO) and the alternating direction method of multipliers (ADMM): each matrix factor is updated in turn, using ADMM, hence the name AO-ADMM. This combination can naturally accommodate a great variety of constraints on the factor matrices, and almost all possible loss measures for the fitting. Computation caching and warm start strategies are used to ensure that each update is evaluated efficiently, while the outer AO framework exploits recent developments in block coordinate descent (BCD)-type methods which help ensure that every limit point is a stationary point, as well as faster and more robust convergence in practice. Three special cases are studied in detail: non-negative matrix/tensor factorization, constrained matrix/tensor completion, and dictionary learning. Extensive simulations and experiments with real data are used to showcase the effectiveness and broad applicability of the proposed framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
44,136
1501.07399
Particle swarm optimization for time series motif discovery
Efficiently finding similar segments or motifs in time series data is a fundamental task that, due to the ubiquity of these data, is present in a wide range of domains and situations. Because of this, countless solutions have been devised but, to date, none of them seems to be fully satisfactory and flexible. In this article, we propose an innovative standpoint and present a solution coming from it: an anytime multimodal optimization algorithm for time series motif discovery based on particle swarms. By considering data from a variety of domains, we show that this solution is extremely competitive when compared to the state-of-the-art, obtaining comparable motifs in considerably less time using minimal memory. In addition, we show that it is robust to different implementation choices and see that it offers an unprecedented degree of flexibility with regard to the task. All these qualities make the presented solution stand out as one of the most prominent candidates for motif discovery in long time series streams. Besides, we believe the proposed standpoint can be exploited in further time series analysis and mining tasks, widening the scope of research and potentially yielding novel effective solutions.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
39,703
2211.00526
Leveraging Graph-based Cross-modal Information Fusion for Neural Sign Language Translation
Sign Language (SL), as the mother tongue of the deaf community, is a special visual language that most hearing people cannot understand. In recent years, neural Sign Language Translation (SLT), as a possible way for bridging communication gap between the deaf and the hearing people, has attracted widespread academic attention. We found that the current mainstream end-to-end neural SLT models, which try to learn language knowledge in a weakly supervised manner, could not mine enough semantic information under the condition of low data resources. Therefore, we propose to introduce additional word-level semantic knowledge of sign language linguistics to assist in improving current end-to-end neural SLT models. Concretely, we propose a novel neural SLT model with multi-modal feature fusion based on the dynamic graph, in which the cross-modal information, i.e. text and video, is first assembled as a dynamic graph according to their correlation, and then the graph is processed by a multi-modal graph encoder to generate the multi-modal embeddings for further usage in the subsequent neural translation models. To the best of our knowledge, we are the first to introduce graph neural networks, for fusing multi-modal information, into neural sign language translation models. Moreover, we conducted experiments on a publicly available popular SLT dataset RWTH-PHOENIX-Weather-2014T, and the quantitative experiments show that our method can improve the model.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
327,901
1012.3023
Generating constrained random graphs using multiple edge switches
The generation of random graphs using edge swaps provides a reliable method to draw uniformly random samples of sets of graphs respecting some simple constraints, e.g. degree distributions. However, in general, it is not necessarily possible to access all graphs obeying some given constraints through a classical switching procedure calling on pairs of edges. We therefore propose to get round this issue by generalizing this classical approach through the use of higher-order edge switches. This method, which we denote by "k-edge switching", makes it possible to progressively improve the covered portion of a set of constrained graphs, thereby providing an increasing, asymptotically certain confidence on the statistical representativeness of the obtained sample.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
8,534
2410.01444
Geometric Signatures of Compositionality Across a Language Model's Lifetime
By virtue of linguistic compositionality, few syntactic rules and a finite lexicon can generate an unbounded number of sentences. That is, language, though seemingly high-dimensional, can be explained using relatively few degrees of freedom. An open question is whether contemporary language models (LMs) reflect the intrinsic simplicity of language that is enabled by compositionality. We take a geometric view of this problem by relating the degree of compositionality in a dataset to the intrinsic dimension (ID) of its representations under an LM, a measure of feature complexity. We find not only that the degree of dataset compositionality is reflected in representations' ID, but that the relationship between compositionality and geometric complexity arises due to learned linguistic features over training. Finally, our analyses reveal a striking contrast between nonlinear and linear dimensionality, showing they respectively encode semantic and superficial aspects of linguistic composition.
false
false
false
false
true
false
true
false
true
true
false
false
false
false
false
false
false
false
493,777
2210.11467
Multi-View Guided Multi-View Stereo
This paper introduces a novel deep framework for dense 3D reconstruction from multiple image frames, leveraging a sparse set of depth measurements gathered jointly with image acquisition. Given a deep multi-view stereo network, our framework uses sparse depth hints to guide the neural network by modulating the plane-sweep cost volume built during the forward step, enabling us to infer constantly much more accurate depth maps. Moreover, since multiple viewpoints can provide additional depth measurements, we propose a multi-view guidance strategy that increases the density of the sparse points used to guide the network, thus leading to even more accurate results. We evaluate our Multi-View Guided framework within a variety of state-of-the-art deep multi-view stereo networks, demonstrating its effectiveness at improving the results achieved by each of them on BlendedMVG and DTU datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
325,322
2311.13016
Image-Based Soil Organic Carbon Remote Sensing from Satellite Images with Fourier Neural Operator and Structural Similarity
Soil organic carbon (SOC) sequestration is the transfer and storage of atmospheric carbon dioxide in soils, which plays an important role in climate change mitigation. SOC concentration can be improved by proper land use, thus it is beneficial if SOC can be estimated at a regional or global scale. As multispectral satellite data can provide SOC-related information such as vegetation and soil properties at a global scale, estimation of SOC through satellite data has been explored as an alternative to manual soil sampling. Although existing studies show promising results, they are mainly based on pixel-based approaches with traditional machine learning methods, and convolutional neural networks (CNNs) are uncommon. To study the use of CNNs on SOC remote sensing, here we propose the FNO-DenseNet based on the Fourier neural operator (FNO). By combining the advantages of the FNO and DenseNet, the FNO-DenseNet outperformed the FNO in our experiments with hundreds of times fewer parameters. The FNO-DenseNet also outperformed a pixel-based random forest by 18% in the mean absolute percentage error.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,588
1705.06477
Protecting Against Untrusted Relays: An Information Self-encrypted Approach
The reliability and transmission distance are generally limited for the wireless communications due to the severe channel fading. As an effective way to resist the channel fading, cooperative relaying is usually adopted in wireless networks where neighbouring nodes act as relays to help the transmission between the source and the destination. Most research works simply regard these cooperative nodes trustworthy, which may be not practical in some cases especially when transmitting confidential information. In this paper, we consider the issue of untrusted relays in cooperative communications and propose an information self-encrypted approach to protect against these relays. Specifically, the original packets of the information are used to encrypt each other as the secret keys such that the information cannot be recovered before all of the encrypted packets have been received. The information is intercepted only when the relays obtain all of these encrypted packets. It is proved that the intercept probability is reduced to zero exponentially with the number of the original packets. However, the security performance is still not satisfactory for a large number of relays. Therefore, the combination of destination-based jamming is further adopted to confuse the relays, which makes the security performance acceptable even for a large number of relays. Finally, the simulation results are provided to confirm the theoretical analysis and the superiority of the proposed scheme.
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
73,641
1707.01341
A Modern Approach to Integrate Database Queries for Searching E-Commerce Product
E commerce refers to the utilization of electronic data transmission for enhancing business processes and implementing business strategies. Explicit components of e commerce include providing after sales services, promoting services or products, engaging in transaction processes, identifying customer needs, processing payment and creating services or products. In recent times, the use of e commerce has become too common among the people. However, the growing demand of e commerce sites have made essential for the databases to support direct querying of the Web page. This research aims to explore and evaluate the integration of database queries and their uses in searching of electronic commerce products. It has been analyzed that e commerce is one of the most outstanding trends, which have been emerged in the commerce world, for the last decades. Therefore, this study was undertaken to examine the benefits of integrating database queries with e commerce product searches. The findings of this study suggested that database queries are extremely valuable for e commerce sites as they make product searches simpler and accurate. In this context, the approach of integrating database queries is found to be the most suitable and satisfactory, as it simplifies the searching of e commerce products.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
76,520
2406.17563
Multi-property Steering of Large Language Models with Dynamic Activation Composition
Activation steering methods were shown to be effective in conditioning language model generation by additively intervening over models' intermediate representations. However, the evaluation of these techniques has so far been limited to single conditioning properties and synthetic settings. In this work, we conduct a comprehensive evaluation of various activation steering strategies, highlighting the property-dependent nature of optimal parameters to ensure a robust effect throughout generation. To address this issue, we propose Dynamic Activation Composition, an information-theoretic approach to modulate the steering intensity of one or more properties throughout generation. Our experiments on multi-property steering show that our method successfully maintains high conditioning while minimizing the impact of conditioning on generation fluency.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
467,623
2002.09797
Reliable Fidelity and Diversity Metrics for Generative Models
Devising indicative evaluation metrics for the image generation task remains an open problem. The most widely used metric for measuring the similarity between real and generated images has been the Fr\'echet Inception Distance (FID) score. Because it does not differentiate the fidelity and diversity aspects of the generated images, recent papers have introduced variants of precision and recall metrics to diagnose those properties separately. In this paper, we show that even the latest version of the precision and recall metrics are not reliable yet. For example, they fail to detect the match between two identical distributions, they are not robust against outliers, and the evaluation hyperparameters are selected arbitrarily. We propose density and coverage metrics that solve the above issues. We analytically and experimentally show that density and coverage provide more interpretable and reliable signals for practitioners than the existing metrics. Code: https://github.com/clovaai/generative-evaluation-prdc.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
165,183
2407.15811
Stretching Each Dollar: Diffusion Training from Scratch on a Micro-Budget
As scaling laws in generative AI push performance, they also simultaneously concentrate the development of these models among actors with large computational resources. With a focus on text-to-image (T2I) generative models, we aim to address this bottleneck by demonstrating very low-cost training of large-scale T2I diffusion transformer models. As the computational cost of transformers increases with the number of patches in each image, we propose to randomly mask up to 75% of the image patches during training. We propose a deferred masking strategy that preprocesses all patches using a patch-mixer before masking, thus significantly reducing the performance degradation with masking, making it superior to model downscaling in reducing computational cost. We also incorporate the latest improvements in transformer architecture, such as the use of mixture-of-experts layers, to improve performance and further identify the critical benefit of using synthetic images in micro-budget training. Finally, using only 37M publicly available real and synthetic images, we train a 1.16 billion parameter sparse transformer with only \$1,890 economical cost and achieve a 12.7 FID in zero-shot generation on the COCO dataset. Notably, our model achieves competitive FID and high-quality generations while incurring 118$\times$ lower cost than stable diffusion models and 14$\times$ lower cost than the current state-of-the-art approach that costs \$28,400. We aim to release our end-to-end training pipeline to further democratize the training of large-scale diffusion models on micro-budgets.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
475,343
2404.10942
What Hides behind Unfairness? Exploring Dynamics Fairness in Reinforcement Learning
In sequential decision-making problems involving sensitive attributes like race and gender, reinforcement learning (RL) agents must carefully consider long-term fairness while maximizing returns. Recent works have proposed many different types of fairness notions, but how unfairness arises in RL problems remains unclear. In this paper, we address this gap in the literature by investigating the sources of inequality through a causal lens. We first analyse the causal relationships governing the data generation process and decompose the effect of sensitive attributes on long-term well-being into distinct components. We then introduce a novel notion called dynamics fairness, which explicitly captures the inequality stemming from environmental dynamics, distinguishing it from those induced by decision-making or inherited from the past. This notion requires evaluating the expected changes in the next state and the reward induced by changing the value of the sensitive attribute while holding everything else constant. To quantitatively evaluate this counterfactual concept, we derive identification formulas that allow us to obtain reliable estimations from data. Extensive experiments demonstrate the effectiveness of the proposed techniques in explaining, detecting, and reducing inequality in reinforcement learning. We publicly release code at https://github.com/familyld/InsightFair.
false
false
false
false
true
false
true
false
false
false
false
false
false
true
false
false
false
false
447,315
2404.12560
Dubo-SQL: Diverse Retrieval-Augmented Generation and Fine Tuning for Text-to-SQL
The current state-of-the-art (SOTA) for automated text-to-SQL still falls well short of expert human performance as measured by execution accuracy (EX) on the BIRD-SQL benchmark. The most accurate methods are also slow and expensive. To advance the SOTA for text-to-SQL while reducing cost and improving speed, we explore the combination of low-cost fine tuning, novel methods for diverse retrieval-augmented generation (RAG) and new input and output formats that help large language models (LLMs) achieve higher EX. We introduce two new methods, Dubo-SQL v1 and v2. Dubo-SQL v1 sets a new record for EX on the holdout test set of BIRD-SQL. Dubo-SQL v2 achieves even higher performance on the BIRD-SQL dev set. Dubo-SQL v1 relies on LLMs from OpenAI, but uses the low-cost GPT-3.5 Turbo while exceeding the performance of the next-best model using OpenAI, which instead uses the more expensive GPT-4. Dubo-SQL v1 exceeds the performance of the next-best model using GPT-3.5 by over 20%. Dubo-SQL v2 uses GPT-4 Turbo and RAG in place of fine tuning to push EX higher.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
447,935
2410.01978
LLM+KG@VLDB'24 Workshop Summary
The unification of large language models (LLMs) and knowledge graphs (KGs) has emerged as a hot topic. At the LLM+KG'24 workshop, held in conjunction with VLDB 2024 in Guangzhou, China, one of the key themes explored was important data management challenges and opportunities due to the effective interaction between LLMs and KGs. This report outlines the major directions and approaches presented by various speakers during the LLM+KG'24 workshop.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
true
false
494,042
2107.07871
Finite Basis Physics-Informed Neural Networks (FBPINNs): a scalable domain decomposition approach for solving differential equations
Recently, physics-informed neural networks (PINNs) have offered a powerful new paradigm for solving problems relating to differential equations. Compared to classical numerical methods PINNs have several advantages, for example their ability to provide mesh-free solutions of differential equations and their ability to carry out forward and inverse modelling within the same optimisation problem. Whilst promising, a key limitation to date is that PINNs have struggled to accurately and efficiently solve problems with large domains and/or multi-scale solutions, which is crucial for their real-world application. Multiple significant and related factors contribute to this issue, including the increasing complexity of the underlying PINN optimisation problem as the problem size grows and the spectral bias of neural networks. In this work we propose a new, scalable approach for solving large problems relating to differential equations called Finite Basis PINNs (FBPINNs). FBPINNs are inspired by classical finite element methods, where the solution of the differential equation is expressed as the sum of a finite set of basis functions with compact support. In FBPINNs neural networks are used to learn these basis functions, which are defined over small, overlapping subdomains. FBPINNs are designed to address the spectral bias of neural networks by using separate input normalisation over each subdomain, and reduce the complexity of the underlying optimisation problem by using many smaller neural networks in a parallel divide-and-conquer approach. Our numerical experiments show that FBPINNs are effective in solving both small and larger, multi-scale problems, outperforming standard PINNs in both accuracy and computational resources required, potentially paving the way to the application of PINNs on large, real-world problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
246,554
1009.5750
Use of multiple singular value decompositions to analyze complex intracellular calcium ion signals
We compare calcium ion signaling ($\mathrm{Ca}^{2+}$) between two exposures; the data are present as movies, or, more prosaically, time series of images. This paper describes novel uses of singular value decompositions (SVD) and weighted versions of them (WSVD) to extract the signals from such movies, in a way that is semi-automatic and tuned closely to the actual data and their many complexities. These complexities include the following. First, the images themselves are of no interest: all interest focuses on the behavior of individual cells across time, and thus, the cells need to be segmented in an automated manner. Second, the cells themselves have 100$+$ pixels, so that they form 100$+$ curves measured over time, so that data compression is required to extract the features of these curves. Third, some of the pixels in some of the cells are subject to image saturation due to bit depth limits, and this saturation needs to be accounted for if one is to normalize the images in a reasonably unbiased manner. Finally, the $\mathrm{Ca}^{2+}$ signals have oscillations or waves that vary with time and these signals need to be extracted. Thus, our aim is to show how to use multiple weighted and standard singular value decompositions to detect, extract and clarify the $\mathrm{Ca}^{2+}$ signals. Our signal extraction methods then lead to simple although finely focused statistical methods to compare $\mathrm{Ca}^{2+}$ signals across experimental conditions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
7,705
1712.07887
Multiagent-based Participatory Urban Simulation through Inverse Reinforcement Learning
The multiagent-based participatory simulation features prominently in urban planning, as the acquired model is considered a hybrid system of domain and local knowledge. However, the key problem of generating realistic agents for particular social phenomena invariably remains. Existing models have attempted to dictate the factors involved in human behavior, which has proved intractable. In this paper, Inverse Reinforcement Learning (IRL) is introduced to address this problem. IRL is developed for the computational modeling of human behavior and has achieved great successes in robotics, psychology and machine learning. The possibilities presented by this new style of modeling are drawn out as conclusions, and the relative challenges of this modeling are highlighted.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
87,117
2408.11470
A Thorough Comparison Between Independent Cascade and Susceptible-Infected-Recovered Models
We study cascades in social networks with the independent cascade (IC) model and the Susceptible-Infected-Recovered (SIR) model. The well-studied IC model fails to capture the feature of node recovery, and the SIR model is a variant of the IC model with the node recovery feature. In the SIR model, by computing the probability that a node successfully infects another before its recovery and viewing this probability as the corresponding IC parameter, the SIR model becomes an "out-going-edge-correlated" version of the IC model: the events of the infections along different out-going edges of a node become dependent in the SIR model, whereas these events are independent in the IC model. In this paper, we thoroughly compare the two models and examine the effect of this extra dependency in the SIR model. By a carefully designed coupling argument, we show that the seeds in the IC model have a stronger influence spread than their counterparts in the SIR model, and sometimes it can be significantly stronger. Specifically, we prove that, given the same network, the same seed sets, and the parameters of the two models being set based on the above-mentioned equivalence, the expected number of infected nodes at the end of the cascade for the IC model is weakly larger than that for the SIR model, and there are instances where this dominance is significant. We also study the influence maximization problem with the SIR model. We show that the above-mentioned difference in the two models yields different seed-selection strategies, which motivates the design of influence maximization algorithms specifically for the SIR model. We design efficient approximation algorithms with theoretical guarantees by adapting the reverse-reachable-set-based algorithms, commonly used for the IC model, to the SIR model.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
482,310
2402.18434
Graph Regularized Encoder Training for Extreme Classification
Deep extreme classification (XC) aims to train an encoder architecture and an accompanying classifier architecture to tag a data point with the most relevant subset of labels from a very large universe of labels. XC applications in ranking, recommendation and tagging routinely encounter tail labels for which the amount of training data is exceedingly small. Graph convolutional networks (GCN) present a convenient but computationally expensive way to leverage task metadata and enhance model accuracies in these settings. This paper formally establishes that in several use cases, the steep computational cost of GCNs is entirely avoidable by replacing GCNs with non-GCN architectures. The paper notices that in these settings, it is much more effective to use graph data to regularize encoder training than to implement a GCN. Based on these insights, an alternative paradigm RAMEN is presented to utilize graph metadata in XC settings that offers significant performance boosts with zero increase in inference computational costs. RAMEN scales to datasets with up to 1M labels and offers prediction accuracy up to 15% higher on benchmark datasets than state-of-the-art methods, including those that use graph metadata to train GCNs. RAMEN also offers 10% higher accuracy over the best baseline on a proprietary recommendation dataset sourced from click logs of a popular search engine. Code for RAMEN will be released publicly.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
433,431
2406.04549
Concurrent Training and Layer Pruning of Deep Neural Networks
We propose an algorithm capable of identifying and eliminating irrelevant layers of a neural network during the early stages of training. In contrast to weight or filter-level pruning, layer pruning reduces the harder to parallelize sequential computation of a neural network. We employ a structure using residual connections around nonlinear network sections that allow the flow of information through the network once a nonlinear section is pruned. Our approach is based on variational inference principles using Gaussian scale mixture priors on the neural network weights and allows for substantial cost savings during both training and inference. More specifically, the variational posterior distribution of scalar Bernoulli random variables multiplying a layer weight matrix of its nonlinear sections is learned, similarly to adaptive layer-wise dropout. To overcome challenges of concurrent learning and pruning such as premature pruning and lack of robustness with respect to weight initialization or the size of the starting network, we adopt the "flattening" hyper-prior on the prior parameters. We prove that, as a result of its usage, the solutions of the resulting optimization problem describe deterministic networks with parameters of the posterior distribution at either 0 or 1. We formulate a projected SGD algorithm and prove its convergence to such a solution using stochastic approximation results. In particular, we prove conditions that lead to a layer's weights converging to zero and derive practical pruning conditions from the theoretical results. The proposed algorithm is evaluated on the MNIST, CIFAR-10 and ImageNet datasets and common LeNet, VGG16 and ResNet architectures. The simulations demonstrate that our method achieves state-of-the-art performance for layer pruning at reduced computational cost compared to competing methods, due to the concurrent training and pruning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
461,727
1301.2840
Unsupervised Feature Learning for low-level Local Image Descriptors
Unsupervised feature learning has shown impressive results for a wide range of input modalities, in particular for object classification tasks in computer vision. Using a large amount of unlabeled data, unsupervised feature learning methods are utilized to construct high-level representations that are discriminative enough for subsequently trained supervised classification algorithms. However, it has never been \emph{quantitatively} investigated yet how well unsupervised learning methods can find \emph{low-level representations} for image patches without any additional supervision. In this paper we examine the performance of pure unsupervised methods on a low-level correspondence task, a problem that is central to many Computer Vision applications. We find that a special type of Restricted Boltzmann Machines (RBMs) performs comparably to hand-crafted descriptors. Additionally, a simple binarization scheme produces compact representations that perform better than several state-of-the-art descriptors.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
21,046
2403.18762
ModaLink: Unifying Modalities for Efficient Image-to-PointCloud Place Recognition
Place recognition is an important task for robots and autonomous cars to localize themselves and close loops in pre-built maps. While single-modal sensor-based methods have shown satisfactory performance, cross-modal place recognition, which retrieves images from a point-cloud database, remains a challenging problem. Current cross-modal methods transform images into 3D points using depth estimation for modality conversion, which is usually computationally intensive and needs expensive labeled data for depth supervision. In this work, we introduce a fast and lightweight framework to encode images and point clouds into place-distinctive descriptors. We propose an effective Field of View (FoV) transformation module to convert point clouds into an analogous modality as images. This module eliminates the necessity for depth estimation and helps subsequent modules achieve real-time performance. We further design a non-negative factorization-based encoder to extract mutually consistent semantic features between point clouds and images. This encoder yields more distinctive global descriptors for retrieval. Experimental results on the KITTI dataset show that our proposed methods achieve state-of-the-art performance while running in real time. Additional evaluation on the HAOMO dataset covering a 17 km trajectory further shows the practical generalization capabilities. We have released the implementation of our methods as open source at: https://github.com/haomo-ai/ModaLink.git.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
442,067
2206.05108
Deep Multi-Agent Reinforcement Learning with Hybrid Action Spaces based on Maximum Entropy
Multi-agent deep reinforcement learning has been applied to address a variety of complex problems with either discrete or continuous action spaces and achieved great success. However, most real-world environments cannot be described by only discrete action spaces or only continuous action spaces, and few works have utilized deep reinforcement learning (DRL) for multi-agent problems with hybrid action spaces. Therefore, we propose a novel algorithm, Deep Multi-Agent Hybrid Soft Actor-Critic (MAHSAC), to fill this gap. This algorithm follows the centralized training but decentralized execution (CTDE) paradigm, and extends the Soft Actor-Critic algorithm (SAC) to handle hybrid action space problems in multi-agent environments based on maximum entropy. Our experiments run on a simple multi-agent particle world with a continuous observation and discrete action space, along with some basic simulated physics. The experimental results show that MAHSAC has good performance in training speed, stability, and anti-interference ability. At the same time, it outperforms the existing independent deep hybrid learning method in both cooperative and competitive scenarios.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
301,894
2102.05847
ZeroScatter: Domain Transfer for Long Distance Imaging and Vision through Scattering Media
Adverse weather conditions, including snow, rain, and fog, pose a major challenge for both human and computer vision. Handling these environmental conditions is essential for safe decision making, especially in autonomous vehicles, robotics, and drones. Most of today's supervised imaging and vision approaches, however, rely on training data collected in the real world that is biased towards good weather conditions, with dense fog, snow, and heavy rain as outliers in these datasets. Without training data, let alone paired data, existing autonomous vehicles often limit themselves to good conditions and stop when dense fog or snow is detected. In this work, we tackle the lack of supervised training data by combining synthetic and indirect supervision. We present ZeroScatter, a domain transfer method for converting RGB-only captures taken in adverse weather into clear daytime scenes. ZeroScatter exploits model-based, temporal, multi-view, multi-modal, and adversarial cues in a joint fashion, allowing us to train on unpaired, biased data. We assess the proposed method on in-the-wild captures, and the proposed method outperforms existing monocular descattering approaches by 2.8 dB PSNR on controlled fog chamber measurements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
219,562
1808.05140
A Framework for Automated Cellular Network Tuning with Reinforcement Learning
Tuning cellular network performance against always occurring wireless impairments can dramatically improve reliability to end users. In this paper, we formulate cellular network performance tuning as a reinforcement learning (RL) problem and provide a solution to improve the performance for indoor and outdoor environments. By leveraging the ability of Q-learning to estimate future performance improvement rewards, we propose two algorithms: (1) closed loop power control (PC) for downlink voice over LTE (VoLTE) and (2) self-organizing network (SON) fault management. The VoLTE PC algorithm uses RL to adjust the indoor base station transmit power so that the signal to interference plus noise ratio (SINR) of a user equipment (UE) meets the target SINR. It does so without the UE having to send power control requests. The SON fault management algorithm uses RL to improve the performance of an outdoor base station cluster by resolving faults in the network through configuration management. Both algorithms exploit measurements from the connected users, wireless impairments, and relevant configuration parameters to solve a non-convex performance optimization problem using RL. Simulation results show that our proposed RL based algorithms outperform the industry standards today in realistic cellular communication environments.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
105,300
2007.14730
Amplify-and-Forward Relaying for Hierarchical Over-the-Air Computation
This paper studies a hierarchical over-the-air computation (AirComp) network over a large area, in which multiple relays are exploited to facilitate data aggregation from massive wireless devices (WDs). We present a two-phase amplify-and-forward (AF) relaying protocol. In the first phase, the WDs simultaneously send their data to the relays, while in the second phase, the relays amplify the respectively received signals and concurrently forward them to the fusion center (FC) for aggregation. Our objective is to minimize the computational mean squared error (MSE) at the FC, by jointly optimizing the WD transmit coefficients, the relay AF coefficients, and the FC de-noising factor, subject to their individual transmit power constraints. First, we consider the centralized design with global channel state information (CSI), in which the inter-relay signals can be exploited beneficially for data aggregation. In this case, we develop an alternating-optimization-based algorithm to obtain a high-quality solution to the computational MSE minimization problem. Next, to reduce the signaling overhead caused by the centralized design, we consider an alternative decentralized design with partial CSI, in which the relays and the FC make their own decisions by only requiring the channel power gain information across different relays. In this case, the relays and FC need to treat the inter-relay signals as harmful interference or noise. Accordingly, we optimize the transmit coefficients of the WDs associated with each relay, and the relay AF coefficients (together with the FC de-noising factor) in an iterative manner, which can be implemented efficiently in a decentralized way.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
189,481
2207.09725
OTPose: Occlusion-Aware Transformer for Pose Estimation in Sparsely-Labeled Videos
Although many approaches for multi-human pose estimation in videos have shown profound results, they require densely annotated data, which entails excessive manual labor. Furthermore, occlusion and motion blur inevitably lead to poor estimation performance. To address these problems, we propose a method that leverages an attention mask for occluded joints and encodes temporal dependency between frames using transformers. First, our framework composes different combinations of sparsely annotated frames that denote the track of the overall joint movement. We propose an occlusion attention mask from these combinations that enables encoding occlusion-aware heatmaps as a semi-supervised task. Second, the proposed temporal encoder employs transformer architecture to effectively aggregate the temporal relationship and keypoint-wise attention from each time step and accurately refines the target frame's final pose estimation. We achieve state-of-the-art pose estimation results for the PoseTrack2017 and PoseTrack2018 datasets and demonstrate the robustness of our approach to occlusion and motion blur in sparsely annotated video data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
309,005
2009.09259
Bid Shading by Win-Rate Estimation and Surplus Maximization
This paper describes a new win-rate based bid shading algorithm (WR) that does not rely on the minimum-bid-to-win feedback from a Sell-Side Platform (SSP). The method uses a modified logistic regression to predict the profit from each possible shaded bid price. The function form allows fast maximization at run-time, a key requirement for Real-Time Bidding (RTB) systems. We report production results from this method along with several other algorithms. We found that bid shading, in general, can deliver significant value to advertisers, reducing price per impression to about 55% of the unshaded cost. Further, the particular approach described in this paper captures 7% more profit for advertisers, than do benchmark methods of just bidding the most probable winning price. We also report 4.3% higher surplus than an industry Sell-Side Platform shading service. Furthermore, we observed 3% - 7% lower eCPM, eCPC and eCPA when the algorithm was integrated with budget controllers. We attribute the gains above as being mainly due to the explicit maximization of the surplus function, and note that other algorithms can take advantage of this same approach.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
true
196,516
2410.14913
ReeFRAME: Reeb Graph based Trajectory Analysis Framework to Capture Top-Down and Bottom-Up Patterns of Life
In this paper, we present ReeFRAME, a scalable Reeb graph-based framework designed to analyze vast volumes of GPS-enabled human trajectory data generated at 1Hz frequency. ReeFRAME models Patterns-of-life (PoL) at both the population and individual levels, utilizing Multi-Agent Reeb Graphs (MARGs) for population-level patterns and Temporal Reeb Graphs (TERGs) for individual trajectories. The framework's linear algorithmic complexity relative to the number of time points ensures scalability for anomaly detection. We validate ReeFRAME on six large-scale anomaly detection datasets, simulating real-time patterns with up to 500,000 agents over two months.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,261
2407.12842
MS2SL: Multimodal Spoken Data-Driven Continuous Sign Language Production
Sign language understanding has made significant strides; however, there is still no viable solution for generating sign sequences directly from entire spoken content, e.g., text or speech. In this paper, we propose a unified framework for continuous sign language production, easing communication between sign and non-sign language users. In particular, a sequence diffusion model, utilizing embeddings extracted from text or speech, is crafted to generate sign predictions step by step. Moreover, by creating a joint embedding space for text, audio, and sign, we bind these modalities and leverage the semantic consistency among them to provide informative feedback for the model training. This embedding-consistency learning strategy minimizes the reliance on sign triplets and ensures continuous model refinement, even with a missing audio modality. Experiments on How2Sign and PHOENIX14T datasets demonstrate that our model achieves competitive performance in sign language production.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
474,115
2204.05103
Transformer-Based Self-Supervised Learning for Emotion Recognition
In order to exploit representations of time-series signals, such as physiological signals, it is essential that these representations capture relevant information from the whole signal. In this work, we propose to use a Transformer-based model to process electrocardiograms (ECG) for emotion recognition. Attention mechanisms of the Transformer can be used to build contextualized representations for a signal, giving more importance to relevant parts. These representations may then be processed with a fully-connected network to predict emotions. To overcome the relatively small size of datasets with emotional labels, we employ self-supervised learning. We gathered several ECG datasets with no labels of emotion to pre-train our model, which we then fine-tuned for emotion recognition on the AMIGOS dataset. We show that our approach reaches state-of-the-art performances for emotion recognition using ECG signals on AMIGOS. More generally, our experiments show that transformers and pre-training are promising strategies for emotion recognition with physiological signals.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
290,910
2102.06741
Discovery of Options via Meta-Learned Subgoals
Temporal abstractions in the form of options have been shown to help reinforcement learning (RL) agents learn faster. However, despite prior work on this topic, the problem of discovering options through interaction with an environment remains a challenge. In this paper, we introduce a novel meta-gradient approach for discovering useful options in multi-task RL environments. Our approach is based on a manager-worker decomposition of the RL agent, in which a manager maximises rewards from the environment by learning a task-dependent policy over both a set of task-independent discovered-options and primitive actions. The option-reward and termination functions that define a subgoal for each option are parameterised as neural networks and trained via meta-gradients to maximise their usefulness. Empirical analysis on gridworld and DeepMind Lab tasks show that: (1) our approach can discover meaningful and diverse temporally-extended options in multi-task RL domains, (2) the discovered options are frequently used by the agent while learning to solve the training tasks, and (3) that the discovered options help a randomly initialised manager learn faster in completely new tasks.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
219,853
2009.08368
A new front-tracking Lagrangian model for the modeling of dynamic and post-dynamic recrystallization
A new method for the simulation of evolving multi-domains problems has been introduced in previous works (RealIMotion), Florez et al. (2020), and further developed in parallel in the context of isotropic Grain Growth (GG) with no consideration for the effects of the Stored Energy (SE) due to dislocations. The methodology consists in a new front-tracking approach where one original aspect is that not only are the interfaces between grains discretized but their bulks are also meshed, and topological changes of the domains are driven by selective local remeshing operations performed on the Finite Element (FE) mesh. In this article, further developments and studies of the model will be presented, mainly on the development of a model taking into account grain boundary migration (GBM) driven by SE. Further developments for the nucleation of new grains will be presented, allowing to model Dynamic Recrystallization (DRX) and Post-Dynamic Recrystallization (PDRX) phenomena. The accuracy and the performance of the numerical algorithms have been proven to be very promising in Florez et al. (2020). Here the results for multiple test cases will be given in order to validate the accuracy of the model taking into account GG and SE. The computational performance will be evaluated for the DRX and PDRX mechanisms and compared to a classical FE framework using a Level-Set (LS) formulation.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
196,218
2311.17836
On Scaling Robust Feedback Control and State Estimation Problems in Power Networks
Many mainstream robust control/estimation algorithms for power networks are designed using the Lyapunov theory as it provides performance guarantees for linear/nonlinear models of uncertain power networks but comes at the expense of scalability and sensitivity. In particular, Lyapunov-based approaches rely on forming semi-definite programs (SDPs) that are (i) not scalable and (ii) extremely sensitive to the choice of the bounding scalar that ensures the strict feasibility of the linear matrix inequalities (LMIs). This paper addresses these two issues by employing a celebrated non-Lyapunov approach (NLA) from the control theory literature. In lieu of linearized models of power grids, we focus on (the more representative) nonlinear differential algebraic equation (DAE) models and showcase the simplicity, scalability, and parameter-resiliency of NLA. For some power systems, the approach is nearly fifty times faster than solving SDPs via standard solvers with almost no impact on the performance. The case studies also demonstrate that NLA can be applied to more realistic scenarios in which (i) only partial state data is available and (ii) sparsity structures are imposed on the feedback gain. The paper also showcases that virtually no degradation in state estimation quality is experienced when applying NLA.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
411,431
2411.16797
Enhancing Answer Reliability Through Inter-Model Consensus of Large Language Models
We explore the collaborative dynamics of an innovative language model interaction system involving advanced models such as GPT-4-0125-preview, Meta-LLaMA-3-70B-Instruct, Claude-3-Opus, and Gemini-1.5-Flash. These models generate and answer complex, PhD-level statistical questions without exact ground-truth answers. Our study investigates how inter-model consensus enhances the reliability and precision of responses. By employing statistical methods such as chi-square tests, Fleiss' Kappa, and confidence interval analysis, we evaluate consensus rates and inter-rater agreement to quantify the reliability of collaborative outputs. Key results reveal that Claude and GPT-4 exhibit the highest reliability and consistency, as evidenced by their narrower confidence intervals and higher alignment with question-generating models. Conversely, Gemini and LLaMA show more significant variability in their consensus rates, as reflected in wider confidence intervals and lower reliability percentages. These findings demonstrate that collaborative interactions among large language models (LLMs) significantly improve response reliability, offering novel insights into autonomous, cooperative reasoning and validation in AI systems.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
511,190
2012.09842
$\mathbb{X}$Resolution Correspondence Networks
In this paper, we aim at establishing accurate dense correspondences between a pair of images with overlapping fields of view under challenging illumination variation, viewpoint changes, and style differences. Through an extensive ablation study of state-of-the-art correspondence networks, we surprisingly discovered that the widely adopted 4D correlation tensor and its related learning and processing modules can be de-parameterised and removed from training with merely a minor impact on the final matching accuracy. Disabling these computationally expensive modules dramatically speeds up the training procedure and allows the use of a 4 times bigger batch size, which in turn compensates for the accuracy drop. Together with a multi-GPU inference stage, our method facilitates the systematic investigation of the relationship between matching accuracy and the up-sampling resolution of the native testing images from 1280 to 4K. This leads to the discovery of the existence of an optimal resolution $\mathbb{X}$ that produces accurate matching performance surpassing the state-of-the-art methods, particularly over the lower error band, on public benchmarks for the proposed network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
212,188
2111.10330
Data-driven verification and synthesis of stochastic systems via barrier certificates
In this work, we study verification and synthesis problems for safety specifications over unknown discrete-time stochastic systems. When a model of the system is available, barrier certificates have been successfully applied for ensuring the satisfaction of safety specifications. In this work, we formulate the computation of barrier certificates as a robust convex program (RCP). Solving the acquired RCP is hard in general because the model of the system that appears in one of the constraints of the RCP is unknown. We propose a data-driven approach that replaces the uncountable number of constraints in the RCP with a finite number of constraints by taking finitely many random samples from the trajectories of the system. We thus replace the original RCP with a scenario convex program (SCP) and show how to relate their optimizers. We guarantee that the solution of the SCP is a solution of the RCP with a priori guaranteed confidence when the number of samples is larger than a pre-computed value. This provides a lower bound on the safety probability of the original unknown system together with a controller in the case of synthesis. We also discuss an extension of our verification approach to a case where the associated robust program is non-convex and show how a similar methodology can be applied. Finally, the applicability of our proposed approach is illustrated through three case studies.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
267,286
2205.12722
Mathematical Models of Human Drivers Using Artificial Risk Fields
In this paper, we use the concept of artificial risk fields to predict how human operators control a vehicle in response to upcoming road situations. A risk field assigns a non-negative risk measure to the state of the system in order to model how close that state is to violating a safety property, such as hitting an obstacle or exiting the road. Using risk fields, we construct a stochastic model of the operator that maps from states to likely actions. We demonstrate our approach on a driving task wherein human subjects are asked to drive a car inside a realistic driving simulator while avoiding obstacles placed on the road. We show that the most likely risk field given the driving data is obtained by solving a convex optimization problem. Next, we apply the inferred risk fields to generate distinct driving behaviors while comparing predicted trajectories against ground truth measurements. We observe that the risk fields are excellent at predicting future trajectory distributions, with high prediction accuracy for prediction horizons of up to twenty seconds. At the same time, we observe some challenges, such as the inability to account for how drivers choose to accelerate/decelerate based on the road conditions.
true
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
298,687
2405.09959
Patient-Specific Real-Time Segmentation in Trackerless Brain Ultrasound
Intraoperative ultrasound (iUS) imaging has the potential to improve surgical outcomes in brain surgery. However, its interpretation is challenging, even for expert neurosurgeons. In this work, we designed the first patient-specific framework that performs brain tumor segmentation in trackerless iUS. To disambiguate ultrasound imaging and adapt to the neurosurgeon's surgical objective, a patient-specific real-time network is trained using synthetic ultrasound data generated by simulating virtual iUS sweep acquisitions in pre-operative MR data. Extensive experiments performed in real ultrasound data demonstrate the effectiveness of the proposed approach, allowing for adapting to the surgeon's definition of surgical targets and outperforming non-patient-specific models, neurosurgeon experts, and high-end tracking systems. Our code is available at: \url{https://github.com/ReubenDo/MHVAE-Seg}.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
454,599
2203.04358
ARcall: Real-Time AR Communication using Smartphones and Smartglasses
Augmented Reality (AR) smartglasses are increasingly regarded as the next generation personal computing platform. However, there is a lack of understanding about how to design communication systems using them. We present ARcall, a novel Augmented Reality-based real-time communication system that enables an immersive, delightful, and privacy-preserving experience between a smartphone user and a smartglasses wearer. ARcall allows a remote friend (Friend) to send and project AR content to a smartglasses wearer (Wearer). The ARcall system was designed with the practical limits of existing AR glasses in mind, including shorter battery life and a reduced field of view. We conduct a qualitative evaluation of the three main components of ARcall: Drop-In, ARaction, and Micro-Chat. Our results provide novel insights for building future AR-based communication methods, including, the importance of context priming, user control over AR content placement, and the feeling of co-presence while conversing.
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
284,435