Dataset schema (column, type, value range):

  id                  string   lengths 9 to 16
  title               string   lengths 4 to 278
  abstract            string   lengths 3 to 4.08k
  cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT,
  cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
                      bool     2 classes (true/false) each
  __index_level_0__   int64    range 0 to 541k
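The 18 boolean flags in each record map positionally onto the category columns listed in the schema, so a row's labels can be recovered by name. A minimal Python sketch, assuming rows are plain dicts keyed by column name (the `LABEL_COLUMNS` list and `decode_labels` helper are illustrative, not part of the dataset itself):

```python
# Category columns in schema order; the boolean flags of a record
# follow this same order.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(row: dict) -> list[str]:
    """Return the names of the categories whose flag is True in `row`."""
    return [name for name in LABEL_COLUMNS if row.get(name)]

# Example: a row carrying only the cs.CL flag.
row = {name: False for name in LABEL_COLUMNS}
row["cs.CL"] = True
print(decode_labels(row))  # → ['cs.CL']
```

Because the decoding is purely positional, the same helper works whether the flags arrive as a dict, or as an 18-element list zipped against `LABEL_COLUMNS`.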
2404.01230
LLM as a Mastermind: A Survey of Strategic Reasoning with Large Language Models
This paper presents a comprehensive survey of the current status and opportunities for Large Language Models (LLMs) in strategic reasoning, a sophisticated form of reasoning that necessitates understanding and predicting adversary actions in multi-agent settings while adjusting strategies accordingly. Strategic reasoning is distinguished by its focus on the dynamic and uncertain nature of interactions among multiple agents, where comprehending the environment and anticipating the behavior of others is crucial. We explore the scopes, applications, methodologies, and evaluation metrics related to strategic reasoning with LLMs, highlighting the burgeoning development in this area and the interdisciplinary approaches enhancing their decision-making performance. This survey aims to systematize and clarify the scattered literature on this subject, providing a systematic review that underscores the importance of strategic reasoning as a critical cognitive capability and offers insights into future research directions and potential improvements.
labels: cs.CL
index: 443,324

2407.18525
Is larger always better? Evaluating and prompting large language models for non-generative medical tasks
The use of Large Language Models (LLMs) in medicine is growing, but their ability to handle both structured Electronic Health Record (EHR) data and unstructured clinical notes is not well-studied. This study benchmarks various models, including GPT-based LLMs, BERT-based models, and traditional clinical predictive models, for non-generative medical tasks utilizing renowned datasets. We assessed 14 language models (9 GPT-based and 5 BERT-based) and 7 traditional predictive models using the MIMIC dataset (ICU patient records) and the TJH dataset (early COVID-19 EHR data), focusing on tasks such as mortality and readmission prediction, disease hierarchy reconstruction, and biomedical sentence matching, comparing both zero-shot and finetuned performance. Results indicated that LLMs exhibited robust zero-shot predictive capabilities on structured EHR data when using well-designed prompting strategies, frequently surpassing traditional models. However, for unstructured medical texts, LLMs did not outperform finetuned BERT models, which excelled in both supervised and unsupervised tasks. Consequently, while LLMs are effective for zero-shot learning on structured data, finetuned BERT models are more suitable for unstructured texts, underscoring the importance of selecting models based on specific task requirements and data characteristics to optimize the application of NLP technology in healthcare.
labels: cs.AI, cs.LG, cs.CL
index: 476,410

1905.09265
Bridging Stereo Matching and Optical Flow via Spatiotemporal Correspondence
Stereo matching and flow estimation are two essential tasks for scene understanding, spatially in 3D and temporally in motion. Existing approaches have been focused on the unsupervised setting due to the limited resource to obtain the large-scale ground truth data. To construct a self-learnable objective, co-related tasks are often linked together to form a joint framework. However, the prior work usually utilizes independent networks for each task, thus not allowing to learn shared feature representations across models. In this paper, we propose a single and principled network to jointly learn spatiotemporal correspondence for stereo matching and flow estimation, with a newly designed geometric connection as the unsupervised signal for temporally adjacent stereo pairs. We show that our method performs favorably against several state-of-the-art baselines for both unsupervised depth and flow estimation on the KITTI benchmark dataset.
labels: cs.CV
index: 131,691

1910.06573
IMMVP: An Efficient Daytime and Nighttime On-Road Object Detector
It is hard to detect on-road objects under various lighting conditions. To improve the quality of the classifier, three techniques are used. We define subclasses to separate daytime and nighttime samples. Then we skip similar samples in the training set to prevent overfitting. With the help of outside training samples, the detection accuracy is also improved. To detect objects on an edge device, the Nvidia Jetson TX2 platform, we employ the lightweight ResNet-18 FPN model as the backbone feature extractor. The FPN (Feature Pyramid Network) generates good features for detecting objects over various scales. With the Cascade R-CNN technique, the bounding boxes are iteratively refined for better results.
labels: cs.CV, Other
index: 149,381

2406.01863
Towards Effective Time-Aware Language Representation: Exploring Enhanced Temporal Understanding in Language Models
In the evolving field of Natural Language Processing, understanding the temporal context of text is increasingly crucial. This study investigates methods to incorporate temporal information during pre-training, aiming to achieve effective time-aware language representation for improved performance on time-related tasks. In contrast to common pre-trained models like BERT, which rely on synchronic document collections such as BookCorpus and Wikipedia, our research introduces BiTimeBERT 2.0, a novel language model pre-trained on a temporal news article collection. BiTimeBERT 2.0 utilizes this temporal news collection, focusing on three innovative pre-training objectives: Time-Aware Masked Language Modeling (TAMLM), Document Dating (DD), and Time-Sensitive Entity Replacement (TSER). Each objective targets a unique aspect of temporal information. TAMLM is designed to enhance the understanding of temporal contexts and relations, DD integrates document timestamps as chronological markers, and TSER focuses on the temporal dynamics of "Person" entities, recognizing their inherent temporal significance. The experimental results consistently demonstrate that BiTimeBERT 2.0 outperforms models like BERT and other existing pre-trained models, achieving substantial gains across a variety of downstream NLP tasks and applications where time plays a pivotal role.
labels: cs.CL
index: 460,494

2301.10460
HAL3D: Hierarchical Active Learning for Fine-Grained 3D Part Labeling
We present the first active learning tool for fine-grained 3D part labeling, a problem which challenges even the most advanced deep learning (DL) methods due to the significant structural variations among the small and intricate parts. For the same reason, the necessary data annotation effort is tremendous, motivating approaches to minimize human involvement. Our labeling tool iteratively verifies or modifies part labels predicted by a deep neural network, with human feedback continually improving the network prediction. To effectively reduce human efforts, we develop two novel features in our tool, hierarchical and symmetry-aware active labeling. Our human-in-the-loop approach, coined HAL3D, achieves 100% accuracy (barring human errors) on any test set with pre-defined hierarchical part labels, with 80% time-saving over manual effort.
labels: cs.CV
index: 341,815

1307.7973
Connecting Language and Knowledge Bases with Embedding Models for Relation Extraction
This paper proposes a novel approach for relation extraction from free text which is trained to jointly use information from the text and from existing knowledge. Our model is based on two scoring functions that operate by learning low-dimensional embeddings of words and of entities and relationships from a knowledge base. We empirically show on New York Times articles aligned with Freebase relations that our approach is able to efficiently use the extra information provided by a large subset of Freebase data (4M entities, 23k relationships) to improve over existing methods that rely on text features alone.
labels: cs.IR, cs.LG, cs.CL
index: 26,155

2112.08961
Objective hearing threshold identification from auditory brainstem response measurements using supervised and self-supervised approaches
Hearing loss is a major health problem and psychological burden in humans. Mouse models offer a possibility to elucidate genes involved in the underlying developmental and pathophysiological mechanisms of hearing impairment. To this end, large-scale mouse phenotyping programs include auditory phenotyping of single-gene knockout mouse lines. Using the auditory brainstem response (ABR) procedure, the German Mouse Clinic and similar facilities worldwide have produced large, uniform data sets of averaged ABR raw data of mutant and wildtype mice. In the course of standard ABR analysis, hearing thresholds are assessed visually by trained staff from series of signal curves of increasing sound pressure level. This is time-consuming and prone to be biased by the reader as well as the graphical display quality and scale. In an attempt to reduce workload and improve quality and reproducibility, we developed and compared two methods for automated hearing threshold identification from averaged ABR raw data: a supervised approach involving two combined neural networks trained on human-generated labels and a self-supervised approach, which exploits the signal power spectrum and combines random forest sound level estimation with a piece-wise curve fitting algorithm for threshold finding. We show that both models work well, outperform human threshold detection, and are suitable for fast, reliable, and unbiased hearing threshold detection and quality control. In a high-throughput mouse phenotyping environment, both methods perform well as part of an automated end-to-end screening pipeline to detect candidate genes for hearing involvement. Code for both models as well as data used for this work are freely available.
labels: cs.LG
index: 271,991

2103.04136
Perception Framework through Real-Time Semantic Segmentation and Scene Recognition on a Wearable System for the Visually Impaired
As scene information, including objectness and scene type, is important for people with visual impairment, in this work we present a multi-task efficient perception system for the scene parsing and recognition tasks. Building on the compact ResNet backbone, our designed network architecture has two paths with shared parameters. In the structure, the semantic segmentation path integrates fast attention, with the aim of harvesting long-range contextual information in an efficient manner. Simultaneously, the scene recognition path attains the scene type inference by passing the semantic features into semantic-driven attention networks and combining the semantic extracted representations with the RGB extracted representations through a gated attention module. In the experiments, we have verified the system's accuracy and efficiency on both public datasets and real-world scenes. This system runs on a wearable belt with an Intel RealSense LiDAR camera and an Nvidia Jetson AGX Xavier processor, which can accompany visually impaired people and provide assistive scene information in their navigation tasks.
labels: cs.RO, cs.CV
index: 223,536

1803.03807
CIoTA: Collaborative IoT Anomaly Detection via Blockchain
Due to their rapid growth and deployment, Internet of things (IoT) devices have become a central aspect of our daily lives. However, they tend to have many vulnerabilities which can be exploited by an attacker. Unsupervised techniques, such as anomaly detection, can help us secure the IoT devices. However, an anomaly detection model must be trained for a long time in order to capture all benign behaviors. This approach is vulnerable to adversarial attacks since all observations are assumed to be benign while training the anomaly detection model. In this paper, we propose CIoTA, a lightweight framework that utilizes the blockchain concept to perform distributed and collaborative anomaly detection for devices with limited resources. CIoTA uses blockchain to incrementally update a trusted anomaly detection model via self-attestation and consensus among IoT devices. We evaluate CIoTA on our own distributed IoT simulation platform, which consists of 48 Raspberry Pis, to demonstrate CIoTA's ability to enhance the security of each device and the security of the network as a whole.
labels: cs.LG, cs.CR, cs.CY, Other
index: 92,324

2201.12599
Semantic-assisted image compression
Conventional image compression methods typically aim at pixel-level consistency while ignoring the performance of downstream AI tasks. To solve this problem, this paper proposes a Semantic-Assisted Image Compression method (SAIC), which can maintain semantic-level consistency to enable high performance of downstream AI tasks. To this end, we train the compression network using a semantic-level loss function. In particular, semantic-level loss is measured using a gradient-based semantic weights mechanism (GSW). GSW directly considers downstream AI tasks' perceptual results. Then, this paper proposes a semantic-level distortion evaluation metric to quantify the amount of semantic information retained during the compression process. Experimental results show that the proposed SAIC method can retain more semantic-level information and achieve better performance on downstream AI tasks compared to the traditional deep learning-based method and the advanced perceptual method at the same compression ratio.
labels: cs.CV
index: 277,707

2006.10643
Erdos Goes Neural: an Unsupervised Learning Framework for Combinatorial Optimization on Graphs
Combinatorial optimization problems are notoriously challenging for neural networks, especially in the absence of labeled instances. This work proposes an unsupervised learning framework for CO problems on graphs that can provide integral solutions of certified quality. Inspired by Erdos' probabilistic method, we use a neural network to parametrize a probability distribution over sets. Crucially, we show that when the network is optimized w.r.t. a suitably chosen loss, the learned distribution contains, with controlled probability, a low-cost integral solution that obeys the constraints of the combinatorial problem. The probabilistic proof of existence is then derandomized to decode the desired solutions. We demonstrate the efficacy of this approach to obtain valid solutions to the maximum clique problem and to perform local graph clustering. Our method achieves competitive results on both real datasets and synthetic hard instances.
labels: cs.LG
index: 182,958

2209.06308
Risk-aware Resource Allocation for Multiple UAVs-UGVs Recharging Rendezvous
We study a resource allocation problem for the cooperative aerial-ground vehicle routing application, in which multiple Unmanned Aerial Vehicles (UAVs) with limited battery capacity and multiple Unmanned Ground Vehicles (UGVs) that can also act as mobile recharging stations need to jointly accomplish a mission such as persistently monitoring a set of points. Due to the limited battery capacity of the UAVs, they sometimes have to deviate from their task to rendezvous with the UGVs and get recharged. Each UGV can serve a limited number of UAVs at a time. In contrast to prior work on deterministic multi-robot scheduling, we consider the challenge imposed by the stochasticity of the energy consumption of the UAV. We are interested in finding the optimal recharging schedule of the UAVs such that the travel cost is minimized and the probability that no UAV runs out of charge within the planning horizon is greater than a user-defined tolerance. We formulate this problem (Risk-aware Recharging Rendezvous Problem, RRRP) as an Integer Linear Program (ILP), in which the matching constraint captures the resource availability constraints and the knapsack constraint captures the success probability constraints. We propose a bicriteria approximation algorithm to solve RRRP. We demonstrate the effectiveness of our formulation and algorithm in the context of one persistent monitoring mission.
labels: cs.RO
index: 317,348

2205.11308
Symptom Identification for Interpretable Detection of Multiple Mental Disorders
Mental disease detection (MDD) from social media has suffered from poor generalizability and interpretability, due to lack of symptom modeling. This paper introduces PsySym, the first annotated symptom identification corpus of multiple psychiatric disorders, to facilitate further research progress. PsySym is annotated according to a knowledge graph of the 38 symptom classes related to 7 mental diseases compiled from established clinical manuals and scales, using a novel annotation framework designed for diversity and quality. Experiments show that symptom-assisted MDD enabled by PsySym can outperform strong pure-text baselines. We also exhibit the convincing MDD explanations provided by symptom predictions with case studies, and point to their further potential applications.
labels: cs.CL
index: 298,090

2101.07957
Near-Optimal Regret Bounds for Contextual Combinatorial Semi-Bandits with Linear Payoff Functions
The contextual combinatorial semi-bandit problem with linear payoff functions is a decision-making problem in which a learner chooses a set of arms with the feature vectors in each round under given constraints so as to maximize the sum of rewards of arms. Several existing algorithms have regret bounds that are optimal with respect to the number of rounds $T$. However, there is a gap of $\tilde{O}(\max(\sqrt{d}, \sqrt{k}))$ between the current best upper and lower bounds, where $d$ is the dimension of the feature vectors, $k$ is the number of the chosen arms in a round, and $\tilde{O}(\cdot)$ ignores the logarithmic factors. The dependence of $k$ and $d$ is of practical importance because $k$ may be larger than $T$ in real-world applications such as recommender systems. In this paper, we fill the gap by improving the upper and lower bounds. More precisely, we show that the C${}^2$UCB algorithm proposed by Qin, Chen, and Zhu (2014) has the optimal regret bound $\tilde{O}(d\sqrt{kT} + dk)$ for the partition matroid constraints. For general constraints, we propose an algorithm that modifies the reward estimates of arms in the C${}^2$UCB algorithm and demonstrate that it enjoys the optimal regret bound for a more general problem that can take into account other objectives simultaneously. We also show that our technique would be applicable to related problems. Numerical experiments support our theoretical results and considerations.
labels: cs.LG
index: 216,184

2412.05696
Jointly RS Image Deblurring and Super-Resolution with Adjustable-Kernel and Multi-Domain Attention
Remote Sensing (RS) image deblurring and Super-Resolution (SR) are common tasks in computer vision that aim at restoring RS image detail and spatial scale, respectively. However, real-world RS images often suffer from a complex combination of global low-resolution (LR) degeneration and local blurring degeneration. Although carefully designed deblurring and SR models perform well on these two tasks individually, a unified model that performs jointly RS image deblurring and super-resolution (JRSIDSR) task is still challenging due to the vital dilemma of reconstructing the global and local degeneration simultaneously. Additionally, existing methods struggle to capture the interrelationship between deblurring and SR processes, leading to suboptimal results. To tackle these issues, we give a unified theoretical analysis of RS images' spatial and blur degeneration processes and propose a dual-branch parallel network named AKMD-Net for the JRSIDSR task. AKMD-Net consists of two main branches: deblurring and super-resolution branches. In the deblurring branch, we design a pixel-adjustable kernel block (PAKB) to estimate the local and spatial-varying blur kernels. In the SR branch, a multi-domain attention block (MDAB) is proposed to capture the global contextual information enhanced with high-frequency details. Furthermore, we develop an adaptive feature fusion (AFF) module to model the contextual relationships between the deblurring and SR branches. Finally, we design an adaptive Wiener loss (AW Loss) to depress the prior noise in the reconstructed images. Extensive experiments demonstrate that the proposed AKMD-Net achieves state-of-the-art (SOTA) quantitative and qualitative performance on commonly used RS image datasets. The source code is publicly available at https://github.com/zpc456/AKMD-Net.
labels: cs.CV
index: 514,933

2012.14633
Supermodularity and valid inequalities for quadratic optimization with indicators
We study the minimization of a rank-one quadratic with indicators and show that the underlying set function obtained by projecting out the continuous variables is supermodular. Although supermodular minimization is, in general, difficult, the specific set function for the rank-one quadratic can be minimized in linear time. We show that the convex hull of the epigraph of the quadratic can be obtained from inequalities for the underlying supermodular set function by lifting them into nonlinear inequalities in the original space of variables. Explicit forms of the convex-hull description are given, both in the original space of variables and in an extended formulation via conic quadratic-representable inequalities, along with a polynomial separation algorithm. Computational experiments indicate that the lifted supermodular inequalities in conic quadratic form are quite effective in reducing the integrality gap for quadratic optimization with indicators.
labels: cs.LG
index: 213,567

2306.02659
Hybrid Trajectory Optimization for Autonomous Terrain Traversal of Articulated Tracked Robots
Autonomous terrain traversal of articulated tracked robots can reduce operator cognitive load to enhance task efficiency and facilitate extensive deployment. We present a novel hybrid trajectory optimization method aimed at generating efficient, stable, and smooth traversal motions. To achieve this, we develop a planar robot-terrain contact model and divide the robot's motion into hybrid modes of driving and traversing. By using a generalized coordinate description, the configuration space dimension is reduced, which facilitates real-time planning. The hybrid trajectory optimization is transcribed into a nonlinear programming problem and divided into subproblems to be solved in a receding-horizon planning fashion. Mode switching is facilitated by associating optimized motion durations with a predefined traversal sequence. A multi-objective cost function is formulated to further improve the traversal performance. Additionally, map sampling, terrain simplification, and tracking controller modules are integrated into the autonomous terrain traversal system. Our approach is validated in simulation and real-world scenarios with the Searcher robotic platform. Comparative experiments with expert operator control and state-of-the-art methods show advantages in terms of time and energy efficiency, stability, and smoothness of motion.
labels: cs.RO
index: 371,000

2312.11795
MELO: Enhancing Model Editing with Neuron-Indexed Dynamic LoRA
Large language models (LLMs) have shown great success in various Natural Language Processing (NLP) tasks, but they still need updates after deployment to fix errors or keep pace with the changing knowledge in the world. Researchers formulate this problem as Model Editing and have developed various editors focusing on different axes of editing properties. However, current editors can hardly support all properties and rely on heavy computational resources. In this paper, we propose a plug-in Model Editing method based on neuron-indexed dynamic LoRA (MELO), which alters the behavior of language models by dynamically activating certain LoRA blocks according to the index built in an inner vector database. Our method satisfies various editing properties with high efficiency and can be easily integrated into multiple LLM backbones. Experimental results show that our proposed MELO achieves state-of-the-art editing performance on three sequential editing tasks (document classification, question answering and hallucination correction), while requiring the least trainable parameters and computational cost.
labels: cs.CL
index: 416,715

cs/0701043
Adaptive Alternating Minimization Algorithms
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in such an adaptive setting. We present applications of our results to adaptive decomposition of mixtures, adaptive log-optimal portfolio selection, and adaptive filter design.
labels: cs.IT
index: 540,026

1810.11194
Distributed Market Clearing Approach for Local Energy Trading in Transactive Market
This paper proposes a market clearing mechanism for energy trading in a local transactive market, where each player can participate in the market as seller or buyer and tries to maximize its welfare individually. Market players send their demand and supply to a local data center, where clearing price is determined to balance demand and supply. The topology of the grid and associated network constraints are considered to compute a price signal in the data center to keep the system secure by applying this signal to the corresponding players. The proposed approach needs only the demanded/supplied power by each player to reach global optimum which means that utility and cost function parameters would remain private. Also, this approach uses distributed method by applying local market clearing price as coordination information and direct load flow (DLF) for power flow calculation saving computation resources and making it suitable for online and automatic operation for a market with a large number of players. The proposed method is tested on a market with 50 players and simulation results show that the convergence is guaranteed and the proposed distributed method can reach the same result as conventional centralized approach.
labels: cs.SY
index: 111,453

2307.16773
AsdKB: A Chinese Knowledge Base for the Early Screening and Diagnosis of Autism Spectrum Disorder
To easily obtain the knowledge about autism spectrum disorder and help its early screening and diagnosis, we create AsdKB, a Chinese knowledge base on autism spectrum disorder. The knowledge base is built on top of various sources, including 1) the disease knowledge from SNOMED CT and ICD-10 clinical descriptions on mental and behavioural disorders, 2) the diagnostic knowledge from DSM-5 and different screening tools recommended by social organizations and medical institutes, and 3) the expert knowledge on professional physicians and hospitals from the Web. AsdKB contains both ontological and factual knowledge, and is accessible as Linked Data at https://w3id.org/asdkb/. The potential applications of AsdKB are question answering, auxiliary diagnosis, and expert recommendation, and we illustrate them with a prototype which can be accessed at http://asdkb.org.cn/.
labels: cs.AI, cs.IR, cs.CL
index: 382,729

2010.11619
Self-Supervised Shadow Removal
Shadow removal is an important computer vision task aiming at the detection and successful removal of the shadow produced by an occluded light source and a photo-realistic restoration of the image contents. Decades of research produced a multitude of hand-crafted restoration techniques and, more recently, learned solutions from shadowed and shadow-free training image pairs. In this work, we propose an unsupervised single image shadow removal solution via self-supervised learning by using a conditioned mask. In contrast to existing literature, we do not require paired shadowed and shadow-free images; instead we rely on self-supervision and jointly learn deep models to remove and add shadows to images. We validate our approach on the recently introduced ISTD and USR datasets. We largely improve quantitatively and qualitatively over the compared methods and set a new state-of-the-art performance in single image shadow removal.
labels: cs.AI, cs.CV
index: 202,332

2212.14124
Joint Action is a Framework for Understanding Partnerships Between Humans and Upper Limb Prostheses
Recent advances in upper limb prostheses have led to significant improvements in the number of movements provided by the robotic limb. However, the method for controlling multiple degrees of freedom via user-generated signals remains challenging. To address this issue, various machine learning controllers have been developed to better predict movement intent. As these controllers become more intelligent and take on more autonomy in the system, the traditional approach of representing the human-machine interface as a human controlling a tool becomes limiting. One possible approach to improve the understanding of these interfaces is to model them as collaborative, multi-agent systems through the lens of joint action. The field of joint action has been commonly applied to two human partners who are trying to work jointly together to achieve a task, such as singing or moving a table together, by effecting coordinated change in their shared environment. In this work, we compare different prosthesis controllers (proportional electromyography with sequential switching, pattern recognition, and adaptive switching) in terms of how they present the hallmarks of joint action. The results of the comparison lead to a new perspective for understanding how existing myoelectric systems relate to each other, along with recommendations for how to improve these systems by increasing the collaborative communication between each partner.
true
false
false
false
true
false
false
true
false
false
false
false
false
false
true
false
false
false
338,511
2302.03438
Uncoupled Learning of Differential Stackelberg Equilibria with Commitments
In multi-agent problems requiring a high degree of cooperation, success often depends on the ability of the agents to adapt to each other's behavior. A natural solution concept in such settings is the Stackelberg equilibrium, in which the ``leader'' agent selects the strategy that maximizes its own payoff given that the ``follower'' agent will choose their best response to this strategy. Recent work has extended this solution concept to two-player differentiable games, such as those arising from multi-agent deep reinforcement learning, in the form of the \textit{differential} Stackelberg equilibrium. While this previous work has presented learning dynamics which converge to such equilibria, these dynamics are ``coupled'' in the sense that the learning updates for the leader's strategy require some information about the follower's payoff function. As such, these methods cannot be applied to truly decentralised multi-agent settings, particularly ad hoc cooperation, where each agent only has access to its own payoff function. In this work we present ``uncoupled'' learning dynamics based on zeroth-order gradient estimators, in which each agent's strategy update depends only on their observations of the other's behavior. We analyze the convergence of these dynamics in general-sum games, and prove that they converge to differential Stackelberg equilibria under the same conditions as previous coupled methods. Furthermore, we present an online mechanism by which symmetric learners can negotiate leader-follower roles. We conclude with a discussion of the implications of our work for multi-agent reinforcement learning and ad hoc collaboration more generally.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
true
false
false
false
344,333
1507.00248
The Network Picture of Labor Flow
We construct a data-driven model of flows in graphs that captures the essential elements of the movement of workers between jobs in the companies (firms) of entire economic systems such as countries. The model is based on the observation that certain job transitions between firms are often repeated over time, showing persistent behavior, and suggesting the construction of static graphs to act as the scaffolding for job mobility. Individuals in the job market (the workforce) are modelled by a discrete-time random walk on graphs, where each individual at a node can possess two states: employed or unemployed, and the rates of becoming unemployed and of finding a new job are node dependent parameters. We calculate the steady state solution of the model and compare it to extensive micro-datasets for Mexico and Finland, comprised of hundreds of thousands of firms and individuals. We find that our model possesses the correct behavior for the numbers of employed and unemployed individuals in these countries down to the level of individual firms. Our framework opens the door to a new approach to the analysis of labor mobility at high resolution, with the tantalizing potential for the development of full forecasting methods in the future.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
44,738
2105.08590
UncertaintyFuseNet: Robust Uncertainty-aware Hierarchical Feature Fusion Model with Ensemble Monte Carlo Dropout for COVID-19 Detection
The COVID-19 (Coronavirus disease 2019) pandemic has become a major global threat to human health and well-being. Thus, the development of computer-aided detection (CAD) systems that are capable of accurately distinguishing COVID-19 from other diseases using chest computed tomography (CT) and X-ray data is of immediate priority. Such automatic systems are usually based on traditional machine learning or deep learning methods. Differently from most existing studies, which used either CT scan or X-ray images in COVID-19-case classification, we present a simple but efficient deep learning feature fusion model, called UncertaintyFuseNet, which is able to classify accurately large datasets of both of these types of images. We argue that the uncertainty of the model's predictions should be taken into account in the learning process, even though most existing studies have overlooked it. We quantify the prediction uncertainty in our feature fusion model using the effective Ensemble MC Dropout (EMCD) technique. A comprehensive simulation study has been conducted to compare the results of our new model to the existing approaches, evaluating the performance of competing models in terms of Precision, Recall, F-Measure, Accuracy and ROC curves. The obtained results prove the efficiency of our model, which provided a prediction accuracy of 99.08\% and 96.35\% for the considered CT scan and X-ray datasets, respectively. Moreover, our UncertaintyFuseNet model was generally robust to noise and performed well with previously unseen data. The source code of our implementation is freely available at: https://github.com/moloud1987/UncertaintyFuseNet-for-COVID-19-Classification.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
235,806
2311.03332
Learning Hard-Constrained Models with One Sample
We consider the problem of estimating the parameters of a Markov Random Field with hard-constraints using a single sample. As our main running examples, we use the $k$-SAT and the proper coloring models, as well as general $H$-coloring models; for all of these we obtain both positive and negative results. In contrast to the soft-constrained case, we show in particular that single-sample estimation is not always possible, and that the existence of an estimator is related to the existence of non-satisfiable instances. Our algorithms are based on the pseudo-likelihood estimator. We show variance bounds for this estimator using coupling techniques inspired, in the case of $k$-SAT, by Moitra's sampling algorithm (JACM, 2019); our positive results for colorings build on this new coupling approach. For $q$-colorings on graphs with maximum degree $d$, we give a linear-time estimator when $q>d+1$, whereas the problem is non-identifiable when $q\leq d+1$. For general $H$-colorings, we show that standard conditions that guarantee sampling, such as Dobrushin's condition, are insufficient for one-sample learning; on the positive side, we provide a general condition that is sufficient to guarantee linear-time learning and obtain applications for proper colorings and permissive models. For the $k$-SAT model on formulas with maximum degree $d$, we provide a linear-time estimator when $k\gtrsim 6.45\log d$, whereas the problem becomes non-identifiable when $k\lesssim \log d$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
405,804
1609.06953
The Digital Synaptic Neural Substrate: Size and Quality Matters
We investigate the 'Digital Synaptic Neural Substrate' (DSNS) computational creativity approach further with respect to the size and quality of images that can be used to seed the process. In previous work we demonstrated how combining photographs of people and sequences taken from chess games between weak players can be used to generate chess problems or puzzles of higher aesthetic quality, on average, compared to alternative approaches. In this work we show experimentally that using larger images as opposed to smaller ones improves the output quality even further. The same is also true for using clearer or less corrupted images. The reasons why these things influence the DSNS process is presently not well-understood and debatable but the findings are nevertheless immediately applicable for obtaining better results.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
61,367
2303.00505
Robust consensus control of second-order uncertain multiagent systems with velocity and input constraints (extended version)
In this paper, we investigate the consensus problem of second-order multiagent systems under directed graphs. Simple yet robust consensus algorithms that advance existing achievements in accounting for velocity and input constraints, agent uncertainties, and lack of neighboring velocity measurements are proposed. Furthermore, we show that the proposed method can be applied to the consensus control of uncertain robotic manipulators with velocity and control torque constraints. We rigorously prove that the velocity and control inputs stay within prespecified ranges by tuning design parameters a priori and that asymptotic consensus can be achieved through Lyapunov functions and fixed-time stability. Simulations are performed for symmetric and asymmetric constraints to show the efficiency of the theoretical findings.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
348,619
1911.04464
MIDAS: Microcluster-Based Detector of Anomalies in Edge Streams
Given a stream of graph edges from a dynamic graph, how can we assign anomaly scores to edges in an online manner, for the purpose of detecting unusual behavior, using constant time and memory? Existing approaches aim to detect individually surprising edges. In this work, we propose MIDAS, which focuses on detecting microcluster anomalies, or suddenly arriving groups of suspiciously similar edges, such as lockstep behavior, including denial of service attacks in network traffic data. MIDAS has the following properties: (a) it detects microcluster anomalies while providing theoretical guarantees about its false positive probability; (b) it is online, thus processing each edge in constant time and constant memory, and also processes the data 162-644 times faster than state-of-the-art approaches; (c) it provides 42%-48% higher accuracy (in terms of AUC) than state-of-the-art approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
152,999
1909.04538
DeepPrivacy: A Generative Adversarial Network for Face Anonymization
We propose a novel architecture which is able to automatically anonymize faces in images while retaining the original data distribution. We ensure total anonymization of all faces in an image by generating images exclusively on privacy-safe information. Our model is based on a conditional generative adversarial network, generating images considering the original pose and image background. The conditional information enables us to generate highly realistic faces with a seamless transition between the generated face and the existing background. Furthermore, we introduce a diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds. Finally, we present experimental results reflecting the capability of our model to anonymize images while preserving the data distribution, making the data suitable for further training of deep learning models. As far as we know, no other solution has been proposed that guarantees the anonymization of faces while generating realistic images.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
144,832
2210.15134
Learning Variational Motion Prior for Video-based Motion Capture
Motion capture from a monocular video is fundamental and crucial for us humans to naturally experience and interact with each other in Virtual Reality (VR) and Augmented Reality (AR). However, existing methods still struggle with challenging cases involving self-occlusion and complex poses due to the lack of effective motion prior modeling. In this paper, we present a novel variational motion prior (VMP) learning approach for video-based motion capture to resolve the above issue. Instead of directly building the correspondence between the video and motion domain, we propose to learn a generic latent space for capturing the prior distribution of all natural motions, which serves as the basis for subsequent video-based motion capture tasks. To improve the generalization capacity of the prior space, we propose a transformer-based variational autoencoder pretrained over marker-based 3D mocap data, with a novel style-mapping block to boost the generation quality. Afterward, a separate video encoder is attached to the pretrained motion generator for end-to-end fine-tuning over task-specific video datasets. Compared to existing motion prior models, our VMP model serves as a motion rectifier that can effectively reduce temporal jittering and failure modes in frame-wise pose estimation, leading to temporally stable and visually realistic motion capture results. Furthermore, our VMP-based framework models motion at the sequence level and can directly generate motion clips in the forward pass, achieving real-time motion capture during inference. Extensive experiments over both public datasets and in-the-wild videos have demonstrated the efficacy and generalization capability of our framework.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
326,818
2211.06137
Emergence of Concepts in DNNs?
The present paper reviews and discusses work from computer science that proposes to identify concepts in internal representations (hidden layers) of DNNs. It is examined, first, how existing methods actually identify concepts that are supposedly represented in DNNs. Second, it is discussed how conceptual spaces -- sets of concepts in internal representations -- are shaped by a tradeoff between predictive accuracy and compression. These issues are critically examined by drawing on philosophy. While there is evidence that DNNs are able to represent non-trivial inferential relations between concepts, our ability to identify concepts is severely limited.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
329,793
2412.08520
GR-NLP-TOOLKIT: An Open-Source NLP Toolkit for Modern Greek
We present GR-NLP-TOOLKIT, an open-source natural language processing (NLP) toolkit developed specifically for modern Greek. The toolkit provides state-of-the-art performance in five core NLP tasks, namely part-of-speech tagging, morphological tagging, dependency parsing, named entity recognition, and Greeklish-to-Greek transliteration. The toolkit is based on pre-trained Transformers, it is freely available, and can be easily installed in Python (pip install gr-nlp-toolkit). It is also accessible through a demonstration platform on HuggingFace, along with a publicly available API for non-commercial use. We discuss the functionality provided for each task, the underlying methods, experiments against comparable open-source toolkits, and future possible enhancements. The toolkit is available at: https://github.com/nlpaueb/gr-nlp-toolkit
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
true
516,123
1806.10283
Optimal Scheduling of Electrolyzer in Power Market with Dynamic Prices
Optimal scheduling of hydrogen production in a dynamic pricing power market can maximize the profit of the hydrogen producer; however, it highly depends on an accurate forecast of hydrogen consumption. In this paper, we propose a deep learning based forecasting approach for predicting the hydrogen consumption of fuel cell vehicles in the future taxi industry. The cost of hydrogen production is minimized by utilizing the proposed forecasting tool to reduce the hydrogen produced during high-cost on-peak hours and guide the hydrogen producer to store sufficient hydrogen during low-cost off-peak hours.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
101,520
2010.06250
Regret minimization in stochastic non-convex learning via a proximal-gradient approach
Motivated by applications in machine learning and operations research, we study regret minimization with stochastic first-order oracle feedback in online constrained, and possibly non-smooth, non-convex problems. In this setting, the minimization of external regret is beyond reach for first-order methods, so we focus on a local regret measure defined via a proximal-gradient mapping. To achieve no (local) regret in this setting, we develop a prox-grad method based on stochastic first-order feedback, and a simpler method for when access to a perfect first-order oracle is possible. Both methods are min-max order-optimal, and we also establish a bound on the number of prox-grad queries these methods require. As an important application of our results, we also obtain a link between online and offline non-convex stochastic optimization manifested as a new prox-grad scheme with complexity guarantees matching those obtained via variance reduction techniques.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
200,422
1806.09935
On the performance of multi-objective estimation of distribution algorithms for combinatorial problems
Fitness landscape analysis investigates features with a high influence on the performance of optimization algorithms, aiming to take advantage of the addressed problem characteristics. In this work, a fitness landscape analysis using problem features is performed for a Multi-objective Bayesian Optimization Algorithm (mBOA) on instances of the MNK-landscape problem for 2, 3, 5 and 8 objectives. We also compare the results of mBOA with those provided by NSGA-III through the analysis of their estimated runtime necessary to identify an approximation of the Pareto front. Moreover, in order to scrutinize the probabilistic graphic model obtained by mBOA, the Pareto front is examined according to a probabilistic view. The fitness landscape study shows that mBOA is moderately or loosely influenced by some problem features, according to a simple and a multiple linear regression model, which are proposed to predict the algorithms' performance in terms of the estimated runtime. Besides, we conclude that the analysis of the probabilistic graphic model produced at the end of evolution can be useful to understand the convergence and diversity performances of the proposed approach.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
101,459
2403.14849
Output-Constrained Lossy Source Coding With Application to Rate-Distortion-Perception Theory
The distortion-rate function of output-constrained lossy source coding with limited common randomness is analyzed for the special case of squared error distortion measure. An explicit expression is obtained when both source and reconstruction distributions are Gaussian. This further leads to a partial characterization of the information-theoretic limit of quadratic Gaussian rate-distortion-perception coding with the perception measure given by Kullback-Leibler divergence or squared quadratic Wasserstein distance.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
440,280
2205.09226
Modeling Multi-hop Question Answering as Single Sequence Prediction
Fusion-in-decoder (Fid) (Izacard and Grave, 2020) is a generative question answering (QA) model that leverages passage retrieval with a pre-trained transformer and pushed the state of the art on single-hop QA. However, the complexity of multi-hop QA hinders the effectiveness of the generative QA approach. In this work, we propose a simple generative approach (PathFid) that extends the task beyond just answer generation by explicitly modeling the reasoning process to resolve the answer for multi-hop questions. By linearizing the hierarchical reasoning path of supporting passages, their key sentences, and finally the factoid answer, we cast the problem as a single sequence prediction task. To facilitate complex reasoning with multiple clues, we further extend the unified flat representation of multiple input documents by encoding cross-passage interactions. Our extensive experiments demonstrate that PathFid leads to strong performance gains on two multi-hop QA datasets: HotpotQA and IIRC. Besides the performance gains, PathFid is more interpretable, which in turn yields answers that are more faithfully grounded to the supporting passages and facts compared to the baseline Fid model.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
297,190
2308.13837
Class-constrained t-SNE: Combining Data Features and Class Probabilities
Data features and class probabilities are two main perspectives when, e.g., evaluating model results and identifying problematic items. Class probabilities represent the likelihood that each instance belongs to a particular class, which can be produced by probabilistic classifiers or even human labeling with uncertainty. Since both perspectives are multi-dimensional data, dimensionality reduction (DR) techniques are commonly used to extract informative characteristics from them. However, existing methods either focus solely on the data feature perspective or rely on class probability estimates to guide the DR process. In contrast to previous work where separate views are linked to conduct the analysis, we propose a novel approach, class-constrained t-SNE, that combines data features and class probabilities in the same DR result. Specifically, we combine them by balancing two corresponding components in a cost function to optimize the positions of data points and iconic representation of classes -- class landmarks. Furthermore, an interactive user-adjustable parameter balances these two components so that users can focus on the weighted perspectives of interest and also empowers a smooth visual transition between varying perspectives to preserve the mental map. We illustrate its application potential in model evaluation and visual-interactive labeling. A comparative analysis is performed to evaluate the DR results.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
388,076
2005.03008
Evaluating text coherence based on the graph of the consistency of phrases to identify symptoms of schizophrenia
Different state-of-the-art methods of the detection of schizophrenia symptoms based on the estimation of text coherence have been analyzed. The analysis of a text at the level of phrases has been suggested. The method based on the graph of the consistency of phrases has been proposed to evaluate the semantic coherence and the cohesion of a text. The semantic coherence, cohesion, and other linguistic features (lexical diversity, lexical density) have been taken into account to form feature vectors for the training of a model-classifier. The training of the classifier has been performed on the set of English-language interviews. According to the retrieved results, the impact of each feature on the output of the model has been analyzed. The results obtained can indicate that the proposed method based on the graph of the consistency of phrases may be used in the different tasks of the detection of mental illness.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
176,037
2205.03465
Power Control of Grid-Forming Converters Based on Full-State Feedback
The active and reactive power controllers of grid-forming converters are traditionally designed separately, which relies on the assumption of loop decoupling. This paper proposes a full-state feedback control for the power loops of grid-forming converters. First, the power loops are modeled considering their natural coupling, which, therefore, can apply to all kinds of line impedance, i.e., resistive, inductive, or complex. Then a full-state feedback control design is used. By this way, the eigenvalues of the system can be arbitrarily placed to any positions in the timescale of power loops. Therefore, the parameters can be directly chosen by the predefined specifications. A step-by-step parameters design procedure is also given in this paper. Experimental results verify the proposed method.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
295,286
2009.11321
Improving Dialog Evaluation with a Multi-reference Adversarial Dataset and Large Scale Pretraining
There is an increasing focus on model-based dialog evaluation metrics such as ADEM, RUBER, and the more recent BERT-based metrics. These models aim to assign a high score to all relevant responses and a low score to all irrelevant responses. Ideally, such models should be trained using multiple relevant and irrelevant responses for any given context. However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives). To allow for better training and robust evaluation of model-based metrics, we introduce the DailyDialog++ dataset, consisting of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context. Using this dataset, we first show that even in the presence of multiple correct references, n-gram based metrics and embedding based metrics do not perform well at separating relevant responses from even random negatives. While model-based metrics perform better than n-gram and embedding based metrics on random negatives, their performance drops substantially when evaluated on adversarial examples. To check if large scale pretraining could help, we propose a new BERT-based evaluation metric called DEB, which is pretrained on 727M Reddit conversations and then finetuned on our dataset. DEB significantly outperforms existing models, showing better correlation with human judgements and better performance on random negatives (88.27% accuracy). However, its performance again drops substantially, when evaluated on adversarial responses, thereby highlighting that even large-scale pretrained evaluation models are not robust to the adversarial examples in our dataset. The dataset and code are publicly available.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
197,133
2012.01273
Regularization and False Alarms Quantification: Two Sides of the Explainability Coin
Regularization is a well-established technique in machine learning (ML) to achieve an optimal bias-variance trade-off which in turn reduces model complexity and enhances explainability. To this end, some hyper-parameters must be tuned, enabling the ML model to accurately fit the unseen data as well as the seen data. In this article, the authors argue that the regularization of hyper-parameters and quantification of costs and risks of false alarms are in reality two sides of the same coin, explainability. Incorrect or non-existent estimation of either quantities undermines the measurability of the economic value of using ML, to the extent that might make it practically useless.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
209,373
2312.11260
Leveraging Normalization Layer in Adapters With Progressive Learning and Adaptive Distillation for Cross-Domain Few-Shot Learning
Cross-domain few-shot learning presents a formidable challenge, as models must be trained on base classes and then tested on novel classes from various domains with only a few samples at hand. While prior approaches have primarily focused on parameter-efficient methods of using adapters, they often overlook two critical issues: shifts in batch statistics and noisy sample statistics arising from domain discrepancy variations. In this paper, we introduce a novel generic framework that leverages normalization layer in adapters with Progressive Learning and Adaptive Distillation (ProLAD), marking two principal contributions. First, our methodology utilizes two separate adapters: one devoid of a normalization layer, which is more effective for similar domains, and another embedded with a normalization layer, designed to leverage the batch statistics of the target domain, thus proving effective for dissimilar domains. Second, to address the pitfalls of noisy statistics, we deploy two strategies: a progressive training of the two adapters and an adaptive distillation technique derived from features determined by the model solely with the adapter devoid of a normalization layer. Through this adaptive distillation, our approach functions as a modulator, controlling the primary adapter for adaptation, based on each domain. Evaluations on standard cross-domain few-shot learning benchmarks confirm that our technique outperforms existing state-of-the-art methodologies.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
416,495
1912.08283
Progressive VAE Training on Highly Sparse and Imbalanced Data
In this paper, we present a novel approach for training a Variational Autoencoder (VAE) on a highly imbalanced data set. The proposed training of a high-resolution VAE model begins with the training of a low-resolution core model, which can be successfully trained on an imbalanced data set. In subsequent training steps, new convolutional, upsampling, deconvolutional, and downsampling layers are iteratively attached to the model. In each iteration, the additional layers are trained based on the intermediate pretrained model - a result of previous training iterations. Thus, the resolution of the model is progressively increased up to the required resolution level. In this paper, the progressive VAE training is exploited for learning a latent representation with imbalanced, highly sparse data sets and, consequently, generating routes in a constrained 2D space. Routing problems (e.g., vehicle routing problem, travelling salesman problem, and arc routing) are of special significance in many modern applications (e.g., route planning, network maintenance, developing high-performance nanoelectronic systems, and others) and are typically associated with sparse imbalanced data. In this paper, the critical problem of routing billions of components in nanoelectronic devices is considered. The proposed approach exhibits a significant training speedup as compared with state-of-the-art existing VAE training methods, while generating expected image outputs from unseen input data. Furthermore, the final progressive VAE models exhibit much more precise output representation than the Generative Adversarial Network (GAN) models trained with comparable training time. The proposed method is expected to be applicable to a wide range of applications, including but not limited to image inpainting, sentence interpolation, and semi-supervised learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,795
1211.6409
Obesity Heuristic, New Way On Artificial Immune Systems
There is a need for new metaphors from immunology to flourish the application areas of Artificial Immune Systems. A metaheuristic called Obesity Heuristic, derived from advances in obesity treatment, is proposed. The main forces of the algorithm are the generation of omega-6 and omega-3 fatty acids. The algorithm works with a Just-In-Time philosophy, by starting only when desired. A case study of data cleaning is provided. With experiments conducted on standard tables, results show that Obesity Heuristic outperforms other algorithms, with 100% recall. This is a great improvement over other algorithms.
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
false
19,975
2412.19828
Quantum Implicit Neural Compression
Signal compression based on implicit neural representation (INR) is an emerging technique to represent multimedia signals with a small number of bits. While INR-based signal compression achieves high-quality reconstruction for relatively low-resolution signals, the accuracy of high-frequency details is significantly degraded with a small model. To improve the compression efficiency of INR, we introduce quantum INR (quINR), which leverages the exponentially rich expressivity of quantum neural networks for data compression. Evaluations using some benchmark datasets show that the proposed quINR-based compression could improve rate-distortion performance in image compression compared with traditional codecs and classic INR-based coding methods, up to 1.2dB gain.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
520,985
2110.04983
Understanding the Safety Requirements for Learning-based Power Systems Operations
Recent advancements in machine learning and reinforcement learning have brought increased attention to their applicability in a range of decision-making tasks in the operations of power systems, such as short-term emergency control, Volt/VAr control, long-term residential demand response and battery energy management. Despite the promises of providing strong representation of complex system dynamics and fast, efficient learned operation strategies, the safety requirements of such learning paradigms are less discussed. This paper explores the design requirements on both data and model side of such learning algorithms by exploiting the impacts of adversarial attacks on safety critical system operations. Case studies performed on both voltage regulation and topology control tasks demonstrated the potential vulnerabilities of the standard reinforcement learning algorithms, and possible measures of machine learning robustness and security are discussed for power systems operation tasks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
260,116
2108.02859
Evaluating the Tradeoff Between Abstractiveness and Factuality in Abstractive Summarization
Neural models for abstractive summarization tend to generate output that is fluent and well-formed but lacks semantic faithfulness, or factuality, with respect to the input documents. In this paper, we analyze the tradeoff between abstractiveness and factuality of generated summaries across multiple datasets and models, using extensive human evaluations of factuality. In our analysis, we visualize the rates of change in factuality as we gradually increase abstractiveness using a decoding constraint, and we observe that, while increased abstractiveness generally leads to a drop in factuality, the rate of factuality decay depends on factors such as the data that the system was trained on. We introduce two datasets with human factuality judgements; one containing 10.2k generated summaries with systematically varied degrees of abstractiveness; the other containing 4.2k summaries from five different summarization models. We propose new factuality metrics that adjust for the degree of abstractiveness, and we use them to compare the abstractiveness-adjusted factuality of previous summarization works, providing baselines for future work.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
249,474
1712.04753
Learning Spontaneity to Improve Emotion Recognition In Speech
We investigate the effect and usefulness of spontaneity (i.e. whether a given speech is spontaneous or not) in speech in the context of emotion recognition. We hypothesize that emotional content in speech is interrelated with its spontaneity, and use spontaneity classification as an auxiliary task to the problem of emotion recognition. We propose two supervised learning settings that utilize spontaneity to improve speech emotion recognition: a hierarchical model that performs spontaneity detection before performing emotion recognition, and a multitask learning model that jointly learns to recognize both spontaneity and emotion. Through various experiments on the well known IEMOCAP database, we show that by using spontaneity detection as an additional task, significant improvement can be achieved over emotion recognition systems that are unaware of spontaneity. We achieve state-of-the-art emotion recognition accuracy (4-class, 69.1%) on the IEMOCAP database outperforming several relevant and competitive baselines.
true
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
86,652
1504.05122
Optimal Nudging: Solving Average-Reward Semi-Markov Decision Processes as a Minimal Sequence of Cumulative Tasks
This paper describes a novel method to solve average-reward semi-Markov decision processes, by reducing them to a minimal sequence of cumulative reward problems. The usual solution methods for this type of problems update the gain (optimal average reward) immediately after observing the result of taking an action. The alternative introduced, optimal nudging, relies instead on setting the gain to some fixed value, which transitorily makes the problem a cumulative-reward task, solving it by any standard reinforcement learning method, and only then updating the gain in a way that minimizes uncertainty in a minmax sense. The rule for optimal gain update is derived by exploiting the geometric features of the w-l space, a simple mapping of the space of policies. The total number of cumulative reward tasks that need to be solved is shown to be small. Some experiments are presented to explore the features of the algorithm and to compare its performance with other approaches.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
42,229
2202.05594
The Shapley Value in Machine Learning
Over the last few years, the Shapley value, a solution concept from cooperative game theory, has found numerous applications in machine learning. In this paper, we first discuss fundamental concepts of cooperative game theory and axiomatic properties of the Shapley value. Then we give an overview of the most important applications of the Shapley value in machine learning: feature selection, explainability, multi-agent reinforcement learning, ensemble pruning, and data valuation. We examine the most crucial limitations of the Shapley value and point out directions for future research.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
279,922
2308.10638
SCULPT: Shape-Conditioned Unpaired Learning of Pose-dependent Clothed and Textured Human Meshes
We present SCULPT, a novel 3D generative model for clothed and textured 3D meshes of humans. Specifically, we devise a deep neural network that learns to represent the geometry and appearance distribution of clothed human bodies. Training such a model is challenging, as datasets of textured 3D meshes for humans are limited in size and accessibility. Our key observation is that there exist medium-sized 3D scan datasets like CAPE, as well as large-scale 2D image datasets of clothed humans and multiple appearances can be mapped to a single geometry. To effectively learn from the two data modalities, we propose an unpaired learning procedure for pose-dependent clothed and textured human meshes. Specifically, we learn a pose-dependent geometry space from 3D scan data. We represent this as per vertex displacements w.r.t. the SMPL model. Next, we train a geometry conditioned texture generator in an unsupervised way using the 2D image data. We use intermediate activations of the learned geometry model to condition our texture generator. To alleviate entanglement between pose and clothing type, and pose and clothing appearance, we condition both the texture and geometry generators with attribute labels such as clothing types for the geometry, and clothing colors for the texture generator. We automatically generated these conditioning labels for the 2D images based on the visual question answering model BLIP and CLIP. We validate our method on the SCULPT dataset, and compare to state-of-the-art 3D generative models for clothed human bodies. Our code and data can be found at https://sculpt.is.tue.mpg.de.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
386,821
2203.14887
HUNIS: High-Performance Unsupervised Nuclei Instance Segmentation
A high-performance unsupervised nuclei instance segmentation (HUNIS) method is proposed in this work. HUNIS consists of two-stage block-wise operations. The first stage includes: 1) adaptive thresholding of pixel intensities, 2) incorporation of nuclei size/shape priors and 3) removal of false positive nuclei instances. Then, HUNIS conducts the second stage segmentation by receiving guidance from the first one. The second stage exploits the segmentation masks obtained in the first stage and leverages color and shape distributions for a more accurate segmentation. The main purpose of the two-stage design is to provide pixel-wise pseudo-labels from the first to the second stage. This self-supervision mechanism is novel and effective. Experimental results on the MoNuSeg dataset show that HUNIS outperforms all other unsupervised methods by a substantial margin. It also has a competitive standing among state-of-the-art supervised methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,161
2204.00005
Graph-based Active Learning for Semi-supervised Classification of SAR Data
We present a novel method for classification of Synthetic Aperture Radar (SAR) data by combining ideas from graph-based learning and neural network methods within an active learning framework. Graph-based methods in machine learning are based on a similarity graph constructed from the data. When the data consists of raw images composed of scenes, extraneous information can make the classification task more difficult. In recent years, neural network methods have been shown to provide a promising framework for extracting patterns from SAR images. These methods, however, require ample training data to avoid overfitting. At the same time, such training data are often unavailable for applications of interest, such as automatic target recognition (ATR) and SAR data. We use a Convolutional Neural Network Variational Autoencoder (CNNVAE) to embed SAR data into a feature space, and then construct a similarity graph from the embedded data and apply graph-based semi-supervised learning techniques. The CNNVAE feature embedding and graph construction requires no labeled data, which reduces overfitting and improves the generalization performance of graph learning at low label rates. Furthermore, the method easily incorporates a human-in-the-loop for active learning in the data-labeling process. We present promising results and compare them to other standard machine learning methods on the Moving and Stationary Target Acquisition and Recognition (MSTAR) dataset for ATR with small amounts of labeled data.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
true
289,102
2312.08220
EventAid: Benchmarking Event-aided Image/Video Enhancement Algorithms with Real-captured Hybrid Dataset
Event cameras are emerging imaging technology that offers advantages over conventional frame-based imaging sensors in dynamic range and sensing speed. Complementing the rich texture and color perception of traditional image frames, the hybrid camera system of event and frame-based cameras enables high-performance imaging. With the assistance of event cameras, high-quality image/video enhancement methods make it possible to break the limits of traditional frame-based cameras, especially exposure time, resolution, dynamic range, and frame rate limits. This paper focuses on five event-aided image and video enhancement tasks (i.e., event-based video reconstruction, event-aided high frame rate video reconstruction, image deblurring, image super-resolution, and high dynamic range image reconstruction), provides an analysis of the effects of different event properties, a real-captured and ground truth labeled benchmark dataset, a unified benchmarking of state-of-the-art methods, and an evaluation for two mainstream event simulators. In detail, this paper collects a real-captured evaluation dataset EventAid for five event-aided image/video enhancement tasks, by using "Event-RGB" multi-camera hybrid system, taking into account scene diversity and spatiotemporal synchronization. We further perform quantitative and visual comparisons for state-of-the-art algorithms, provide a controlled experiment to analyze the performance limit of event-aided image deblurring methods, and discuss open problems to inspire future research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
415,231
1403.3602
Spontaneous expression classification in the encrypted domain
To date, most facial expression analyses have been based on posed image databases and are carried out without being able to protect the identity of the subjects whose expressions are being recognised. In this paper, we propose and implement a system for classifying facial expressions of images in the encrypted domain based on a Paillier cryptosystem implementation of Fisher Linear Discriminant Analysis and k-nearest neighbour (FLDA + kNN). We present results of experiments carried out on a recently developed natural visible and infrared facial expression (NVIE) database of spontaneous images. To the best of our knowledge, this is the first system that will allow the recognition of encrypted spontaneous facial expressions by a remote server on behalf of a client.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
31,585
2401.00676
Digger: Detecting Copyright Content Mis-usage in Large Language Model Training
Pre-training, which utilizes extensive and varied datasets, is a critical factor in the success of Large Language Models (LLMs) across numerous applications. However, the detailed makeup of these datasets is often not disclosed, leading to concerns about data security and potential misuse. This is particularly relevant when copyrighted material, still under legal protection, is used inappropriately, either intentionally or unintentionally, infringing on the rights of the authors. In this paper, we introduce a detailed framework designed to detect and assess the presence of content from potentially copyrighted books within the training datasets of LLMs. This framework also provides a confidence estimation for the likelihood of each content sample's inclusion. To validate our approach, we conduct a series of simulated experiments, the results of which affirm the framework's effectiveness in identifying and addressing instances of content misuse in LLM training processes. Furthermore, we investigate the presence of recognizable quotes from famous literary works within these datasets. The outcomes of our study have significant implications for ensuring the ethical use of copyrighted materials in the development of LLMs, highlighting the need for more transparent and responsible data management practices in this field.
false
false
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
419,065
2111.08900
A GNN-RNN Approach for Harnessing Geospatial and Temporal Information: Application to Crop Yield Prediction
Climate change is posing new challenges to crop-related concerns including food insecurity, supply stability and economic planning. As one of the central challenges, crop yield prediction has become a pressing task in the machine learning field. Despite its importance, the prediction task is exceptionally complicated since crop yields depend on various factors such as weather, land surface, soil quality as well as their interactions. In recent years, machine learning models have been successfully applied in this domain. However, these models either restrict their tasks to a relatively small region, or only study over a single or few years, which makes them hard to generalize spatially and temporally. In this paper, we introduce a novel graph-based recurrent neural network for crop yield prediction, to incorporate both geographical and temporal knowledge in the model, and further boost predictive power. Our method is trained, validated, and tested on over 2000 counties from 41 states in the US mainland, covering years from 1981 to 2019. As far as we know, this is the first machine learning method that embeds geographical knowledge in crop yield prediction and predicts the crop yields at county level nationwide. We also laid a solid foundation for the comparison with other machine learning baselines by applying well-known linear models, tree-based models, deep learning methods and comparing their performance. Experiments show that our proposed method consistently outperforms the existing state-of-the-art methods on various metrics, validating the effectiveness of geospatial and temporal information.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
266,847
2301.13573
Skill Decision Transformer
Recent work has shown that Large Language Models (LLMs) can be incredibly effective for offline reinforcement learning (RL) by representing the traditional RL problem as a sequence modelling problem (Chen et al., 2021; Janner et al., 2021). However, many of these methods only optimize for high returns, and may not extract much information from a diverse dataset of trajectories. Generalized Decision Transformers (GDTs) (Furuta et al., 2021) have shown that utilizing future trajectory information, in the form of information statistics, can help extract more information from offline trajectory data. Building upon this, we propose Skill Decision Transformer (Skill DT). Skill DT draws inspiration from hindsight relabelling (Andrychowicz et al., 2017) and skill discovery methods to discover a diverse set of primitive behaviors, or skills. We show that Skill DT can not only perform offline state-marginal matching (SMM), but can also discover descriptive behaviors that can be easily sampled. Furthermore, we show that through purely reward-free optimization, Skill DT is still competitive with supervised offline RL approaches on the D4RL benchmark. The code and videos can be found on our project page: https://github.com/shyamsn97/skill-dt
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
342,962
2307.04892
Entity Identifier: A Natural Text Parsing-based Framework For Entity Relation Extraction
The field of programming has a diversity of paradigms that are used according to the working framework. While current neural code generation methods are able to learn and generate code directly from text, we believe that this approach is not optimal for certain code tasks, particularly the generation of classes in an object-oriented project. Specifically, we use natural language processing techniques to extract structured information from requirements descriptions, in order to automate the generation of CRUD (Create, Read, Update, Delete) class code. To facilitate this process, we introduce a pipeline for extracting entity and relation information, as well as a representation called an "Entity Tree" to model this information. We also create a dataset to evaluate the effectiveness of our approach.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
378,544
2306.16482
DenseBAM-GI: Attention Augmented DenseNet with momentum aided GRU for HMER
The task of recognising Handwritten Mathematical Expressions (HMER) is crucial in the fields of digital education and scholarly research. However, it is difficult to accurately determine the length and complex spatial relationships among symbols in handwritten mathematical expressions. In this study, we present a novel encoder-decoder architecture (DenseBAM-GI) for HMER, where the encoder has a Bottleneck Attention Module (BAM) to improve feature representation and the decoder has a Gated Input-GRU (GI-GRU) unit with an extra gate to make decoding long and complex expressions easier. The proposed model is an efficient and lightweight architecture with performance equivalent to state-of-the-art models in terms of Expression Recognition Rate (exprate). It also performs better in terms of top 1, 2, and 3 error accuracy across the CROHME 2014, 2016, and 2019 datasets. DenseBAM-GI achieves the best exprate among all models on the CROHME 2019 dataset. Importantly, these successes are accomplished with a drop in the complexity of the calculation and a reduction in the need for GPU memory.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
376,369
2206.03261
Optimists at Heart: Why Do We Research Game AI? (Extended Version)
In this paper we survey the motivations behind contemporary game AI research by analysing individual publications, the researchers themselves, and the institutions that influence them. In doing so, we identify some negative effects on our field, caused both by external forces outside of our control as well as institutionalised behaviours that are easily overlooked. We suggest how we might begin to address some of these issues as a community, and reassert ourselves as the primary driving force behind the field.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
301,207
2402.16696
Look Before You Leap: Towards Decision-Aware and Generalizable Tool-Usage for Large Language Models
Tool-augmented large language models (LLMs) are attracting widespread attention when accessing up-to-date knowledge and alleviating hallucination issues. Nowadays, advanced closed-source LLMs (e.g., ChatGPT) have demonstrated surprising tool-usage capabilities through prompting and in-context learning techniques. To empower the capabilities of open-source LLMs (e.g., LLaMA) in manipulating tools, current efforts focus on either template-driven or token-triggered tool-usage. However, the former hampers LLMs' flexibility to address diverse user's queries due to constrained tool interactions, while the latter limits the generalizability when engaging with new tools, since tool-usage learning is based on task- and tool-specific datasets. To alleviate these concerns, in this paper, we propose a decision-aware and generalizable tool-usage framework (DEER). Specifically, we first construct the tool-usage samples with multiple decision branches via an automatic generation pipeline, thereby inspiring the decision-making awareness of LLMs under diverse scenarios. Meanwhile, we propose a novel tool sampling strategy to enhance the generalizability of LLMs over unseen tools. Extensive experiments demonstrate that our proposed DEER is effective and significantly outperforms baselines across various datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
432,659
2403.16612
Calibrating Bayesian UNet++ for Sub-Seasonal Forecasting
Seasonal forecasting is a crucial task when it comes to detecting the extreme heat and colds that occur due to climate change. Confidence in the predictions should be reliable since a small increase in the temperatures in a year has a big impact on the world. Calibration of the neural networks provides a way to ensure our confidence in the predictions. However, calibrating regression models is an under-researched topic, especially in forecasters. We calibrate a UNet++ based architecture, which was shown to outperform physics-based models in temperature anomalies. We show that with a slight trade-off between prediction error and calibration error, it is possible to get more reliable and sharper forecasts. We believe that calibration should be an important part of safety-critical machine learning applications such as weather forecasters.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
441,124
2104.10062
Pseudo-Boolean Functions for Optimal Z-Complementary Code Sets with Flexible Lengths
This paper aims to construct optimal Z-complementary code set (ZCCS) with non-power-of-two (NPT) lengths to enable interference-free multicarrier code-division multiple access (MC-CDMA) systems. The existing ZCCSs with NPT lengths, which are constructed from generalized Boolean functions (GBFs), are sub-optimal only with respect to the set size upper bound. For the first time in the literature, we advocate the use of pseudo-Boolean functions (PBFs) (each of which transforms a number of binary variables to a real number as a natural generalization of GBF) for direct constructions of optimal ZCCSs with NPT lengths.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
231,447
1905.10346
Mask-Guided Portrait Editing with Conditional GANs
Portrait editing is a popular subject in photo manipulation. The Generative Adversarial Network (GAN) advances the generation of realistic faces and allows more face editing. In this paper, we argue about three issues in existing techniques: diversity, quality, and controllability for portrait synthesis and editing. To address these issues, we propose a novel end-to-end learning framework that leverages conditional GANs guided by provided face masks for generating faces. The framework learns feature embeddings for every face component (e.g., mouth, hair, eye), separately, contributing to better correspondences for image translation, and local face editing. With the mask, our network is applicable to many applications, like face synthesis driven by mask, face Swap+ (including hair in swapping), and local manipulation. It can also boost the performance of face parsing a bit as an option of data augmentation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
132,028
2411.08901
SoccerGuard: Investigating Injury Risk Factors for Professional Soccer Players with Machine Learning
We present SoccerGuard, a novel framework for predicting injuries in women's soccer using Machine Learning (ML). This framework can ingest data from multiple sources, including subjective wellness and training load reports from players, objective GPS sensor measurements, third-party player statistics, and injury reports verified by medical personnel. We experiment with a number of different settings related to synthetic data generation, input and output window sizes, and ML models for prediction. Our results show that, given the right configurations and feature combinations, injury event prediction can be undertaken with considerable accuracy. The optimal results are achieved when input windows are reduced and larger combined output windows are defined, in combination with an ideally balanced data set. The framework also includes a dashboard with a user-friendly Graphical User Interface (GUI) to support interactive analysis and visualization.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
508,062
2306.11132
Fairness-aware Message Passing for Graph Neural Networks
Graph Neural Networks (GNNs) have shown great power in various domains. However, their predictions may inherit societal biases on sensitive attributes, limiting their adoption in real-world applications. Although many efforts have been taken for fair GNNs, most existing works just adopt widely used fairness techniques in machine learning to graph domains and ignore or don't have a thorough understanding of the message passing mechanism with fairness constraints, which is a distinctive feature of GNNs. To fill the gap, we propose a novel fairness-aware message passing framework GMMD, which is derived from an optimization problem that considers both graph smoothness and representation fairness. GMMD can be intuitively interpreted as encouraging a node to aggregate representations of other nodes from different sensitive groups while subtracting representations of other nodes from the same sensitive group, resulting in fair representations. We also provide a theoretical analysis to justify that GMMD can guarantee fairness, which leads to a simpler and theory-guided variant GMMD-S. Extensive experiments on graph benchmarks show that our proposed framework can significantly improve the fairness of various backbone GNN models while maintaining high accuracy.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
374,484
2406.13225
Communication-Efficient Federated Knowledge Graph Embedding with Entity-Wise Top-K Sparsification
Federated Knowledge Graph Embedding learning (FKGE) encounters challenges in communication efficiency stemming from the considerable size of parameters and extensive communication rounds. However, existing FKGE methods only focus on reducing communication rounds by conducting multiple rounds of local training in each communication round, and ignore reducing the size of parameters transmitted within each communication round. To tackle the problem, we first find that universal reduction in embedding precision across all entities during compression can significantly impede convergence speed, underscoring the importance of maintaining embedding precision. We then propose bidirectional communication-efficient FedS based on an Entity-Wise Top-K Sparsification strategy. During upload, clients dynamically identify and upload only the Top-K entity embeddings with the greatest changes to the server. During download, the server first performs personalized embedding aggregation for each client. It then identifies and transmits the Top-K aggregated embeddings to each client. Besides, an Intermittent Synchronization Mechanism is used by FedS to mitigate the negative effect of embedding inconsistency among shared entities of clients caused by heterogeneity of Federated Knowledge Graph. Extensive experiments across three datasets showcase that FedS significantly enhances communication efficiency with negligible (even no) performance degradation.
false
false
false
false
true
true
true
false
false
false
false
false
false
false
false
false
false
false
465,757
2412.17458
Progressive Boundary Guided Anomaly Synthesis for Industrial Anomaly Detection
Unsupervised anomaly detection methods can identify surface defects in industrial images by leveraging only normal samples for training. Due to the risk of overfitting when learning from a single class, anomaly synthesis strategies are introduced to enhance detection capability by generating artificial anomalies. However, existing strategies heavily rely on anomalous textures from auxiliary datasets. Moreover, their limitations in the coverage and directionality of anomaly synthesis may result in a failure to capture useful information and lead to significant redundancy. To address these issues, we propose a novel Progressive Boundary-guided Anomaly Synthesis (PBAS) strategy, which can directionally synthesize crucial feature-level anomalies without auxiliary textures. It consists of three core components: Approximate Boundary Learning (ABL), Anomaly Feature Synthesis (AFS), and Refined Boundary Optimization (RBO). To make the distribution of normal samples more compact, ABL first learns an approximate decision boundary by center constraint, which improves the center initialization through feature alignment. AFS then directionally synthesizes anomalies with more flexible scales guided by the hypersphere distribution of normal features. Since the boundary is so loose that it may contain real anomalies, RBO refines the decision boundary through the binary classification of artificial anomalies and normal features. Experimental results show that our method achieves state-of-the-art performance and the fastest detection speed on three widely used industrial datasets, including MVTec AD, VisA, and MPDD. The code will be available at: https://github.com/cqylunlun/PBAS.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
519,984
2412.10570
Adaptive Sampling to Reduce Epistemic Uncertainty Using Prediction Interval-Generation Neural Networks
Obtaining high certainty in predictive models is crucial for making informed and trustworthy decisions in many scientific and engineering domains. However, extensive experimentation required for model accuracy can be both costly and time-consuming. This paper presents an adaptive sampling approach designed to reduce epistemic uncertainty in predictive models. Our primary contribution is the development of a metric that estimates potential epistemic uncertainty leveraging prediction interval-generation neural networks. This estimation relies on the distance between the predicted upper and lower bounds and the observed data at the tested positions and their neighboring points. Our second contribution is the proposal of a batch sampling strategy based on Gaussian processes (GPs). A GP is used as a surrogate model of the networks trained at each iteration of the adaptive sampling process. Using this GP, we design an acquisition function that selects a combination of sampling locations to maximize the reduction of epistemic uncertainty across the domain. We test our approach on three unidimensional synthetic problems and a multi-dimensional dataset based on an agricultural field for selecting experimental fertilizer rates. The results demonstrate that our method consistently converges faster to minimum epistemic uncertainty levels compared to Normalizing Flows Ensembles, MC-Dropout, and simple GPs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
516,998
2409.17568
Showing Many Labels in Multi-label Classification Models: An Empirical Study of Adversarial Examples
With the rapid development of Deep Neural Networks (DNNs), they have been applied in numerous fields. However, research indicates that DNNs are susceptible to adversarial examples, and this is equally true in the multi-label domain. To further investigate multi-label adversarial examples, we introduce a novel type of attack, termed "Showing Many Labels". The objective of this attack is to maximize the number of labels included in the classifier's prediction results. In our experiments, we select nine attack algorithms and evaluate their performance under "Showing Many Labels". Eight of the attack algorithms were adapted from the multi-class environment to the multi-label environment, while the remaining one was specifically designed for the multi-label environment. We choose ML-LIW and ML-GCN as target models and train them on four popular multi-label datasets: VOC2007, VOC2012, NUS-WIDE, and COCO. We record the success rate of each algorithm when it shows the expected number of labels in eight different scenarios. Experimental results indicate that under "Showing Many Labels", iterative attacks perform significantly better than one-step attacks. Moreover, it is possible to show all labels in the dataset.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
491,872
2403.11473
Word Order's Impacts: Insights from Reordering and Generation Analysis
Existing works have studied the impact of the order of words within natural text. They usually analyze it by destroying the original order of words to create a scrambled sequence and then comparing the models' performance between the original and scrambled sequences. The experimental results demonstrate marginal drops. Considering these findings, different hypotheses about word order have been proposed, including ``the order of words is redundant with lexical semantics'' and ``models do not rely on word order''. In this paper, we revisit the aforementioned hypotheses by adding an order reconstruction perspective and selecting datasets of different spectra. Specifically, we first select four different datasets, and then design order reconstruction and continuing generation tasks. Empirical findings support that ChatGPT relies on word order to infer, but cannot support or negate the redundancy relation between word order and lexical semantics.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
438,716
2103.09991
A Soft-Aided Staircase Decoder Using Three-Level Channel Reliabilities
The soft-aided bit-marking (SABM) algorithm is based on the idea of marking bits as highly reliable bits (HRBs), highly unreliable bits (HUBs), and uncertain bits to improve the performance of hard-decision (HD) decoders. The HRBs and HUBs are used to assist the HD decoders to prevent miscorrections and to decode those originally uncorrectable cases via bit flipping (BF), respectively. In this paper, an improved SABM algorithm (called iSABM) is proposed for staircase codes (SCCs). Similar to the SABM, iSABM marks bits with the help of channel reliabilities, i.e., using the absolute values of the log-likelihood ratios. The improvements offered by iSABM include: (i) HUBs being classified using a reliability threshold, (ii) BF randomly selecting HUBs, and (iii) soft-aided decoding over multiple SCC blocks. The decoding complexity of iSABM is comparable to that of SABM. This is due to the fact that on the one hand no sorting is required (lower complexity) because of the use of a threshold for HUBs, while on the other hand multiple SCC blocks use soft information (higher complexity). Additional gains of up to 0.53 dB with respect to SABM and 0.91 dB with respect to standard SCC decoding at a bit error rate of $10^{-6}$ are reported. Furthermore, it is shown that using 1-bit reliability marking, i.e., only having HRBs and HUBs, only causes a gain penalty of up to 0.25 dB with a significantly reduced memory requirement.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
225,314
2301.10115
A Robust Hypothesis Test for Tree Ensemble Pruning
Gradient boosted decision trees are some of the most popular algorithms in applied machine learning. They are a flexible and powerful tool that can robustly fit to any tabular dataset in a scalable and computationally efficient way. Among the most critical parameters to tune when fitting these models are the various penalty terms used to distinguish signal from noise in the current model. These penalties are effective in practice, but are lacking in robust theoretical justifications. In this paper we develop and present a novel theoretically justified hypothesis test of split quality for gradient boosted tree ensembles and demonstrate that using this method instead of the common penalty terms leads to a significant reduction in out of sample loss. Additionally, this method provides a theoretically well-justified stopping condition for the tree growing algorithm. We also present several innovative extensions to the method, opening the door for a wide variety of novel tree pruning algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
341,705
2408.16662
Space3D-Bench: Spatial 3D Question Answering Benchmark
Answering questions about the spatial properties of the environment poses challenges for existing language and vision foundation models due to a lack of understanding of the 3D world notably in terms of relationships between objects. To push the field forward, multiple 3D Q&A datasets were proposed which, overall, provide a variety of questions, but they individually focus on particular aspects of 3D reasoning or are limited in terms of data modalities. To address this, we present Space3D-Bench - a collection of 1000 general spatial questions and answers related to scenes of the Replica dataset which offers a variety of data modalities: point clouds, posed RGB-D images, navigation meshes and 3D object detections. To ensure that the questions cover a wide range of 3D objectives, we propose an indoor spatial questions taxonomy inspired by geographic information systems and use it to balance the dataset accordingly. Moreover, we provide an assessment system that grades natural language responses based on predefined ground-truth answers by leveraging a Vision Language Model's comprehension of both text and images to compare the responses with ground-truth textual information or relevant visual data. Finally, we introduce a baseline called RAG3D-Chat integrating the world understanding of foundation models with rich context retrieval, achieving an accuracy of 67% on the proposed dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
484,410
2003.07602
Machine Learning the Phenomenology of COVID-19 From Early Infection Dynamics
We present a robust data-driven machine learning analysis of the COVID-19 pandemic from its early infection dynamics, specifically infection counts over time. The goal is to extract actionable public health insights. These insights include the infectious force, the rate of a mild infection becoming serious, estimates for asymptomatic infections and predictions of new infections over time. We focus on USA data starting from the first confirmed infection on January 20 2020. Our methods reveal significant asymptomatic (hidden) infection, a lag of about 10 days, and we quantitatively confirm that the infectious force is strong with about a 0.14% transition from mild to serious infection. Our methods are efficient, robust and general, being agnostic to the specific virus and applicable to different populations or cohorts.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
168,484
2407.00609
ESGNN: Towards Equivariant Scene Graph Neural Network for 3D Scene Understanding
Scene graphs have been proven to be useful for various scene understanding tasks due to their compact and explicit nature. However, existing approaches often neglect the importance of maintaining the symmetry-preserving property when generating scene graphs from 3D point clouds. This oversight can diminish the accuracy and robustness of the resulting scene graphs, especially when handling noisy, multi-view 3D data. This work, to the best of our knowledge, is the first to implement an Equivariant Graph Neural Network in semantic scene graph generation from 3D point clouds for scene understanding. Our proposed method, ESGNN, outperforms existing state-of-the-art approaches, demonstrating a significant improvement in scene estimation with faster convergence. ESGNN demands low computational resources and is easy to implement from available frameworks, paving the way for real-time applications such as robotics and computer vision.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
468,933
2102.10078
Rapid Multi-Physics Simulation for Electro-Thermal Origami Systems
Electro-thermally actuated origami provides a novel method for creating 3-D systems with advanced morphing and functional capabilities. However, it is currently difficult to simulate the multi-physical behavior of such systems because the electro-thermal actuation and large folding deformations are highly interdependent. In this work, we introduce a rapid multi-physics simulation framework for electro-thermally actuated origami systems that can simultaneously capture: thermo-mechanically coupled actuation, inter panel contact, heat transfer, large deformation folding, and other complex loading applied onto the origami. Comparisons with finite element models validate the proposed framework for simulating origami heat transfer with different system geometries, materials, and surrounding environments. Verification of the simulated folding behaviors against physical electro-thermal micro-origami further demonstrates the validity of the proposed model. Simulations of more complex origami patterns and a case study for origami optimization are provided as application examples to show the capability and efficiency of the model. The framework provides a novel simulation tool for analysis, design, control, and optimization of active origami systems, pushing the boundary for feasible shape morphing and functional capability.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
220,971
1703.07048
Whole-Building Fault Detection: A Scalable Approach Using Spectral Methods
In this paper, an extension to rules-based fault detection is demonstrated utilizing properties of the Koopman operator. The Koopman operator is an infinite-dimensional, linear operator that captures nonlinear, finite dimensional dynamics. The definition of the Koopman operator enables algorithms that can evaluate the magnitude and coincidence of time-series data. Using spectral properties of this operator, diagnostic rule signals generated from building management system (BMS) trend data can be decomposed into components that allow the capture of device behavior at varying time-scales and to a granular level. As it relates to the implementation of fault detection (FDD), this approach creates additional spatial and temporal characterizations of rule signals providing additional data structure and increasing effectiveness with which classification techniques can be applied to the analysis process. The approach permits a knowledge base to be applied in a similar manner to that of a rules-based approach, but the introduced extensions also facilitate the definition of new kinds of diagnostics and overall provide increased analysis potential.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
70,326
2111.01259
Verifying Contracts for Perturbed Control Systems using Linear Programming
Verifying specifications for large-scale control systems is of utmost importance, but can be hard in practice as most formal verification methods can not handle high-dimensional dynamics. Contract theory has been proposed as a modular alternative to formal verification in which specifications are defined by assumptions on the inputs to a component and guarantees on its outputs. In this paper, we present linear-programming-based tools for verifying contracts for control systems. We first consider the problem of verifying contracts defined by time-invariant inequalities for unperturbed systems. We use $k$-induction to show that contract verification can be achieved by considering a collection of implications between inequalities, which are then recast as linear programs. We then move our attention to perturbed systems. We present a comparison-based framework, verifying that a perturbed system satisfies a contract by checking that the corresponding unperturbed system satisfies a robustified (and $\epsilon$-approximated) contract. In both cases, we present explicit algorithms for contract verification, proving their correctness and analyzing their complexity. We also demonstrate the verification process for two case studies, one considering a two-vehicle autonomous driving scenario, and one considering formation control of a multi-agent system.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
264,500
1311.3062
Ants: Mobile Finite State Machines
Consider the Ants Nearby Treasure Search (ANTS) problem introduced by Feinerman, Korman, Lotker, and Sereni (PODC 2012), where $n$ mobile agents, initially placed at the origin of an infinite grid, collaboratively search for an adversarially hidden treasure. In this paper, the model of Feinerman et al. is adapted such that the agents are controlled by a (randomized) finite state machine: they possess a constant-size memory and are able to communicate with each other through constant-size messages. Despite the restriction to constant-size memory, we show that their collaborative performance remains the same by presenting a distributed algorithm that matches a lower bound established by Feinerman et al. on the run-time of any ANTS algorithm.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
28,380
1910.12441
Online News Media Website Ranking Using User Generated Content
News media websites are important online resources that have drawn great attention of text mining researchers. The main aim of this study is to propose a framework for ranking online news websites from different viewpoints. The ranking of news websites is useful information, which can benefit many news-related tasks such as news retrieval and news recommendation. In the proposed framework, the ranking of news websites is obtained by calculating three measures introduced in the paper and based on user-generated content. Each proposed measure is concerned with the performance of news websites from a particular viewpoint including the completeness of news reports, the diversity of events being covered by the website and its speed. The use of user-generated content in this framework, as a partly-unbiased, real-time and low cost content on the web distinguishes the proposed news website ranking framework from the literature. The results obtained for three prominent news websites, BBC, CNN, NYTimes, show that BBC has the best performance in terms of news completeness and speed, and NYTimes has the best diversity in comparison with the other two websites.
false
false
false
true
false
true
false
false
true
false
false
false
false
false
false
false
false
false
151,094
1710.09300
Feature learning in feature-sample networks using multi-objective optimization
Data and knowledge representation are fundamental concepts in machine learning. The quality of the representation impacts the performance of the learning model directly. Feature learning transforms or enhances raw data to structures that are effectively exploited by those models. In recent years, several works have been using complex networks for data representation and analysis. However, no feature learning method has been proposed for such category of techniques. Here, we present an unsupervised feature learning mechanism that works on datasets with binary features. First, the dataset is mapped into a feature--sample network. Then, a multi-objective optimization process selects a set of new vertices to produce an enhanced version of the network. The new features depend on a nonlinear function of a combination of preexisting features. Effectively, the process projects the input data into a higher-dimensional space. To solve the optimization problem, we design two metaheuristics based on the lexicographic genetic algorithm and the improved strength Pareto evolutionary algorithm (SPEA2). We show that the enhanced network contains more information and can be exploited to improve the performance of machine learning methods. The advantages and disadvantages of each optimization strategy are discussed.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
83,186
1905.12056
Information-Theoretic Registration with Explicit Reorientation of Diffusion-Weighted Images
We present an information-theoretic approach to the registration of images with directional information, and especially for Diffusion-Weighted Images (DWI), with explicit optimization over the directional scale. We call it Locally Orderless Registration with Directions (LORD). We focus on normalized mutual information as a robust information-theoretic similarity measure for DWI. The framework is an extension of the LOR-DWI density-based hierarchical scale-space model that varies and optimizes the integration, spatial, directional, and intensity scales. As affine transformations are insufficient for inter-subject registration, we extend the model to non-rigid deformations. We illustrate that the proposed model deforms orientation distribution functions (ODFs) correctly and is capable of handling the classic complex challenges in DWI-registrations, such as the registration of fiber-crossings along with kissing, fanning, and interleaving fibers. Our experimental results clearly illustrate a novel promising regularizing effect, that comes from the nonlinear orientation-based cost function. We show the properties of the different image scales and we show that including orientational information in our model makes the model better at retrieving deformations in contrast to standard scalar-based registration.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
132,632
2412.15301
Parametric $\rho$-Norm Scaling Calibration
Output uncertainty indicates whether the probabilistic properties reflect objective characteristics of the model output. Unlike most loss functions and metrics in machine learning, uncertainty pertains to individual samples, but validating it on individual samples is unfeasible. When validated collectively, it cannot fully represent individual sample properties, posing a challenge in calibrating model confidence in a limited data set. Hence, it is crucial to consider confidence calibration characteristics. To counter the adverse effects of the gradual amplification of the classifier output amplitude in supervised learning, we introduce a post-processing parametric calibration method, $\rho$-Norm Scaling, which expands the calibrator expression and mitigates overconfidence due to excessive amplitude while preserving accuracy. Moreover, bin-level objective-based calibrator optimization often results in the loss of significant instance-level information. Therefore, we include probability distribution regularization, which incorporates specific priori information that the instance-level uncertainty distribution after calibration should resemble the distribution before calibration. Experimental results demonstrate the substantial enhancement in the post-processing calibrator for uncertainty calibration with our proposed method.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
519,052
1908.04052
Sentence Specified Dynamic Video Thumbnail Generation
With the tremendous growth of videos over the Internet, video thumbnails, providing video content previews, are becoming increasingly crucial to influencing users' online searching experiences. Conventional video thumbnails are generated once purely based on the visual characteristics of videos, and then displayed as requested. Hence, such video thumbnails, without considering the users' searching intentions, cannot provide a meaningful snapshot of the video contents that users concern. In this paper, we define a distinctively new task, namely sentence specified dynamic video thumbnail generation, where the generated thumbnails not only provide a concise preview of the original video contents but also dynamically relate to the users' searching intentions with semantic correspondences to the users' query sentences. To tackle such a challenging task, we propose a novel graph convolved video thumbnail pointer (GTP). Specifically, GTP leverages a sentence specified video graph convolutional network to model both the sentence-video semantic interaction and the internal video relationships incorporated with the sentence information, based on which a temporal conditioned pointer network is then introduced to sequentially generate the sentence specified video thumbnails. Moreover, we annotate a new dataset based on ActivityNet Captions for the proposed new task, which consists of 10,000+ video-sentence pairs with each accompanied by an annotated sentence specified video thumbnail. We demonstrate that our proposed GTP outperforms several baseline methods on the created dataset, and thus believe that our initial results along with the release of the new dataset will inspire further research on sentence specified dynamic video thumbnail generation. Dataset and code are available at https://github.com/yytzsy/GTP.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
141,398
1705.00601
The Promise of Premise: Harnessing Question Premises in Visual Question Answering
In this paper, we make a simple observation that questions about images often contain premises - objects and relationships implied by the question - and that reasoning about premises can help Visual Question Answering (VQA) models respond more intelligently to irrelevant or previously unseen questions. When presented with a question that is irrelevant to an image, state-of-the-art VQA models will still answer purely based on learned language biases, resulting in non-sensical or even misleading answers. We note that a visual question is irrelevant to an image if at least one of its premises is false (i.e. not depicted in the image). We leverage this observation to construct a dataset for Question Relevance Prediction and Explanation (QRPE) by searching for false premises. We train novel question relevance detection models and show that models that reason about premises consistently outperform models that do not. We also find that forcing standard VQA models to reason about premises during training can lead to improvements on tasks requiring compositional reasoning.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
72,715
2012.09090
You Are What You Tweet: Profiling Users by Past Tweets to Improve Hate Speech Detection
Hate speech detection research has predominantly focused on purely content-based methods, without exploiting any additional context. We briefly critique pros and cons of this task formulation. We then investigate profiling users by their past utterances as an informative prior to better predict whether new utterances constitute hate speech. To evaluate this, we augment three Twitter hate speech datasets with additional timeline data, then embed this additional context into a strong baseline model. Promising results suggest merit for further investigation, though analysis is complicated by differences in annotation schemes and processes, as well as Twitter API limitations and data sharing policies.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
211,957
2306.09182
Rolling control and dynamics model of two section articulated-wing ornithopter
This paper presents a new rolling control mechanism for a two-section articulated-wing ornithopter, which is analogous to aileron control in an airplane; however, the similar control mechanism leads to the opposite result, meaning the ornithopter that should go left goes right instead. This research gives a qualitative dynamics model which explains this new phenomenon. Because of wing folding, the differential rotation of the outer-section wings (analogous to ailerons in a plane, where left aileron up and right aileron down makes a left turn) around the pitch axis becomes a common-mode rotation around the yaw axis, causing the rotating torque to change from a left-handed rotation (using left-handed as the example; right-handed is the same) around the roll axis to a common-mode force pointing in the front-right (northeast, NE) direction from the first-person view of the ornithopter. Because most of the flapping movement is in the upper hemisphere from the ornithopter's view, the NE force acts above the center of mass of the ornithopter, generating a right-handed moment around the roll axis. Therefore, the ornithopter that should go left goes right. This phenomenon is unique and has so far only been observed in two-section articulated-wing ornithopters. Many field tests conducted by the authors confirm that it is highly repeatable.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
373,706
1911.02436
On Data-Processing and Majorization Inequalities for $f$-Divergences with Applications
This paper is focused on derivations of data-processing and majorization inequalities for $f$-divergences, and their applications in information theory and statistics. For the accessibility of the material, the main results are first introduced without proofs, followed by exemplifications of the theorems with further related analytical results, interpretations, and information-theoretic applications. One application refers to the performance analysis of list decoding with either fixed or variable list sizes; some earlier bounds on the list decoding error probability are reproduced in a unified way, and new bounds are obtained and exemplified numerically. Another application is related to a study of the quality of approximating a probability mass function, induced by the leaves of a Tunstall tree, by an equiprobable distribution. The compression rates of finite-length Tunstall codes are further analyzed for asserting their closeness to the Shannon entropy of a memoryless and stationary discrete source. Almost all the analysis is relegated to the appendices, which form a major part of this manuscript.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
152,360
1805.10994
Long-term Large-scale Mapping and Localization Using maplab
This paper discusses a large-scale and long-term mapping and localization scenario using the maplab open-source framework. We present a brief overview of the specific algorithms in the system that enable building a consistent map from multiple sessions. We then demonstrate that such a map can be reused even a few months later for efficient 6-DoF localization and also new trajectories can be registered within the existing 3D model. The datasets presented in this paper are made publicly available.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
98,818
2112.15475
Shift-Equivariant Similarity-Preserving Hypervector Representations of Sequences
Hyperdimensional Computing (HDC), also known as Vector-Symbolic Architectures (VSA), is a promising framework for the development of cognitive architectures and artificial intelligence systems, as well as for technical applications and emerging neuromorphic and nanoscale hardware. HDC/VSA operate with hypervectors, i.e., distributed vector representations of large fixed dimension (usually > 1000). One of the key ingredients of HDC/VSA are the methods for encoding data of various types (from numeric scalars and vectors to graphs) into hypervectors. In this paper, we propose an approach for the formation of hypervectors of sequences that provides both an equivariance with respect to the shift of sequences and preserves the similarity of sequences with identical elements at nearby positions. Our methods represent the sequence elements by compositional hypervectors and exploit permutations of hypervectors for representing the order of sequence elements. We experimentally explored the proposed representations using a diverse set of tasks with data in the form of symbolic strings. Although our approach is feature-free as it forms the hypervector of a sequence from the hypervectors of its symbols at their positions, it demonstrated the performance on a par with the methods that apply various features, such as subsequences. The proposed techniques were designed for the HDC/VSA model known as Sparse Binary Distributed Representations. However, they can be adapted to hypervectors in formats of other HDC/VSA models, as well as for representing sequences of types other than symbolic strings.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
false
false
273,797
1805.09864
Inverse Rational Control: Inferring What You Think from How You Forage
Complex behaviors are often driven by an internal model, which integrates sensory information over time and facilitates long-term planning. Inferring an agent's internal model is a crucial ingredient in social interactions (theory of mind), for imitation learning, and for interpreting neural activities of behaving agents. Here we describe a generic method to model an agent's behavior under an environment with uncertainty, and infer the agent's internal model, reward function, and dynamic beliefs. We apply our method to a simulated agent performing a naturalistic foraging task. We assume the agent behaves rationally --- that is, they take actions that optimize their subjective utility according to their understanding of the task and its relevant causal variables. We model this rational solution as a Partially Observable Markov Decision Process (POMDP) where the agent may make wrong assumptions about the task parameters. Given the agent's sensory observations and actions, we learn its internal model and reward function by maximum likelihood estimation over a set of task-relevant parameters. The Markov property of the POMDP enables us to characterize the transition probabilities between internal belief states and iteratively estimate the agent's policy using a constrained Expectation-Maximization (EM) algorithm. We validate our method on simulated agents performing suboptimally on a foraging task currently used in many neuroscience experiments, and successfully recover their internal model and reward function. Our work lays a critical foundation to discover how the brain represents and computes with dynamic beliefs.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
98,511
2005.07277
SUPER: A Novel Lane Detection System
AI-based lane detection algorithms were actively studied over the last few years. Many have demonstrated superior performance compared with traditional feature-based methods. The accuracy, however, is still generally in the low 80% or high 90%, or even lower when challenging images are used. In this paper, we propose a real-time lane detection system, called Scene Understanding Physics-Enhanced Real-time (SUPER) algorithm. The proposed method consists of two main modules: 1) a hierarchical semantic segmentation network as the scene feature extractor and 2) a physics enhanced multi-lane parameter optimization module for lane inference. We train the proposed system using heterogeneous data from Cityscapes, Vistas and Apollo, and evaluate the performance on four completely separate datasets (that were never seen before), including Tusimple, Caltech, URBAN KITTI-ROAD, and X-3000. The proposed approach performs the same or better than lane detection models already trained on the same dataset and performs well even on datasets it was never trained on. Real-world vehicle tests were also conducted. Preliminary test results show promising real-time lane-detection performance compared with the Mobileye.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
177,235
1912.07833
Unpaired Image Enhancement Featuring Reinforcement-Learning-Controlled Image Editing Software
This paper tackles unpaired image enhancement, the task of learning a mapping function that transforms input images into enhanced images in the absence of input-output image pairs. Our method is based on generative adversarial networks (GANs), but instead of simply generating images with a neural network, we enhance images using image editing software such as Adobe Photoshop, which offers three benefits: enhanced images have no artifacts, the same enhancement can be applied to larger images, and the enhancement is interpretable. To incorporate image editing software into a GAN, we propose a reinforcement learning framework in which the generator works as the agent that selects the software's parameters and is rewarded when it fools the discriminator. Our framework can use the high-quality non-differentiable filters present in image editing software, which enables image enhancement with high performance. We apply the proposed method to two unpaired image enhancement tasks: photo enhancement and face beautification. Our experimental results demonstrate that the proposed method achieves better performance compared with state-of-the-art methods based on unpaired learning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
157,702
1806.06595
Uncertainty in multitask learning: joint representations for probabilistic MR-only radiotherapy planning
Multi-task neural network architectures provide a mechanism that jointly integrates information from distinct sources. They are ideal in the context of MR-only radiotherapy planning, as they can jointly regress a synthetic CT (synCT) scan and segment organs-at-risk (OAR) from MRI. We propose a probabilistic multi-task network that estimates: 1) intrinsic uncertainty through a heteroscedastic noise model for spatially-adaptive task loss weighting and 2) parameter uncertainty through approximate Bayesian inference. This allows sampling of multiple segmentations and synCTs that share their network representation. We test our model on prostate cancer scans and show that it produces more accurate and consistent synCTs with better estimation of the error variance, state-of-the-art results in OAR segmentation, and a methodology for quality assurance in radiotherapy treatment planning.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
100,734