Dataset schema (one record per paper):
- id: string (length 9 to 16)
- title: string (length 4 to 278)
- abstract: string (length 3 to 4.08k)
- cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool (2 classes each), one column per category label
- __index_level_0__: int64 (range 0 to 541k)

The records below follow this schema.
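The boolean label columns can be collapsed into a per-record label list. A minimal sketch in Python, assuming each record is a plain dict keyed by the column names above (the sample row is hypothetical, mirroring the first record below):

```python
# Column names of the boolean labels, in the schema's order.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def labels_from_row(row: dict) -> list[str]:
    """Return the names of the label columns that are True for this row."""
    return [name for name in LABEL_COLUMNS if row.get(name)]

# Hypothetical row shaped like record 2210.03762, where cs.AI and cs.LG are True.
row = {name: False for name in LABEL_COLUMNS}
row.update({"cs.AI": True, "cs.LG": True})
print(labels_from_row(row))  # → ['cs.AI', 'cs.LG']
```

The comprehension preserves the schema's column order, so the label list is deterministic regardless of dict construction order.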
2210.03762
Trustworthiness of Laser-Induced Breakdown Spectroscopy Predictions via Simulation-based Synthetic Data Augmentation and Multitask Learning
We consider quantitative analyses of spectral data using laser-induced breakdown spectroscopy. We address the small size of the available training data and the validation of predictions during inference on unknown data. To this end, we build robust calibration models using deep convolutional multitask learning architectures to predict the concentration of the analyte, alongside additional spectral information as auxiliary outputs. These secondary predictions can be used to validate the trustworthiness of the model by taking advantage of the mutual dependencies of the parameters of the multitask neural networks. Due to the lack of experimental training samples, we introduce a simulation-based data augmentation process to synthesise an arbitrary number of spectra that are statistically representative of the experimental data. Given the nature of the deep learning model, no dimensionality reduction or data selection processes are required. The procedure is an end-to-end pipeline comprising the synthetic data augmentation process, the construction of a suitable robust, homoscedastic, deep learning model, and the validation of its predictions. In the article, we compare the performance of the multitask model with traditional univariate and multivariate analyses to highlight the separate contributions of each element introduced in the process.
Labels: cs.AI, cs.LG | __index_level_0__: 322,161
1202.2622
A Model for Web Page Usage Mining Based on Segmentation
Web page usage mining plays a vital role in enriching a page's content and structure based on the feedback received from users' interactions with the page. This paper proposes a model for micro-managing tracking activities by fine-tuning the mining from the page level to the segment level. The proposed model enables the webmaster to identify the segments which receive more focus from users compared with others. The segment-level analytics of user actions provide an important metric for analysing the factors which facilitate the increase in traffic for the page. The empirical validation of the model is performed through a prototype implementation.
Labels: cs.IR | __index_level_0__: 14,296
1908.06258
Language Graph Distillation for Low-Resource Machine Translation
Neural machine translation on low-resource language is challenging due to the lack of bilingual sentence pairs. Previous works usually solve the low-resource translation problem with knowledge transfer in a multilingual setting. In this paper, we propose the concept of Language Graph and further design a novel graph distillation algorithm that boosts the accuracy of low-resource translations in the graph with forward and backward knowledge distillation. Preliminary experiments on the TED talks multilingual dataset demonstrate the effectiveness of our proposed method. Specifically, we improve the low-resource translation pair by more than 3.13 points in terms of BLEU score.
Labels: cs.CL | __index_level_0__: 141,951
2406.17745
Light-weight End-to-End Graph Interest Network for CTR Prediction in E-commerce Search
Click-through-rate (CTR) prediction has an essential impact on improving user experience and revenue in e-commerce search. With the development of deep learning, graph-based methods are well exploited to utilize graph structure extracted from user behaviors and other information to help embedding learning. However, most of the previous graph-based methods mainly focus on recommendation scenarios, and therefore their graph structures highly depend on item's sequential information from user behaviors, ignoring query's sequential signal and query-item correlation. In this paper, we propose a new approach named Light-weight End-to-End Graph Interest Network (EGIN) to effectively mine users' search interests and tackle previous challenges. (i) EGIN utilizes query and item's correlation and sequential information from the search system to build a heterogeneous graph for better CTR prediction in e-commerce search. (ii) EGIN's graph embedding learning shares the same training input and is jointly trained with CTR prediction, making the end-to-end framework effortless to deploy in large-scale search systems. The proposed EGIN is composed of three parts: query-item heterogeneous graph, light-weight graph sampling, and multi-interest network. The query-item heterogeneous graph captures correlation and sequential information of query and item efficiently by the proposed light-weight graph sampling. The multi-interest network is well designed to utilize graph embedding to capture various similarity relationships between query and item to enhance the final CTR prediction. We conduct extensive experiments on both public and industrial datasets to demonstrate the effectiveness of the proposed EGIN. At the same time, the training cost of graph learning is relatively low compared with the main CTR prediction task, ensuring efficiency in practical applications.
Labels: cs.IR, cs.LG | __index_level_0__: 467,702
2305.18774
Bayesian Decision Trees Inspired from Evolutionary Algorithms
Bayesian Decision Trees (DTs) are generally considered a more advanced and accurate model than a regular Decision Tree (DT) because they can handle complex and uncertain data. Existing work on Bayesian DTs uses Markov Chain Monte Carlo (MCMC) with an accept-reject mechanism and samples using naive proposals to proceed to the next iteration, which can be slow because of the burn-in time needed. We can reduce the burn-in period by proposing a more sophisticated way of sampling or by designing a different numerical Bayesian approach. In this paper, we propose replacing MCMC with an inherently parallel algorithm, Sequential Monte Carlo (SMC), and a more effective sampling strategy inspired by Evolutionary Algorithms (EA). Experiments show that SMC combined with the EA can produce more accurate results than MCMC in 100 times fewer iterations.
Labels: cs.LG, cs.NE | __index_level_0__: 369,236
2311.16832
CharacterGLM: Customizing Chinese Conversational AI Characters with Large Language Models
In this paper, we present CharacterGLM, a series of models built upon ChatGLM, with model sizes ranging from 6B to 66B parameters. Our CharacterGLM is designed for generating Character-based Dialogues (CharacterDial), which aims to equip a conversational AI system with character customization for satisfying people's inherent social desires and emotional needs. On top of CharacterGLM, we can customize various AI characters or social agents by configuring their attributes (identities, interests, viewpoints, experiences, achievements, social relationships, etc.) and behaviors (linguistic features, emotional expressions, interaction patterns, etc.). Our model outperforms most mainstream closed-source large language models, including the GPT series, especially in terms of consistency, human-likeness, and engagement according to manual evaluations. We will release our 6B version of CharacterGLM and a subset of training data to facilitate further research development in the direction of character-based dialogue generation.
Labels: cs.AI, cs.CL | __index_level_0__: 411,058
1804.05667
Evolution of the Chinese Guarantee Network under Financial Crisis and Stimulus Program
Our knowledge about the evolution of guarantee network in downturn period is limited due to the lack of comprehensive data of the whole credit system. Here we analyze the dynamic Chinese guarantee network constructed from a comprehensive bank loan dataset that accounts for nearly 80% total loans in China, during 01/2007-03/2012. The results show that, first, during the 2007-2008 global financial crisis, the guarantee network became smaller, less connected and more stable because of many bankruptcies; second, the stimulus program encouraged mutual guarantee behaviors, resulting in highly reciprocal and fragile network structure; third, the following monetary policy adjustment enhanced the resilience of the guarantee network by reducing mutual guarantees. Interestingly, our work reveals that the financial crisis made the network more resilient, and conversely, the government bailout degenerated network resilience. These counterintuitive findings can provide new insight into the resilience of real-world credit system under external shocks or rescues.
Labels: cs.CE | __index_level_0__: 95,124
1911.09467
Event-triggered Add-on Safety for Connected and Automated Vehicles Using Road-side Network Infrastructure
This paper proposes an event-triggered add-on safety mechanism to adjust the control parameters for timely braking in a networked vehicular system while maintaining maneuverability. Passenger vehicle maneuverability is significantly affected by the combined-slip friction effect, in which larger longitudinal tire slips result in a considerable drop in lateral tire forces. This is of higher importance when unexpected dangerous situations occur on the road and immediate actions, such as braking, need to be taken to avoid collision. Harsh braking can lead to high slip and loss of maneuverability; hence, timely braking is essential to reduce high-slip scenarios. In addition to the vehicle's own active safety systems, the proposed event-triggered add-on safety is activated upon being informed about dangers by the road-side infrastructure. The aim is to incorporate the add-on safety feature to adjust the automatic control parameters for smooth and timely braking such that a collision is avoided while the vehicle's maneuverability is maintained. We study two different wireless technologies for communication between the infrastructure and the vehicles: Long-Term Evolution (LTE) and the fifth-generation (5G) scheme. The framework is validated through high-fidelity software simulations, and the advantages of including the add-on feature to augment the safety margins for each communication technology are evaluated.
Labels: cs.SY | __index_level_0__: 154,530
2402.01525
Non-Linear Analog Processing Gains in Task-Based Quantization
In task-based quantization, a multivariate analog signal is transformed into a digital signal using a limited number of low-resolution analog-to-digital converters (ADCs). This process aims to minimize a fidelity criterion, which is assessed against an unobserved task variable that is correlated with the analog signal. The scenario models various applications of interest such as channel estimation, medical imaging applications, and object localization. This work explores the integration of analog processing components -- such as analog delay elements, polynomial operators, and envelope detectors -- prior to ADC quantization. Specifically, four scenarios, involving different collections of analog processing operators are considered: (i) arbitrary polynomial operators with analog delay elements, (ii) limited-degree polynomial operators, excluding delay elements, (iii) sequences of envelope detectors, and (iv) a combination of analog delay elements and linear combiners. For each scenario, the minimum achievable distortion is quantified through derivation of computable expressions in various statistical settings. It is shown that analog processing can significantly reduce the distortion in task reconstruction. Numerical simulations in a Gaussian example are provided to give further insights into the aforementioned analog processing gains.
Labels: cs.IT | __index_level_0__: 426,066
2307.16530
Extraction of Road Users' Behavior From Realistic Data According to Assumptions in Safety-Related Models for Automated Driving Systems
In this work, we utilized the methodology outlined in the IEEE Standard 2846-2022 for "Assumptions in Safety-Related Models for Automated Driving Systems" to extract information on the behavior of other road users in driving scenarios. This method includes defining high-level scenarios, determining kinematic characteristics, evaluating safety relevance, and making assumptions on reasonably predictable behaviors. The assumptions were expressed as kinematic bounds. The numerical values for these bounds were extracted using Python scripts to process realistic data from the UniD dataset. The resulting information enables Automated Driving Systems designers to specify the parameters and limits of a road user's state in a specific scenario. This information can be utilized to establish starting conditions for testing a vehicle that is equipped with an Automated Driving System in simulations or on actual roads.
Labels: cs.RO | __index_level_0__: 382,640
2404.06715
Sparse Points to Dense Clouds: Enhancing 3D Detection with Limited LiDAR Data
3D detection is a critical task that enables machines to identify and locate objects in three-dimensional space. It has a broad range of applications in several fields, including autonomous driving, robotics and augmented reality. Monocular 3D detection is attractive as it requires only a single camera; however, it lacks the accuracy and robustness required for real-world applications. High-resolution LiDAR, on the other hand, can be expensive and lead to interference problems in heavy traffic given its active transmissions. We propose a balanced approach that combines the advantages of monocular and point cloud-based 3D detection. Our method requires only a small number of 3D points, which can be obtained from a low-cost, low-resolution sensor. Specifically, we use only 512 points, which is just 1% of a full LiDAR frame in the KITTI dataset. Our method reconstructs a complete 3D point cloud from this limited 3D information combined with a single image. The reconstructed 3D point cloud and corresponding image can be used by any multi-modal off-the-shelf detector for 3D object detection. By using the proposed network architecture with an off-the-shelf multi-modal 3D detector, the accuracy of 3D detection improves by 20% compared to the state-of-the-art monocular detection methods and 6% to 9% compared to the baseline multi-modal methods on the KITTI and JackRabbot datasets.
Labels: cs.CV | __index_level_0__: 445,570
2007.12264
Anisotropic dual-continuum representations for multiscale poroelastic materials: Development and numerical modelling
Dual-continuum (DC) models can be tractable alternatives to explicit approaches for the numerical modelling of multiscale materials with multiphysics behaviours. This work concerns the conceptual and numerical modelling of poroelastically coupled dual-scale materials such as naturally fractured rock. Apart from a few exceptions, previous poroelastic DC models have assumed isotropy of the constituents and the dual-material. Additionally, it is common to assume that only one continuum has intrinsic stiffness properties. Finally, little has been done to validate whether the DC paradigm can capture the global poroelastic behaviours of explicit numerical representations at the DC modelling scale. We address the aforementioned knowledge gaps in two steps. First, we utilise a homogenisation approach based on Levin's theorem to develop a previously derived anisotropic poroelastic constitutive model. Our development incorporates anisotropic intrinsic stiffness properties of both continua. This addition is in analogy to anisotropic fractured rock masses with stiff fractures. Second, we perform numerical modelling to test the dual-continuum model against fine-scale explicit equivalents. In doing so, we present our hybrid numerical framework, as well as the conditions required for interpretation of the numerical results. The tests themselves progress from materials with isotropic to anisotropic mechanical and flow properties. The fine-scale simulations show anisotropy can have noticeable effects on deformation and flow behaviour. However, our numerical experiments show the DC approach can capture the global poroelastic behaviours of both isotropic and anisotropic fine-scale representations.
Labels: cs.CE | __index_level_0__: 188,771
2104.12842
End-to-end grasping policies for human-in-the-loop robots via deep reinforcement learning
State-of-the-art human-in-the-loop robot grasping suffers greatly from Electromyography (EMG) inference robustness issues. As a workaround, researchers have been looking into integrating EMG with other signals, often in an ad hoc manner. In this paper, we present a method for end-to-end training of a policy for human-in-the-loop robot grasping on real reaching trajectories. For this purpose we use Reinforcement Learning (RL) and Imitation Learning (IL) in DEXTRON (DEXTerity enviRONment), a stochastic simulation environment with real human trajectories that are augmented and selected using a Monte Carlo (MC) simulation method. We also offer a success model which, once trained on the expert policy data and the RL policy roll-out transitions, can provide transparency into how the deep policy works and when it is likely to fail.
Labels: cs.AI, cs.RO | __index_level_0__: 232,328
2310.09275
Understanding and Modeling the Effects of Task and Context on Drivers' Gaze Allocation
To further advance driver monitoring and assistance systems, it is important to understand how drivers allocate their attention, in other words, where do they tend to look and why. Traditionally, factors affecting human visual attention have been divided into bottom-up (involuntary attraction to salient regions) and top-down (driven by the demands of the task being performed). Although both play a role in directing drivers' gaze, most of the existing models for drivers' gaze prediction apply techniques developed for bottom-up saliency and do not consider influences of the drivers' actions explicitly. Likewise, common driving attention benchmarks lack relevant annotations for drivers' actions and the context in which they are performed. Therefore, to enable analysis and modeling of these factors for drivers' gaze prediction, we propose the following: 1) we correct the data processing pipeline used in DR(eye)VE to reduce noise in the recorded gaze data; 2) we then add per-frame labels for driving task and context; 3) we benchmark a number of baseline and SOTA models for saliency and driver gaze prediction and use new annotations to analyze how their performance changes in scenarios involving different tasks; and, lastly, 4) we develop a novel model that modulates drivers' gaze prediction with explicit action and context information. While reducing noise in the DR(eye)VE gaze data improves results of all models, we show that using task information in our proposed model boosts performance even further compared to bottom-up models on the cleaned up data, both overall (by 24% KLD and 89% NSS) and on scenarios that involve performing safety-critical maneuvers and crossing intersections (by up to 10--30% KLD). Extended annotations and code are available at https://github.com/ykotseruba/SCOUT.
Labels: cs.CV | __index_level_0__: 399,717
2204.01722
Performance Portable Solid Mechanics via Matrix-Free $p$-Multigrid
Finite element analysis of solid mechanics is a foundational tool of modern engineering, with low-order finite element methods and assembled sparse matrices representing the industry standard for implicit analysis. We use performance models and numerical experiments to demonstrate that high-order methods greatly reduce the costs to reach engineering tolerances while enabling effective use of GPUs; these data structures also offer up to 2x benefit for linear elements. We demonstrate the reliability, efficiency, and scalability of matrix-free $p$-multigrid methods with algebraic multigrid coarse solvers through large deformation hyperelastic simulations of multiscale structures. We investigate accuracy, cost, and execution time on multi-node CPU and GPU systems for moderate to large models (millions to billions of degrees of freedom) using AMD MI250X (OLCF Crusher), NVIDIA A100 (NERSC Perlmutter), and V100 (LLNL Lassen and OLCF Summit), resulting in order of magnitude efficiency improvements over a broad range of model properties and scales. We discuss efficient matrix-free representation of Jacobians and demonstrate how automatic differentiation enables rapid development of nonlinear material models without impacting debuggability and workflows targeting GPUs. The methods are broadly applicable and amenable to common workflows, presented here via open source libraries that encapsulate all GPU-specific aspects and are accessible to both new and legacy code, allowing application code to be GPU-oblivious without compromising end-to-end performance on GPUs.
Labels: cs.CE, Other | __index_level_0__: 289,708
1303.0582
Multiple Kernel Sparse Representations for Supervised and Unsupervised Learning
In complex visual recognition tasks it is typical to adopt multiple descriptors, that describe different aspects of the images, for obtaining an improved recognition performance. Descriptors that have diverse forms can be fused into a unified feature space in a principled manner using kernel methods. Sparse models that generalize well to the test data can be learned in the unified kernel space, and appropriate constraints can be incorporated for application in supervised and unsupervised learning. In this paper, we propose to perform sparse coding and dictionary learning in the multiple kernel space, where the weights of the ensemble kernel are tuned based on graph-embedding principles such that class discrimination is maximized. In our proposed algorithm, dictionaries are inferred using multiple levels of 1-D subspace clustering in the kernel space, and the sparse codes are obtained using a simple levelwise pursuit scheme. Empirical results for object recognition and image clustering show that our algorithm outperforms existing sparse coding based approaches, and compares favorably to other state-of-the-art methods.
Labels: cs.CV | __index_level_0__: 22,590
1806.05259
Analysis of Search Stratagem Utilisation
In Interactive IR, researchers consider user behaviour towards systems and search tasks in order to adapt search results and to improve the search experience of users. Analysing the users' past interactions with the system is one typical approach. In this paper, we analyse user behaviour in retrieval sessions with respect to Marcia Bates' search stratagems, such as Footnote Chasing, Citation Searching, Keyword Searching, Author Searching and Journal Run, in a real-life academic search engine. In fact, search stratagems represent high-level search behaviour, as users go beyond the simple execution of queries and investigate more of the system's functionalities. We performed analyses of these five search stratagems using two datasets extracted from the social sciences search engine sowiport. A specific focus was the detection of the search phase and the frequency of usage of these stratagems. In addition, we explored the impact of these stratagems on the performance of the whole search process. We mainly addressed the observation of the stratagems' usage patterns and their impact on the conduct of retrieval sessions, and explored whether they are used similarly in both datasets. From the observations and metrics proposed, we can conclude that the utilisation of search stratagems in real retrieval sessions leads to an improvement of precision in terms of positive interactions. However, the difference is that Footnote Chasing, Citation Searching and Journal Run appear mostly at the end of a session, while Keyword and Author Searching appear typically at the beginning. Thus, we can conclude from the log analysis that the improvement of search functionalities, including personalisation and/or recommendation, could be achieved by considering references, citations, and journals in the ranking process.
Labels: cs.IR, Other | __index_level_0__: 100,430
2306.03782
Non-parametric Probabilistic Time Series Forecasting via Innovations Representation
Probabilistic time series forecasting predicts the conditional probability distributions of the time series at a future time given past realizations. Such techniques are critical in risk-based decision-making and planning under uncertainties. Existing approaches are primarily based on parametric or semi-parametric time-series models that are restrictive, difficult to validate, and challenging to adapt to varying conditions. This paper proposes a nonparametric method based on the classic notion of {\em innovations} pioneered by Norbert Wiener and Gopinath Kallianpur that causally transforms a nonparametric random process into an independent and identically distributed uniform {\em innovations process}. We present a machine-learning architecture and a learning algorithm that circumvent two limitations of the original Wiener-Kallianpur innovations representation: (i) the need for known probability distributions of the time series and (ii) the existence of a causal decoder that reproduces the original time series from the innovations representation. We develop a deep-learning approach and a Monte Carlo sampling technique to obtain a generative model for the predicted conditional probability distribution of the time series based on a weak notion of the Wiener-Kallianpur innovations representation. The efficacy of the proposed probabilistic forecasting technique is demonstrated on a variety of electricity price datasets, showing marked improvement over leading benchmarks of probabilistic forecasting techniques.
Labels: cs.LG | __index_level_0__: 371,475
2203.06920
DS3-Net: Difficulty-perceived Common-to-T1ce Semi-Supervised Multimodal MRI Synthesis Network
Contrast-enhanced T1 (T1ce) is one of the most essential magnetic resonance imaging (MRI) modalities for diagnosing and analyzing brain tumors, especially gliomas. In clinical practice, common MRI modalities such as T1, T2, and fluid attenuation inversion recovery are relatively easy to access, while T1ce is more challenging considering the additional cost and potential risk of allergies to the contrast agent. Therefore, it is of great clinical necessity to develop a method to synthesize T1ce from other common modalities. Current paired image translation methods typically have the issue of requiring a large amount of paired data and do not focus on specific regions of interest, e.g., the tumor region, in the synthesization process. To address these issues, we propose a Difficulty-perceived common-to-T1ce Semi-Supervised multimodal MRI Synthesis network (DS3-Net), involving both paired and unpaired data together with dual-level knowledge distillation. DS3-Net predicts a difficulty map to progressively promote the synthesis task. Specifically, a pixelwise constraint and a patchwise contrastive constraint are guided by the predicted difficulty map. Through extensive experiments on the publicly available BraTS2020 dataset, DS3-Net outperforms its supervised counterpart in each respect. Furthermore, with only 5% paired data, the proposed DS3-Net achieves competitive performance with state-of-the-art image translation methods utilizing 100% paired data, delivering an average SSIM of 0.8947 and an average PSNR of 23.60.
Labels: cs.CV | __index_level_0__: 285,272
1909.11591
Modular Deep Reinforcement Learning with Temporal Logic Specifications
We propose an actor-critic, model-free, and online Reinforcement Learning (RL) framework for continuous-state continuous-action Markov Decision Processes (MDPs) when the reward is highly sparse but encompasses a high-level temporal structure. We represent this temporal structure by a finite-state machine and construct an on-the-fly synchronised product with the MDP and the finite machine. The temporal structure acts as a guide for the RL agent within the product, where a modular Deep Deterministic Policy Gradient (DDPG) architecture is proposed to generate a low-level control policy. We evaluate our framework in a Mars rover experiment and we present the success rate of the synthesised policy.
Labels: cs.AI, cs.LG, cs.SY, Other | __index_level_0__: 146,859
2311.07874
Towards Transaction as a Service
This paper argues for decoupling transaction processing from existing two-layer cloud-native databases and making transaction processing an independent service. By building a transaction-as-a-service (TaaS) layer, transaction processing can be independently scaled for high resource utilization and independently upgraded for development agility. Accordingly, we architect an execution-transaction-storage three-layer cloud-native database. By connecting to TaaS, 1) AP engines can be empowered with ACID TP capability, 2) multiple standalone TP engine instances can be incorporated to support multi-master distributed TP for horizontal scalability, 3) multiple execution engines with different data models can be integrated to support multi-model transactions, and 4) high-performance TP is achieved through extensive TaaS optimizations and consistent evolution. Cloud-native databases deserve better architecture: we believe that TaaS provides a path forward to better cloud-native databases.
Labels: cs.DB | __index_level_0__: 407,499
2411.06511
Time-delayed Dynamic Mode Decomposition for families of periodic trajectories in Cislunar Space
In recent years, the development of the Lunar Gateway and Artemis missions has renewed interest in lunar exploration, including both manned and unmanned missions. This interest necessitates accurate initial orbit determination (IOD) and orbit prediction (OP) in this domain, which faces significant challenges such as severe nonlinearity, sensitivity to initial conditions, large state-space volume, and sparse, faint, and unreliable measurements. This paper explores the capability of data-driven Koopman operator-based approximations for OP in these scenarios. Three stable periodic trajectories from distinct cislunar families are analyzed. The analysis includes theoretical justification for using a linear time-invariant system as the data-driven surrogate. This theoretical framework is supported by experimental validation. Furthermore, the accuracy is assessed by comparing the spectral content captured to period estimates derived from the fast Fourier transform (FFT) and Poincaré-like sections.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
507,144
cs/0206041
Anticipatory Guidance of Plot
An anticipatory system for guiding plot development in interactive narratives is described. The executable model is a finite automaton that provides the implemented system with a look-ahead. The identification of undesirable future states in the model is used to guide the player, in a transparent manner. In this way, too radical twists of the plot can be avoided. Since the player participates in the development of the plot, such guidance can have many forms, depending on the environment of the player, on the behavior of the other players, and on the means of player interaction. We present a design method for interactive narratives which produces designs suitable for the implementation of anticipatory mechanisms. Use of the method is illustrated by application to our interactive computer game Kaktus.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
537,621
2210.14498
The Biscari Archive. A case study of the application of Transkribus tool
The Paterno' Castello Principi di Biscari Archive, preserved at the State Archives of Catania and one of the most important family archives, is, in light of a digital historical methodology, the best computable historical heritage for demonstrating the applicability of an HTR tool, such as Transkribus, to digitised historical documents.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
326,573
2306.05827
Towards the Exploitation of LLM-based Chatbot for Providing Legal Support to Palestinian Cooperatives
With the ever-increasing utilization of natural language processing (NLP), we started to witness over the past few years a significant transformation in our interaction with legal texts. This technology has advanced the analysis and enhanced the understanding of complex legal terminology and contexts. The development of recent large language models (LLMs), particularly ChatGPT, has also introduced a revolutionary contribution to the way that legal texts can be processed and comprehended. In this paper, we present our work on a cooperative-legal question-answering LLM-based chatbot, where we developed a set of legal questions about Palestinian cooperatives, associated with their regulations, and compared the auto-generated answers by the chatbot to their correspondences that were designed by a legal expert. To evaluate the proposed chatbot, we used 50 queries generated by the legal expert and compared the answers produced by the chatbot to their relevance judgments. Findings demonstrated that an overall accuracy rate of 82% was achieved when answering the queries, while exhibiting an F1 score equivalent to 79%.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
372,352
2405.18921
GLANCE: Global Actions in a Nutshell for Counterfactual Explainability
The widespread deployment of machine learning systems in critical real-world decision-making applications has highlighted the urgent need for counterfactual explainability methods that operate effectively. Global counterfactual explanations, expressed as actions to offer recourse, aim to provide succinct explanations and insights applicable to large population subgroups. Effectiveness is measured by the fraction of the population that is provided recourse, ensuring that the actions benefit as many individuals as possible. Keeping the cost of actions low ensures the proposed recourse actions remain practical and actionable. Limiting the number of actions that provide global counterfactuals is essential to maximize interpretability. The primary challenge, therefore, is balancing these trade-offs, i.e., maximizing effectiveness, minimizing cost, while maintaining a small number of actions. We introduce GLANCE, a versatile and adaptive framework, comprising two algorithms, that allows the careful balancing of the trade-offs among the three key objectives, with the size objective functioning as a tunable parameter to keep the actions few and easy to interpret. C-GLANCE employs a clustering approach that considers both the feature space and the space of counterfactual actions, thereby accounting for the distribution of points in a way that aligns with the structure of the model. T-GLANCE provides additional features to enhance flexibility. It employs a tree-based approach, that allows users to specify split features, to build a decision tree with a single counterfactual action at each node that can be used as a subgroup policy. Our extensive experimental evaluation demonstrates that our method consistently shows greater robustness and performance compared to existing methods across various datasets and models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
458,668
1604.05132
Using Self-Contradiction to Learn Confidence Measures in Stereo Vision
Learned confidence measures gain increasing importance for outlier removal and quality improvement in stereo vision. However, acquiring the necessary training data is typically a tedious and time consuming task that involves manual interaction, active sensing devices and/or synthetic scenes. To overcome this problem, we propose a new, flexible, and scalable way for generating training data that only requires a set of stereo images as input. The key idea of our approach is to use different view points for reasoning about contradictions and consistencies between multiple depth maps generated with the same stereo algorithm. This enables us to generate a huge amount of training data in a fully automated manner. Among other experiments, we demonstrate the potential of our approach by boosting the performance of three learned confidence measures on the KITTI2012 dataset by simply training them on a vast amount of automatically generated training data rather than a limited amount of laser ground truth data.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
54,769
2404.14934
G3R: Generating Rich and Fine-grained mmWave Radar Data from 2D Videos for Generalized Gesture Recognition
Millimeter wave radar is gaining traction recently as a promising modality for enabling pervasive and privacy-preserving gesture recognition. However, the lack of rich and fine-grained radar datasets hinders progress in developing generalized deep learning models for gesture recognition across various user postures (e.g., standing, sitting), positions, and scenes. To remedy this, we resort to designing a software pipeline that exploits wealthy 2D videos to generate realistic radar data, but it needs to address the challenge of simulating diversified and fine-grained reflection properties of user gestures. To this end, we design G3R with three key components: (i) a gesture reflection point generator expands the arm's skeleton points to form human reflection points; (ii) a signal simulation model simulates the multipath reflection and attenuation of radar signals to output the human intensity map; (iii) an encoder-decoder model combines a sampling module and a fitting module to address the differences in number and distribution of points between generated and real-world radar data for generating realistic radar data. We implement and evaluate G3R using 2D videos from public data sources and self-collected real-world radar data, demonstrating its superiority over other state-of-the-art approaches for gesture recognition.
true
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
448,872
2303.03608
Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation
Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics. In this work, we develop strong-performing automatic metrics for reference-based summarization evaluation, based on a two-stage evaluation pipeline that first extracts basic information units from one text sequence and then checks the extracted units in another sequence. The metrics we developed include two-stage metrics that can provide high interpretability at both the fine-grained unit level and summary level, and one-stage metrics that achieve a balance between efficiency and interpretability. We make the developed tools publicly available at https://github.com/Yale-LILY/AutoACU.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
349,786
2212.07155
Autonomous Vehicle Navigation with LIDAR using Path Planning
In this paper, a complete framework for Autonomous Self Driving is implemented. LIDAR, Camera and IMU sensors are used together. The entire data communication is managed using Robot Operating System, which provides a robust platform for implementation of Robotics Projects. Jetson Nano is used to provide powerful on-board processing capabilities. Sensor fusion is performed on the data received from the different sensors to improve the accuracy of the decision making and inferences that we derive from the data. This data is then used to create a localized map of the environment. In this step, the position of the vehicle is obtained with respect to the Mapping done using the sensor data. The different SLAM techniques used for this purpose are Hector Mapping and GMapping, which are widely used mapping techniques in ROS. Apart from SLAM that primarily uses LIDAR data, Visual Odometry is implemented using a Monocular Camera. The sensor fused data is then used by Adaptive Monte Carlo Localization for car localization. Using the localized map developed, Path Planning techniques like "TEB planner" and "Dynamic Window Approach" are implemented for autonomous navigation of the vehicle. The last step in the Project is the implementation of Control, which is the final decision making block in the pipeline that gives speed and steering data for the navigation that is compatible with Ackermann Kinematics. The implementation of such a control block under a ROS framework using the three sensors, viz., LIDAR, Camera and IMU, is a novel approach that is undertaken in this project.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
336,320
2011.12854
Right for the Right Concept: Revising Neuro-Symbolic Concepts by Interacting with their Explanations
Most explanation methods in deep learning map importance estimates for a model's prediction back to the original input space. These "visual" explanations are often insufficient, as the model's actual concept remains elusive. Moreover, without insights into the model's semantic concept, it is difficult -- if not impossible -- to intervene on the model's behavior via its explanations, called Explanatory Interactive Learning. Consequently, we propose to intervene on a Neuro-Symbolic scene representation, which allows one to revise the model on the semantic level, e.g. "never focus on the color to make your decision". We compiled a novel confounded visual scene data set, the CLEVR-Hans data set, capturing complex compositions of different objects. The results of our experiments on CLEVR-Hans demonstrate that our semantic explanations, i.e. compositional explanations at a per-object level, can identify confounders that are not identifiable using "visual" explanations only. More importantly, feedback on this semantic level makes it possible to revise the model from focusing on these factors.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
208,295
1403.0481
Support Vector Machine Model for Currency Crisis Discrimination
Support Vector Machine (SVM) is a powerful classification technique based on the idea of structural risk minimization. The use of a kernel function enables the curse of dimensionality to be addressed. However, the proper kernel function for a certain problem is dependent on the specific dataset and, as such, there is no good method for the choice of kernel function. In this paper, SVM is used to build empirical models of currency crisis in Argentina. An estimation technique is developed by training the model on a real-life data set, which provides reasonably accurate model outputs and helps policy makers to identify situations in which a currency crisis may happen. The third- and fourth-order polynomial kernels are generally the best choice to achieve high generalization of classifier performance. SVM has a high level of maturity, with algorithms that are simple, easy to implement, tolerate the curse of dimensionality, and show good empirical performance. The satisfactory results show that the currency crisis situation is properly emulated using only a small fraction of the database and could be used as an evaluation tool as well as an early warning system. To the best of our knowledge, this is the first work on an SVM approach for currency crisis evaluation of Argentina.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
31,297
1910.02258
Object Segmentation Tracking from Generic Video Cues
We propose a light-weight variational framework for online tracking of object segmentations in videos based on optical flow and image boundaries. While high-end computer vision methods on this task rely on sequence specific training of dedicated CNN architectures, we show the potential of a variational model, based on generic video information from motion and color. Such cues are usually required for tasks such as robot navigation or grasp estimation. We leverage them directly for video object segmentation and thus provide accurate segmentations at potentially very low extra cost. Our simple method can provide competitive results compared to the costly CNN-based methods with parameter tuning. Furthermore, we show that our approach can be combined with state-of-the-art CNN-based segmentations in order to improve over their respective results. We evaluate our method on the datasets DAVIS 16,17 and SegTrack v2.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
148,190
2110.05678
Development of A Load Control Algorithm to Enhance Energy Sustainability for the International Space Station
This paper presents a load control algorithm for control of energy sources and loads to enhance energy sustainability and reliability of the International Space Station (ISS), which is a large spacecraft in orbit around Earth. In this paper, the ISS electric power system was simulated in MATLAB/Simulink to be able to evaluate the performance of the developed algorithm in a simulated environment. This study also aims to emphasize the importance of load control algorithms on energy sustainability for critical systems, like ISS, having limited energy sources.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
260,352
2302.03164
Adaptive Coverage Path Planning for Efficient Exploration of Unknown Environments
We present a method for solving the coverage problem with the objective of autonomously exploring an unknown environment under mission time constraints. Here, the robot is tasked with planning a path over a horizon such that the accumulated area swept out by its sensor footprint is maximized. Because this problem exhibits a diminishing returns property known as submodularity, we choose to formulate it as a tree-based sequential decision making process. This formulation allows us to evaluate the effects of the robot's actions on future world coverage states, while simultaneously accounting for traversability risk and the dynamic constraints of the robot. To quickly find near-optimal solutions, we propose an effective approximation to the coverage sensor model which adapts to the local environment. Our method was extensively tested across various complex environments and served as the local exploration algorithm for a competing entry in the DARPA Subterranean Challenge.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
344,241
2412.07956
Reciprocal Learning of Intent Inferral with Augmented Visual Feedback for Stroke
Intent inferral, the process by which a robotic device predicts a user's intent from biosignals, offers an effective and intuitive way to control wearable robots. Classical intent inferral methods treat biosignal inputs as unidirectional ground truths for training machine learning models, where the internal state of the model is not directly observable by the user. In this work, we propose reciprocal learning, a bidirectional paradigm that facilitates human adaptation to an intent inferral classifier. Our paradigm consists of iterative, interwoven stages that alternate between updating machine learning models and guiding human adaptation with the use of augmented visual feedback. We demonstrate this paradigm in the context of controlling a robotic hand orthosis for stroke, where the device predicts open, close, and relax intents from electromyographic (EMG) signals and provides appropriate assistance. We use LED progress-bar displays to communicate to the user the predicted probabilities for open and close intents by the classifier. Our experiments with stroke subjects show reciprocal learning improving performance in a subset of subjects (two out of five) without negatively impacting performance on the others. We hypothesize that, during reciprocal learning, subjects can learn to reproduce more distinguishable muscle activation patterns and generate more separable biosignals.
true
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
515,871
1911.06777
TinyCNN: A Tiny Modular CNN Accelerator for Embedded FPGA
In recent years, Convolutional Neural Network (CNN) based methods have achieved great success in a large number of applications and have been among the most powerful and widely used techniques in computer vision. However, CNN-based methods are computation-intensive and resource-consuming, and thus are hard to be integrated into embedded systems such as smart phones, smart glasses, and robots. FPGA is one of the most promising platforms for accelerating CNN, but the limited on-chip memory size limits the performance of FPGA accelerators for CNN. In this paper, we propose a framework for designing CNN accelerators on embedded FPGA for image classification. The proposed framework provides a tool for FPGA resource-aware design space exploration of CNNs and automatically generates the hardware description of the CNN to be programmed on a target FPGA. The framework consists of three main backends: software, hardware generation, and simulation/precision adjustment. The software backend serves as an API to the designer to design the CNN and train it according to the hardware resources that are available. Using the CNN model, the hardware backend generates the necessary hardware components and integrates them to generate the hardware description of the CNN. Finally, the simulation/precision adjustment backend adjusts the inter-layer precision units to minimize the classification error. We used 16-bit fixed-point data in a CNN accelerator (FPGA) and compared it to the exactly similar software version running on an ARM processor (32-bit floating point data). We encounter about 3% accuracy loss in classification of the accelerated (FPGA) version. In return, we got up to 15.75x speedup by classifying with the accelerated version on the FPGA.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
153,620
2210.11704
Accessible Survey of Evolutionary Robotics and Potential Future Research Directions
This paper reviews various Evolutionary Approaches applied to the domain of Evolutionary Robotics with the intention of resolving difficult problems in the areas of robotic design and control. Evolutionary Robotics is a fast-growing field that has attracted substantial research attention in recent years. The paper thus collates recent findings along with some anticipated applications. The reviewed literature is organized systematically to give a categorical overview of recent developments and is presented in tabulated form for quick reference. We discuss the outstanding potentialities and challenges that exist in robotics from an ER perspective, with the belief that these can be addressed in the near future via the application of evolutionary approaches. The primary objective of this study is to explore the applicability of Evolutionary Approaches in robotic application development. We believe that this study will enable researchers to utilize Evolutionary Approaches to solve complex outstanding problems in robotics.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
325,417
2104.13901
Symbolic Abstractions From Data: A PAC Learning Approach
Symbolic control techniques aim to satisfy complex logic specifications. A critical step in these techniques is the construction of a symbolic (discrete) abstraction, a finite-state system whose behaviour mimics that of a given continuous-state system. The methods used to compute symbolic abstractions, however, require knowledge of an accurate closed-form model. To generalize them to systems with unknown dynamics, we present a new data-driven approach that does not require closed-form dynamics, instead relying only on the ability to evaluate successors of each state under given inputs. To provide guarantees for the learned abstraction, we use the Probably Approximately Correct (PAC) statistical framework. We first introduce a PAC-style behavioural relationship and an appropriate refinement procedure. We then show how the symbolic abstraction can be constructed to satisfy this new behavioural relationship. Moreover, we provide PAC bounds that dictate the number of data required to guarantee a prescribed level of accuracy and confidence. Finally, we present an illustrative example.
false
false
false
false
true
false
true
false
false
false
true
false
false
false
false
false
false
false
232,643
2312.00209
On the Interplay Between Stepsize Tuning and Progressive Sharpening
Recent empirical work has revealed an intriguing property of deep learning models by which the sharpness (largest eigenvalue of the Hessian) increases throughout optimization until it stabilizes around a critical value at which the optimizer operates at the edge of stability, given a fixed stepsize (Cohen et al., 2022). We investigate empirically how the sharpness evolves when using stepsize tuners, the Armijo linesearch and Polyak stepsizes, that adapt the stepsize along the iterations to local quantities such as, implicitly, the sharpness itself. We find that the surprisingly poor performance of a classical Armijo linesearch in the deterministic setting may be well explained by its tendency to ever-increase the sharpness of the objective. On the other hand, we observe that Polyak stepsizes operate generally at the edge of stability or even slightly beyond, outperforming their Armijo and constant-stepsize counterparts in the deterministic setting. We conclude with an analysis that suggests unlocking stepsize tuners requires an understanding of the joint dynamics of the stepsize and the sharpness.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
411,958
1412.0156
Constant Step Size Least-Mean-Square: Bias-Variance Trade-offs and Optimal Sampling Distributions
We consider the least-squares regression problem and provide a detailed asymptotic analysis of the performance of averaged constant-step-size stochastic gradient descent (a.k.a. least-mean-squares). In the strongly-convex case, we provide an asymptotic expansion up to explicit exponentially decaying terms. Our analysis leads to new insights into stochastic approximation algorithms: (a) it gives a tighter bound on the allowed step-size; (b) the generalization error may be divided into a variance term which is decaying as O(1/n), independently of the step-size $\gamma$, and a bias term that decays as O(1/$\gamma$ 2 n 2); (c) when allowing non-uniform sampling, the choice of a good sampling density depends on whether the variance or bias terms dominate. In particular, when the variance term dominates, optimal sampling densities do not lead to much gain, while when the bias term dominates, we can choose larger step-sizes that leads to significant improvements.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
38,001
2303.09660
Explainable GeoAI: Can saliency maps help interpret artificial intelligence's learning process? An empirical study on natural feature detection
Improving the interpretability of geospatial artificial intelligence (GeoAI) models has become critically important to open the "black box" of complex AI models, such as deep learning. This paper compares popular saliency map generation techniques and their strengths and weaknesses in interpreting GeoAI and deep learning models' reasoning behaviors, particularly when applied to geospatial analysis and image processing tasks. We surveyed two broad classes of model explanation methods: perturbation-based and gradient-based methods. The former identifies important image areas, which help machines make predictions by modifying a localized area of the input image. The latter evaluates the contribution of every single pixel of the input image to the model's prediction results through gradient backpropagation. In this study, three algorithms-the occlusion method, the integrated gradients method, and the class activation map method-are examined for a natural feature detection task using deep learning. The algorithms' strengths and weaknesses are discussed, and the consistency between model-learned and human-understandable concepts for object recognition is also compared. The experiments used two GeoAI-ready datasets to demonstrate the generalizability of the research findings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
352,132
2501.06271
Large Language Models for Bioinformatics
With the rapid advancements in large language model (LLM) technology and the emergence of bioinformatics-specific language models (BioLMs), there is a growing need for a comprehensive analysis of the current landscape, computational characteristics, and diverse applications. This survey aims to address this need by providing a thorough review of BioLMs, focusing on their evolution, classification, and distinguishing features, alongside a detailed examination of training methodologies, datasets, and evaluation frameworks. We explore the wide-ranging applications of BioLMs in critical areas such as disease diagnosis, drug discovery, and vaccine development, highlighting their impact and transformative potential in bioinformatics. We identify key challenges and limitations inherent in BioLMs, including data privacy and security concerns, interpretability issues, biases in training data and model outputs, and domain adaptation complexities. Finally, we highlight emerging trends and future directions, offering valuable insights to guide researchers and clinicians toward advancing BioLMs for increasingly sophisticated biological and clinical applications.
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
523,923
2210.08412
Towards an Interpretable Hierarchical Agent Framework using Semantic Goals
Learning to solve long horizon temporally extended tasks with reinforcement learning has been a challenge for several years now. We believe that it is important to leverage both the hierarchical structure of complex tasks and to use expert supervision whenever possible to solve such tasks. This work introduces an interpretable hierarchical agent framework by combining planning and semantic goal directed reinforcement learning. We assume access to certain spatial and haptic predicates and construct a simple and powerful semantic goal space. These semantic goal representations are more interpretable, making expert supervision and intervention easier. They also eliminate the need to write complex, dense reward functions thereby reducing human engineering effort. We evaluate our framework on a robotic block manipulation task and show that it performs better than other methods, including both sparse and dense reward functions. We also suggest some next steps and discuss how this framework makes interaction and collaboration with humans easier.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
324,136
0809.5204
A Distributed MAC Protocol for Cooperation in Random Access Networks
WLAN is one of the most successful applications of wireless communications in daily life because of low cost and ease of deployment. The enabling technique for this success is the use of random access schemes for the wireless channel. Random access requires minimal coordination between the nodes, which considerably reduces the cost of the infrastructure. Recently, cooperative communication in wireless networks has been of increasing interest because it promises higher rates and reliability. An additional MAC overhead is necessary to coordinate the nodes to allow cooperation and this overhead can possibly cancel out the cooperative benefits. In this work, a completely distributed protocol is proposed that allows nodes in the network to cooperate via Two-Hop and Decode-and-Forward for transmitting their data to a common gateway node. It is shown that high throughput gains are obtained in terms of the individual throughput that can be guaranteed to any node in the network. These results are validated by Monte Carlo simulations.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,432
2307.06283
Tackling Computational Heterogeneity in FL: A Few Theoretical Insights
The future of machine learning lies in moving data collection along with training to the edge. Federated Learning, FL for short, has been recently proposed to achieve this goal. The principle of this approach is to aggregate models learned over a large number of distributed clients, i.e., resource-constrained mobile devices that collect data from their environment, to obtain a new, more general model. The latter is subsequently redistributed to clients for further training. A key feature that distinguishes federated learning from data-center-based distributed training is the inherent heterogeneity. In this work, we introduce and analyse a novel aggregation framework that allows for formalizing and tackling computational heterogeneity in federated optimization, in terms of both heterogeneous data and local updates. The proposed aggregation algorithms are extensively analyzed from both a theoretical and an experimental perspective.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
379,021
2004.01407
FeederGAN: Synthetic Feeder Generation via Deep Graph Adversarial Nets
This paper presents a novel, automated, generative adversarial networks (GAN) based synthetic feeder generation mechanism, abbreviated as FeederGAN. FeederGAN digests real feeder models represented by directed graphs via a deep learning framework powered by GAN and graph convolutional networks (GCN). Information of a distribution feeder circuit is extracted from its model input files so that the device connectivity is mapped onto the adjacency matrix and the device characteristics, such as circuit types (i.e., 3-phase, 2-phase, and 1-phase) and component attributes (e.g., length and current ratings), are mapped onto the attribute matrix. Then, Wasserstein distance is used to optimize the GAN and GCN is used to discriminate the generated graphs from the actual ones. A greedy method based on graph theory is developed to reconstruct the feeder using the generated adjacency and attribute matrices. Our results show that the GAN generated feeders resemble the actual feeder in both topology and attributes verified by visual inspection and by empirical statistics obtained from actual distribution feeders.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
170,903
2408.13651
Narratives at Conflict: Computational Analysis of News Framing in Multilingual Disinformation Campaigns
Any report frames issues to favor a particular interpretation by highlighting or excluding certain aspects of a story. Despite the widespread use of framing in disinformation, framing properties and detection methods remain underexplored outside the English-speaking world. We explore how multilingual framing of the same issue differs systematically. We use eight years of Russia-backed disinformation campaigns, spanning 8k news articles in 4 languages targeting 15 countries. We find that disinformation campaigns consistently and intentionally favor specific framing, depending on the target language of the audience. We further discover how Russian-language articles consistently highlight selected frames depending on the region of the media coverage. We find that the two most prominent models for automatic frame analysis underperform and show high disagreement, highlighting the need for further research.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
483,230
2010.05842
Remote Electrical Tilt Optimization via Safe Reinforcement Learning
Remote Electrical Tilt (RET) optimization is an efficient method for adjusting the vertical tilt angle of Base Stations (BSs) antennas in order to optimize Key Performance Indicators (KPIs) of the network. Reinforcement Learning (RL) provides a powerful framework for RET optimization because of its self-learning capabilities and adaptivity to environmental changes. However, an RL agent may execute unsafe actions during the course of its interaction, i.e., actions resulting in undesired network performance degradation. Since the reliability of services is critical for Mobile Network Operators (MNOs), the prospect of performance degradation has prohibited the real-world deployment of RL methods for RET optimization. In this work, we model the RET optimization problem in the Safe Reinforcement Learning (SRL) framework with the goal of learning a tilt control strategy providing performance improvement guarantees with respect to a safe baseline. We leverage a recent SRL method, namely Safe Policy Improvement through Baseline Bootstrapping (SPIBB), to learn an improved policy from an offline dataset of interactions collected by the safe baseline. Our experiments show that the proposed approach is able to learn a safe and improved tilt update policy, providing a higher degree of reliability and potential for real-world network deployment.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
200,271
1410.7223
The probabilistic Quantifier Fuzzification Mechanism FA: A theoretical analysis
The main goal of this work is to analyze the behaviour of the FA quantifier fuzzification mechanism. As we prove in the paper, this model has a very solid theoretical behaviour, superior to most of the models defined in the literature. Moreover, we show that the underlying probabilistic interpretation has very interesting consequences.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
37,053
1709.04029
Probability Reversal and the Disjunction Effect in Reasoning Systems
Data based judgments go into artificial intelligence applications but they undergo paradoxical reversal when seemingly unnecessary additional data is provided. Examples of this are Simpson's reversal and the disjunction effect where the beliefs about the data change once it is presented or aggregated differently. Sometimes the significance of the difference can be evaluated using statistical tests such as Pearson's chi-squared or Fisher's exact test, but this may not be helpful in threshold-based decision systems that operate with incomplete information. To mitigate risks in the use of algorithms in decision-making, we consider the question of modeling of beliefs. We argue that evidence supports that beliefs are not classical statistical variables and they should, in the general case, be considered as superposition states of disjoint or polar outcomes. We analyze the disjunction effect from the perspective of the belief as a quantum vector.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
80,583
2305.07367
S-REINFORCE: A Neuro-Symbolic Policy Gradient Approach for Interpretable Reinforcement Learning
This paper presents a novel RL algorithm, S-REINFORCE, which is designed to generate interpretable policies for dynamic decision-making tasks. The proposed algorithm leverages two types of function approximators, namely Neural Network (NN) and Symbolic Regressor (SR), to produce numerical and symbolic policies, respectively. The NN component learns to generate a numerical probability distribution over the possible actions using a policy gradient, while the SR component captures the functional form that relates the associated states with the action probabilities. The SR-generated policy expressions are then utilized through importance sampling to improve the rewards received during the learning process. We have tested the proposed S-REINFORCE algorithm on various dynamic decision-making problems with low and high dimensional action spaces, and the results demonstrate its effectiveness and impact in achieving interpretable solutions. By leveraging the strengths of both NN and SR, S-REINFORCE produces policies that are not only well-performing but also easy to interpret, making it an ideal choice for real-world applications where transparency and causality are crucial.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
363,871
2405.10984
Predictive Energy Management for Battery Electric Vehicles with Hybrid Models
This paper addresses the problem of predicting the energy consumption for the drivers of Battery Electric Vehicles (BEVs). Several external factors (e.g., weather) are shown to have huge impacts on the energy consumption of a vehicle besides the vehicle or powertrain dynamics. Thus, it is challenging to take all of those influencing variables into consideration. The proposed approach is based on a hybrid model which improves the prediction accuracy of energy consumption of BEVs. The novelty of this approach is to combine a physics-based simulation model, which captures the basic vehicle and powertrain dynamics, with a data-driven model. The latter accounts for other external influencing factors neglected by the physical simulation model, using machine learning techniques, such as generalized additive mixed models, random forests and boosting. The hybrid modeling method is evaluated with a real data set from TUM, and the hybrid models were shown to decrease the average prediction error from 40% for the pure physics model to 10%.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
454,956
2401.06970
TemporalAugmenter: An Ensemble Recurrent Based Deep Learning Approach for Signal Classification
Ensemble modeling has been widely used to solve complex problems as it helps to improve overall performance and generalization. In this paper, we propose a novel TemporalAugmenter approach based on ensemble modeling for augmenting the capture of temporal information for long-term and short-term dependencies, integrating two variations of recurrent neural networks in two learning streams to obtain the maximum possible temporal extraction. Thus, the proposed model augments the extraction of temporal dependencies. In addition, the proposed approach reduces the preprocessing and prior feature extraction stages, which reduces the energy required to process models built upon the proposed TemporalAugmenter approach, contributing towards green AI. Moreover, the proposed model can be simply integrated into various domains including industrial, medical, and human-computer interaction applications. We empirically evaluate the proposed approach on speech emotion recognition, electrocardiogram signal classification, and signal quality examination as three different signal tasks with varying complexity and different temporal dependency features.
true
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
421,365
2110.15596
Training Integrable Parameterizations of Deep Neural Networks in the Infinite-Width Limit
To theoretically understand the behavior of trained deep neural networks, it is necessary to study the dynamics induced by gradient methods from a random initialization. However, the nonlinear and compositional structure of these models makes these dynamics difficult to analyze. To overcome these challenges, large-width asymptotics have recently emerged as a fruitful viewpoint and led to practical insights on real-world deep networks. For two-layer neural networks, it has been understood via these asymptotics that the nature of the trained model radically changes depending on the scale of the initial random weights, ranging from a kernel regime (for large initial variance) to a feature learning regime (for small initial variance). For deeper networks more regimes are possible, and in this paper we study in detail a specific choice of ''small'' initialization corresponding to "mean-field" limits of neural networks, which we call integrable parameterizations (IPs). First, we show that under standard i.i.d. zero-mean initialization, integrable parameterizations of neural networks with more than four layers start at a stationary point in the infinite-width limit and no learning occurs. We then propose various methods to avoid this trivial behavior and analyze in detail the resulting dynamics. In particular, one of these methods consists in using large initial learning rates, and we show that it is equivalent to a modification of the recently proposed maximal update parameterization $\mu$P. We confirm our results with numerical experiments on image classification tasks, which additionally show a strong difference in behavior between various choices of activation functions that is not yet captured by theory.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
263,930
0803.3880
Asymptotically Optimum Universal One-Bit Watermarking for Gaussian Covertexts and Gaussian Attacks
The problem of optimum watermark embedding and detection was addressed in a recent paper by Merhav and Sabbag, where the optimality criterion was the maximum false-negative error exponent subject to a guaranteed false-positive error exponent. In particular, Merhav and Sabbag derived universal asymptotically optimum embedding and detection rules under the assumption that the detector relies solely on second order joint empirical statistics of the received signal and the watermark. In the case of a Gaussian host signal and a Gaussian attack, however, closed-form expressions for the optimum embedding strategy and the false-negative error exponent were not obtained in that work. In this paper, we derive such expressions, again, under the universality assumption that neither the host variance nor the attack power are known to either the embedder or the detector. The optimum embedding rule turns out to be very simple and with an intuitively-appealing geometrical interpretation. The improvement with respect to existing sub-optimum schemes is demonstrated by displaying the optimum false-negative error exponent as a function of the guaranteed false-positive error exponent.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,496
2406.03359
SuperFormer: Volumetric Transformer Architectures for MRI Super-Resolution
This paper presents a novel framework for processing volumetric medical information using Visual Transformers (ViTs). First, we extend the state-of-the-art Swin Transformer model to the 3D medical domain. Second, we propose a new approach for processing volumetric information and encoding position in ViTs for 3D applications. We instantiate the proposed framework and present SuperFormer, a volumetric transformer-based approach for Magnetic Resonance Imaging (MRI) Super-Resolution. Our method leverages the 3D information of the MRI domain and uses a local self-attention mechanism with a 3D relative positional encoding to recover anatomical details. In addition, our approach takes advantage of multi-domain information from volume and feature domains and fuses them to reconstruct the High-Resolution MRI. We perform an extensive validation on the Human Connectome Project dataset and demonstrate the superiority of volumetric transformers over 3D CNN-based methods. Our code and pretrained models are available at https://github.com/BCV-Uniandes/SuperFormer.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
461,206
0810.5148
Scheduling Kalman Filters in Continuous Time
A set of N independent Gaussian linear time invariant systems is observed by M sensors whose task is to provide the best possible steady-state causal minimum mean square estimate of the state of the systems, in addition to minimizing a steady-state measurement cost. The sensors can switch between systems instantaneously, and there are additional resource constraints, for example on the number of sensors which can observe a given system simultaneously. We first derive a tractable relaxation of the problem, which provides a bound on the achievable performance. This bound can be computed by solving a convex program involving linear matrix inequalities. Exploiting the additional structure of the sites evolving independently, we can decompose this program into coupled smaller dimensional problems. In the scalar case with identical sensors, we give an analytical expression of an index policy proposed in a more general context by Whittle. In the general case, we develop open-loop periodic switching policies whose performance matches the bound arbitrarily closely.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,574
2203.14499
NOC-REK: Novel Object Captioning with Retrieved Vocabulary from External Knowledge
Novel object captioning aims at describing objects absent from training data, with the key ingredient being the provision of object vocabulary to the model. Although existing methods heavily rely on an object detection model, we view the detection step as vocabulary retrieval from external knowledge in the form of embeddings of object definitions from Wiktionary, using image region features learned from a transformer model for the retrieval. We propose an end-to-end Novel Object Captioning with Retrieved vocabulary from External Knowledge method (NOC-REK), which simultaneously learns vocabulary retrieval and caption generation, successfully describing novel objects outside of the training dataset. Furthermore, our model eliminates the requirement for model retraining by simply updating the external knowledge whenever a novel object appears. Our comprehensive experiments on held-out COCO and Nocaps datasets show that our NOC-REK is considerably effective against SOTAs.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
288,033
2005.13596
Breiman's "Two Cultures" Revisited and Reconciled
In a landmark paper published in 2001, Leo Breiman described the tense standoff between two cultures of data modeling: parametric statistical and algorithmic machine learning. The cultural division between these two statistical learning frameworks has been growing at a steady pace in recent years. What is the way forward? It has become blatantly obvious that this widening gap between "the two cultures" cannot be averted unless we find a way to blend them into a coherent whole. This article presents a solution by establishing a link between the two cultures. Through examples, we describe the challenges and potential gains of this new integrated statistical thinking.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
179,048
1705.10851
Estimating Human Intent for Physical Human-Robot Co-Manipulation
Human teams can be exceptionally efficient at adapting and collaborating during manipulation tasks using shared mental models. However, the same shared mental models that can be used by humans to perform robust low-level force and motion control during collaborative manipulation tasks are non-existent for robots. For robots to perform collaborative tasks with people naturally and efficiently, understanding and predicting human intent is necessary. However, humans are difficult to predict and model. We have completed an exploratory study recording motion and force for 20 human dyads moving an object in tandem in order to better understand how they move and how their movement can be predicted. In this paper, we show how past motion data can be used to predict human intent. In order to predict human intent, which we equate with the human team's velocity for a short time horizon, we used a neural network. Using the previous 150 time steps at a rate of 200 Hz, human intent can be predicted for the next 50 time steps with a mean squared error of 0.02 (m/s)^2. We also show that human intent can be estimated in a human-robot dyad. This work is an important first step in enabling future work of integrating human intent estimation on a robot controller to execute a short-term collaborative trajectory.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
74,479
1607.00656
A Hybrid POMDP-BDI Agent Architecture with Online Stochastic Planning and Plan Caching
This article presents an agent architecture for controlling an autonomous agent in stochastic environments. The architecture combines the partially observable Markov decision process (POMDP) model with the belief-desire-intention (BDI) framework. The Hybrid POMDP-BDI agent architecture takes the best features from the two approaches, that is, the online generation of reward-maximizing courses of action from POMDP theory, and sophisticated multiple goal management from BDI theory. We introduce the advances made since the introduction of the basic architecture, including (i) the ability to pursue multiple goals simultaneously and (ii) a plan library for storing pre-written plans and for storing recently generated plans for future reuse. A version of the architecture without the plan library is implemented and is evaluated using simulations. The results of the simulation experiments indicate that the approach is feasible.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
58,121
1503.01425
Combinatorial Auction-Based Pricing for Multi-tenant Autonomous Vehicle Public Transportation System
A smart city provides its people with high standard of living through advanced technologies and transport is one of the major foci. With the advent of autonomous vehicles (AVs), an AV-based public transportation system has been proposed recently, which is capable of providing new forms of transportation services with high efficiency, high flexibility, and low cost. For the benefit of passengers, multitenancy can increase market competition leading to lower service charge and higher quality of service. In this paper, we study the pricing issue of the multi-tenant AV public transportation system and three types of services are defined. The pricing process for each service type is modeled as a combinatorial auction, in which the service providers, as bidders, compete for offering transportation services. The winners of the auction are determined through an integer linear program. To prevent the bidders from raising their bids for higher returns, we propose a strategy-proof Vickrey-Clarke-Groves-based charging mechanism, which can maximize the social welfare, to settle the final charges for the customers. We perform extensive simulations to verify the analytical results and evaluate the performance of the charging mechanism.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
40,826
2112.01229
An AI-based Solution for Enhancing Delivery of Digital Learning for Future Teachers
There has been a recent and rapid shift to digital learning, hastened by the pandemic but also influenced by the now-ubiquitous availability of digital tools and platforms, making digital learning ever more accessible. An integral and one of the most difficult parts of scaling digital learning and teaching is assessing learners' knowledge and competency. An educator can record a lecture or create digital content that can be delivered to thousands of learners, but assessing learners is extremely time-consuming. In this paper, we propose an Artificial Intelligence (AI)-based solution, namely VidVersityQG, for generating questions automatically from pre-recorded video lectures. The solution can automatically generate different types of assessment questions (including short answer, multiple choice, true/false and fill-in-the-blank questions) based on contextual and semantic information inferred from the videos. The proposed solution takes a human-centred approach, wherein teachers are provided the ability to modify/edit any AI-generated questions. This approach encourages trust and engagement of teachers in the use and implementation of AI in education. The AI-based solution was evaluated for its accuracy in generating questions by 7 experienced teaching professionals on 117 education videos from multiple domains provided to us by our industry partner VidVersity. VidVersityQG showed promising results in generating high-quality questions automatically from video, thereby significantly reducing the time and effort for educators in manual question generation.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
269,414
2208.13358
Billion-user Customer Lifetime Value Prediction: An Industrial-scale Solution from Kuaishou
Customer Life Time Value (LTV) is the expected total revenue that a single user can bring to a business. It is widely used in a variety of business scenarios to make operational decisions when acquiring new customers. Modeling LTV is a challenging problem, due to its complex and mutable data distribution. Existing approaches either directly learn from posterior feature distributions or leverage statistical models that make strong assumption on prior distributions, both of which fail to capture those mutable distributions. In this paper, we propose a complete set of industrial-level LTV modeling solutions. Specifically, we introduce an Order Dependency Monotonic Network (ODMN) that models the ordered dependencies between LTVs of different time spans, which greatly improves model performance. We further introduce a Multi Distribution Multi Experts (MDME) module based on the Divide-and-Conquer idea, which transforms the severely imbalanced distribution modeling problem into a series of relatively balanced sub-distribution modeling problems hence greatly reduces the modeling complexity. In addition, a novel evaluation metric Mutual Gini is introduced to better measure the distribution difference between the estimated value and the ground-truth label based on the Lorenz Curve. The ODMN framework has been successfully deployed in many business scenarios of Kuaishou, and achieved great performance. Extensive experiments on real-world industrial data demonstrate the superiority of the proposed methods compared to state-of-the-art baselines including ZILN and Two-Stage XGBoost models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
315,038
2410.04037
Is Score Matching Suitable for Estimating Point Processes?
Score matching estimators have gained widespread attention in recent years partly because they are free from calculating the integral of normalizing constant, thereby addressing the computational challenges in maximum likelihood estimation (MLE). Some existing works have proposed score matching estimators for point processes. However, this work demonstrates that the incompleteness of the estimators proposed in those works renders them applicable only to specific problems, and they fail for more general point processes. To address this issue, this work introduces the weighted score matching estimator to point processes. Theoretically, we prove the consistency of our estimator and establish its rate of convergence. Experimental results indicate that our estimator accurately estimates model parameters on synthetic data and yields results consistent with MLE on real data. In contrast, existing score matching estimators fail to perform effectively. Codes are publicly available at \url{https://github.com/KenCao2007/WSM_TPP}.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
495,098
1611.09482
Fast Wavenet Generation Algorithm
This paper presents an efficient implementation of the Wavenet generation process called Fast Wavenet. Compared to a naive implementation that has complexity O(2^L) (L denotes the number of layers in the network), our proposed approach removes redundant convolution operations by caching previous calculations, thereby reducing the complexity to O(L) time. Timing experiments show significant advantages of our fast implementation over a naive one. While this method is presented for Wavenet, the same scheme can be applied anytime one wants to perform autoregressive generation or online prediction using a model with dilated convolution layers. The code for our method is publicly available.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
64,681
2003.12684
Coordinate-free Isoline Tracking in Unknown 2-D Scalar Fields
This work on isoline tracking is concerned with the control design for a sensing robot to track a given isoline of an unknown 2-D scalar field. To this end, we propose a coordinate-free controller with a simple PI-like form using only the concentration feedback for a Dubins robot, which is particularly useful in GPS-denied environments. The key idea lies in the novel design of a sliding-surface-based error term in the standard PI controller. Interestingly, we also prove that the tracking error can be reduced by increasing the proportional gain, and is eliminated for circular fields with a non-zero integral gain. The effectiveness of our controller is validated via simulations using a fixed-wing UAV on a real dataset of the concentration distribution of PM 2.5 in Handan, China.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
169,979
0807.4074
Low-delay, Low-PAPR, High-rate Non-square Complex Orthogonal Designs
The maximal rate for non-square Complex Orthogonal Designs (CODs) with $n$ transmit antennas is ${1/2}+\frac{1}{n}$ if $n$ is even and ${1/2}+\frac{1}{n+1}$ if $n$ is odd, which are close to 1/2 for large values of $n.$ A class of maximal rate non-square CODs have been constructed by Liang (IEEE Trans. Inform. Theory, 2003), and Lu et al. (IEEE Trans. Inform. Theory, 2005) have shown that the decoding delay of the codes given by Liang can be reduced by 50% when the number of transmit antennas is a multiple of 4. Adams et al. (IEEE Trans. Inform. Theory, 2007) have shown that the designs of Liang are of minimal delay for $n$ equal to 1 and 3 modulo 4 and that of Lu et al. are of minimal delay when $n$ is a multiple of $4.$ However, these minimal delays are large compared to the delays of the rate-1/2 non-square CODs constructed by Tarokh et al. (IEEE Trans. Inform. Theory, 1999) from rate-1 real orthogonal designs (RODs). In this paper, we construct a class of rate-1/2 non-square CODs for any $n$ with a decoding delay equal to 50% of the delay of the rate-1/2 codes given by Tarokh et al. This is achieved by first giving a general construction of rate-1 square Real Orthogonal Designs (RODs), which includes as special cases the well-known constructions of Adams, Lax and Phillips and of Geramita and Pullman, and then making use of it to obtain the desired rate-1/2 non-square COD. For the case of 9 transmit antennas, our rate-1/2 COD is shown to be of minimal delay. The proposed construction results in designs with zero entries, which may have high Peak-to-Average Power Ratio (PAPR), and it is shown that by appropriate postmultiplication, a design with no zero entries can be obtained with no change in the code parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,119
2311.14391
ÚFAL CorPipe at CRAC 2023: Larger Context Improves Multilingual Coreference Resolution
We present CorPipe, the winning entry to the CRAC 2023 Shared Task on Multilingual Coreference Resolution. Our system is an improved version of our earlier multilingual coreference pipeline, and it surpasses other participants by a large margin of 4.5 percentage points. CorPipe first performs mention detection, followed by coreference linking via an antecedent-maximization approach on the retrieved spans. Both tasks are trained jointly on all available corpora using a shared pretrained language model. Our main improvements comprise inputs larger than 512 subwords and changing the mention decoding to support ensembling. The source code is available at https://github.com/ufal/crac2023-corpipe.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
410,093
2309.08743
Active Learning for Fine-Grained Sketch-Based Image Retrieval
The ability to retrieve a photo by mere free-hand sketching highlights the immense potential of Fine-grained sketch-based image retrieval (FG-SBIR). However, its rapid practical adoption, as well as scalability, is limited by the expense of acquiring faithful sketches for easily available photo counterparts. A solution to this problem is Active Learning, which could minimise the need for labeled sketches while maximising performance. Despite extensive studies in the field, there exists no work that utilises it for reducing sketching effort in FG-SBIR tasks. To this end, we propose a novel active learning sampling technique that drastically minimises the need for drawing photo sketches. Our proposed approach tackles the trade-off between uncertainty and diversity by utilising the relationship between the existing photo-sketch pair to a photo that does not have its sketch and augmenting this relation with its intermediate representations. Since our approach relies only on the underlying data distribution, it is agnostic of the modelling approach and hence is applicable to other cross-modal instance-level retrieval tasks as well. With experimentation over two publicly available fine-grained SBIR datasets ChairV2 and ShoeV2, we validate our approach and reveal its superiority over adapted baselines.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
392,302
2008.10417
Optimal control towards sustainable wastewater treatment plants based on multi-agent reinforcement learning
Wastewater treatment plants (WWTPs) are designed to eliminate pollutants and alleviate environmental pollution. However, the construction and operation of WWTPs consume resources, emit greenhouse gases (GHGs) and produce residual sludge, and thus require further optimization. WWTPs are complex to control and optimize because of high nonlinearity and variation. This study used a novel technique, multi-agent deep reinforcement learning, to simultaneously optimize dissolved oxygen and chemical dosage in a WWTP. The reward function was specially designed from a life cycle assessment (LCA) perspective to achieve sustainable optimization. Five scenarios were considered: baseline, three different effluent quality scenarios, and a cost-oriented scenario. The results show that optimization based on LCA has lower environmental impacts compared to the baseline scenario, as cost, energy consumption and greenhouse gas emissions reduce to 0.890 CNY/m3-ww, 0.530 kWh/m3-ww, and 2.491 kg CO2-eq/m3-ww respectively. The cost-oriented control strategy exhibits comparable overall performance to the LCA-driven strategy, since it sacrifices environmental benefits but has a lower cost of 0.873 CNY/m3-ww. It is worth mentioning that the retrofitting of WWTPs based on resources should be implemented with consideration of impact transfer. Specifically, the LCA SW scenario decreases eutrophication potential by 10 kg PO4-eq compared to the baseline within 10 days, while significantly increasing other indicators. The major contributors to each indicator are identified for future study and improvement. Finally, we discuss that novel dynamic control strategies require advanced sensors or large amounts of data, so the selection of control strategies should also consider economic and ecological conditions.
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
false
192,988
2006.09895
Ranking and benchmarking framework for sampling algorithms on synthetic data streams
In the fields of big data, AI, and streaming processing, we work with large amounts of data from multiple sources. Due to memory and network limitations, we process data streams on distributed systems to alleviate computational and network loads. When data streams with non-uniform distributions are processed, we often observe overloaded partitions due to the use of simple hash partitioning. To tackle this imbalance, we can use dynamic partitioning algorithms that require a sampling algorithm to precisely estimate the underlying distribution of the data stream. There is no standardized way to test these algorithms. We offer an extensible ranking framework with benchmark and hyperparameter optimization capabilities and supply our framework with a data generator that can handle concept drifts. Our work includes a generator for dynamic micro-bursts that we can apply to any data stream. We provide algorithms that react to concept drifts and compare those against the state-of-the-art algorithms using our framework.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
182,691
1805.07180
Approximate Model Counting by Partial Knowledge Compilation
Model counting is the problem of computing the number of satisfying assignments of a given propositional formula. Although exact model counters can be naturally furnished by most of the knowledge compilation (KC) methods, in practice, they fail to generate the compiled results for the exact counting of models for certain formulas due to the explosion in sizes. Decision-DNNF is an important KC language that captures most of the practical compilers. We propose a generalized Decision-DNNF (referred to as partial Decision-DNNF) via introducing a class of new leaf vertices (called unknown vertices), and then propose an algorithm called PartialKC to generate randomly partial Decision-DNNF formulas from the given formulas. An unbiased estimate of the model number can be computed via a randomly partial Decision-DNNF formula. Each calling of PartialKC consists of multiple callings of MicroKC, while each of the latter callings is a process of importance sampling equipped with KC technologies. The experimental results show that PartialKC is more accurate than both SampleSearch and SearchTreeSampler, PartialKC scales better than SearchTreeSampler, and the KC technologies can obviously accelerate sampling.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
97,758
2106.00887
Exploiting Global Contextual Information for Document-level Named Entity Recognition
Most existing named entity recognition (NER) approaches are based on sequence labeling models, which focus on capturing the local context dependencies. However, the way of taking one sentence as input prevents the modeling of non-sequential global context, which is useful especially when local context information is limited or ambiguous. To this end, we propose a model called Global Context enhanced Document-level NER (GCDoc) to leverage global contextual information from two levels, i.e., both word and sentence. At word-level, a document graph is constructed to model a wider range of dependencies between words, then obtain an enriched contextual representation for each word via graph neural networks (GNN). To avoid the interference of noise information, we further propose two strategies. First we apply the epistemic uncertainty theory to find out tokens whose representations are less reliable, thereby helping prune the document graph. Then a selective auxiliary classifier is proposed to effectively learn the weight of edges in document graph and reduce the importance of noisy neighbour nodes. At sentence-level, for appropriately modeling wider context beyond single sentence, we employ a cross-sentence module which encodes adjacent sentences and fuses it with the current sentence representation via attention and gating mechanisms. Extensive experiments on two benchmark NER datasets (CoNLL 2003 and Ontonotes 5.0 English dataset) demonstrate the effectiveness of our proposed model. Our model reaches F1 score of 92.22 (93.40 with BERT) on CoNLL 2003 dataset and 88.32 (90.49 with BERT) on Ontonotes 5.0 dataset, achieving new state-of-the-art performance.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
238,290
1903.00884
CodeGRU: Context-aware Deep Learning with Gated Recurrent Unit for Source Code Modeling
Recently, deep learning based Natural Language Processing (NLP) models have shown great potential in the modeling of source code. However, a major limitation of these approaches is that they take source code as simple tokens of text and ignore its contextual, syntactical and structural dependencies. In this work, we present CodeGRU, a gated recurrent unit based source code language model that is capable of capturing source code's contextual, syntactical and structural dependencies. We introduce a novel approach which can capture the source code context by leveraging the source code token types. Further, we adopt a novel approach which can learn variable-size context by taking into account the source code's syntax and structural information. We evaluate CodeGRU with a real-world data set and show that CodeGRU outperforms the state-of-the-art language models and helps reduce the vocabulary size by up to 24.93\%. Unlike previous works, we tested CodeGRU with an independent test set, which suggests that our methodology does not require the source code to come from the same domain as the training data when providing suggestions. We further evaluate CodeGRU with two software engineering applications: source code suggestion and source code completion. Our experiment confirms that the source code's contextual information can be vital and can help improve software language models. The extensive evaluation of CodeGRU shows that it outperforms the state-of-the-art models. The results further suggest that the proposed approach can help reduce the vocabulary size and is of practical use for software developers.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
123,129
1503.06383
Real-time Dynamic MRI Reconstruction using Stacked Denoising Autoencoder
In this work we address the problem of real-time dynamic MRI reconstruction. There are a handful of studies on this topic; these techniques are either based on compressed sensing or employ Kalman Filtering. These techniques cannot achieve the reconstruction speed necessary for real-time reconstruction. In this work, we propose a new approach to MRI reconstruction. We learn a non-linear mapping from the unstructured aliased images to the corresponding clean images using a stacked denoising autoencoder (SDAE). The training for SDAE is slow, but the reconstruction is very fast - only requiring a few matrix vector multiplications. In this work, we have shown that using SDAE one can reconstruct the MRI frame faster than the data acquisition rate, thereby achieving real-time reconstruction. The quality of reconstruction is of the same order as a previous compressed sensing based online reconstruction technique.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
41,338
1908.01539
Analysis and Exploitation of Synchronized Parallel Executions in Behavior Trees
Behavior Trees (BTs) are becoming a popular tool to model the behaviors of autonomous agents in the computer game and the robotics industry. One of the key advantages of BTs lies in their composability, where complex behaviors can be built by composing simpler ones. The parallel composition is the one with the highest potential since the complexity of composing pre-existing behaviors in parallel is much lower than the one needed using classical control architectures as finite state machines. However, the parallel composition is rarely used due to the underlying concurrency problems that are similar to the ones faced in concurrent programming. In this paper, we define two synchronization techniques to tackle the concurrency problems in BTs compositions and we show how to exploit them to improve behavior predictability. Also, we introduce measures to assess execution performance, and we show how design choices can affect them. To illustrate the proposed framework, we provide a set of experiments using the R1 robot and we gather statistically-significant data.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
140,792
1605.01348
Private Coded Caching
Recent work by Maddah-Ali and Niesen introduced coded caching which demonstrated the benefits of joint design of storage and transmission policies in content delivery networks. They studied a setup where a server communicates with a set of users, each equipped with a local cache, over a shared error-free link and proposed an order-optimal caching and delivery scheme. In this paper, we introduce the problem of secretive coded caching where we impose the additional constraint that a user should not be able to learn anything, from either the content stored in its cache or the server transmissions, about a file it did not request. We propose a feasible scheme for this setting and demonstrate its order-optimality with respect to information-theoretic lower bounds.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
55,469
2008.13305
An Integrated Approach to Produce Robust Models with High Efficiency
Deep Neural Networks (DNNs) need to be both efficient and robust for practical uses. Quantization and structure simplification are promising ways to adapt DNNs to mobile devices, and adversarial training is the most popular method to make DNNs robust. In this work, we try to obtain both features by applying a convergent relaxation quantization algorithm, Binary-Relax (BR), to a robust adversarially-trained model, ResNets Ensemble via Feynman-Kac Formalism (EnResNet). We also discover that high-precision quantization, such as ternary (tnn) and 4-bit, will produce sparse DNNs. However, this sparsity is unstructured under adversarial training. To solve the problems that adversarial training jeopardizes DNNs' accuracy on clean images and the structure of sparsity, we design a trade-off loss function that helps DNNs preserve their natural accuracy and improve channel sparsity. With our trade-off loss function, we achieve both goals with no reduction of resistance under weak attacks and very minor reduction of resistance under strong attacks. Together with quantized EnResNet and the trade-off loss function, we provide robust models that have high efficiency.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
193,814
2011.01054
Information-theoretic Task Selection for Meta-Reinforcement Learning
In Meta-Reinforcement Learning (meta-RL) an agent is trained on a set of tasks to prepare for and learn faster in new, unseen, but related tasks. The training tasks are usually hand-crafted to be representative of the expected distribution of test tasks and hence all used in training. We show that given a set of training tasks, learning can be both faster and more effective (leading to better performance in the test tasks), if the training tasks are appropriately selected. We propose a task selection algorithm, Information-Theoretic Task Selection (ITTS), based on information theory, which optimizes the set of tasks used for training in meta-RL, irrespectively of how they are generated. The algorithm establishes which training tasks are both sufficiently relevant for the test tasks, and different enough from one another. We reproduce different meta-RL experiments from the literature and show that ITTS improves the final performance in all of them.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
204,467
1802.09490
Controlling Human Utilization of Failure-Prone Systems via Taxes
We consider a game-theoretic model where individuals compete over a shared failure-prone system or resource. We investigate the effectiveness of a taxation mechanism in controlling the utilization of the resource at the Nash equilibrium when the decision-makers have behavioral risk preferences, captured by prospect theory. We first observe that heterogeneous prospect-theoretic risk preferences can lead to counter-intuitive outcomes. In particular, for resources that exhibit network effects, utilization can increase under taxation and there may not exist a tax rate that achieves the socially optimal level of utilization. We identify conditions under which utilization is monotone and continuous, and then characterize the range of utilizations that can be achieved by a suitable choice of tax rate. We further show that resource utilization is higher when players are charged differentiated tax rates compared to the case when all players are charged an identical tax rate, under suitable assumptions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
true
91,334
2403.11425
Narrative Feature or Structured Feature? A Study of Large Language Models to Identify Cancer Patients at Risk of Heart Failure
Cancer treatments are known to introduce cardiotoxicity, negatively impacting outcomes and survivorship. Identifying cancer patients at risk of heart failure (HF) is critical to improving cancer treatment outcomes and safety. This study examined machine learning (ML) models to identify cancer patients at risk of HF using electronic health records (EHRs), including traditional ML, Time-Aware long short-term memory (T-LSTM), and large language models (LLMs) using novel narrative features derived from the structured medical codes. We identified a cancer cohort of 12,806 patients from the University of Florida Health, diagnosed with lung, breast, and colorectal cancers, among which 1,602 individuals developed HF after cancer. The LLM, GatorTron-3.9B, achieved the best F1 scores, outperforming the traditional support vector machines by 39%, the T-LSTM deep learning model by 7%, and a widely used transformer model, BERT, by 5.6%. The analysis shows that the proposed narrative features remarkably increased feature density and improved performance.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
438,691
0906.5039
A new approach for digit recognition based on hand gesture analysis
We present in this paper a new approach for hand gesture analysis that allows digit recognition. The analysis is based on extracting a set of features from a hand image and then combining them by using an induction graph. The most important features we extract from each image are the finger locations, their heights and the distance between each pair of fingers. Our approach consists of three steps: (i) hand detection and localization, (ii) finger extraction and (iii) feature identification and combination for digit recognition. Each input image is assumed to contain only one person, thus we apply a fuzzy classifier to identify the skin pixels. In the finger extraction step, we attempt to remove all the hand components except the fingers; this process is based on hand anatomy properties. The final step consists of representing a histogram of the detected fingers in order to extract features that will be used for digit recognition. The approach is invariant to scale, rotation and translation of the hand. Some experiments have been undertaken to show the effectiveness of the proposed approach.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
3,986
2011.14963
Free Energy Minimization: A Unified Framework for Modelling, Inference, Learning, and Optimization
The goal of these lecture notes is to review the problem of free energy minimization as a unified framework underlying the definition of maximum entropy modelling, generalized Bayesian inference, learning with latent variables, statistical learning analysis of generalization, and local optimization. Free energy minimization is first introduced, here and historically, as a thermodynamic principle. Then, it is described mathematically in the context of Fenchel duality. Finally, the mentioned applications to modelling, inference, learning, and optimization are covered starting from basic principles.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
208,947
2407.00584
Hyperparameter Optimization for Randomized Algorithms: A Case Study on Random Features
Randomized algorithms exploit stochasticity to reduce computational complexity. One important example is random feature regression (RFR) that accelerates Gaussian process regression (GPR). RFR approximates an unknown function with a random neural network whose hidden weights and biases are sampled from a probability distribution. Only the final output layer is fit to data. In randomized algorithms like RFR, the hyperparameters that characterize the sampling distribution greatly impact performance, yet are not directly accessible from samples. This makes optimization of hyperparameters via standard (gradient-based) optimization tools inapplicable. Inspired by Bayesian ideas from GPR, this paper introduces a random objective function that is tailored for hyperparameter tuning of vector-valued random features. The objective is minimized with ensemble Kalman inversion (EKI). EKI is a gradient-free particle-based optimizer that is scalable to high-dimensions and robust to randomness in objective functions. A numerical study showcases the new black-box methodology to learn hyperparameter distributions in several problems that are sensitive to the hyperparameter selection: two global sensitivity analyses, integrating a chaotic dynamical system, and solving a Bayesian inverse problem from atmospheric dynamics. The success of the proposed EKI-based algorithm for RFR suggests its potential for automated optimization of hyperparameters arising in other randomized algorithms.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
468,924
2401.00165
Mitigating the Impact of False Negatives in Dense Retrieval with Contrastive Confidence Regularization
In open-domain Question Answering (QA), dense retrieval is crucial for finding relevant passages for answer generation. Typically, contrastive learning is used to train a retrieval model that maps passages and queries to the same semantic space. The objective is to make similar ones closer and dissimilar ones further apart. However, training such a system is challenging due to the false negative issue, where relevant passages may be missed during data annotation. Hard negative sampling, which is commonly used to improve contrastive learning, can introduce more noise in training. This is because hard negatives are those closer to a given query, and thus more likely to be false negatives. To address this issue, we propose a novel contrastive confidence regularizer for Noise Contrastive Estimation (NCE) loss, a commonly used loss for dense retrieval. Our analysis shows that the regularizer helps dense retrieval models be more robust against false negatives with a theoretical guarantee. Additionally, we propose a model-agnostic method to filter out noisy negative passages in the dataset, improving any downstream dense retrieval models. Through experiments on three datasets, we demonstrate that our method achieves better retrieval performance in comparison to existing state-of-the-art dense retrieval systems.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
418,904
2502.00246
Context-Preserving Tensorial Reconfiguration in Large Language Model Training
Handling long-range dependencies in neural architectures has remained a persistent challenge due to computational limitations and inefficient contextual retention mechanisms. Tensorial operations have provided a foundation for restructuring model representations, yet conventional architectures have struggled to incorporate such techniques without introducing excessive complexity. A novel approach, Context-Preserving Tensorial Reconfiguration (CPTR), enables dynamic reorganization of weight tensors through structured factorization and adaptive contraction, allowing for enhanced contextual integration without substantial computational overhead. Empirical evaluations demonstrate that CPTR improves coherence retention across extended sequences, leading to measurable reductions in perplexity and improved recall accuracy for long-context tasks. Performance comparisons reveal that CPTR-enhanced models exhibit greater computational efficiency and reduced memory consumption while maintaining competitive language generation fluency and accuracy. Gradient stability metrics further validate the improved training efficiency, revealing more controlled variance in weight updates. Comparative studies across baseline and CPTR-enhanced models confirm that tensorial reconfiguration contributes to more stable and computationally efficient language modeling. The findings support the potential of CPTR in refining contemporary neural architectures for tasks requiring long-range contextual understanding and efficient memory utilization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
529,276
2406.17313
Modelling and Hovering Stabilisation of a Free-Rotating Wing UAV
We propose a multibody model of a free-wing UAV. This model allows obtaining simulations of the UAV's behaviour and, in the future, designing a control law stabilising the entire flight envelope (hovering and forward flight). We also describe the realisation of a prototype and a comparison of possible methods for estimating the UAV's states. With this prototype, we report on experimental hovering flights with a non-linear incremental dynamic inversion controller to stabilise the wing and a proportional-derivative controller for fuselage stabilisation.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
467,521
2402.11676
A Multi-Aspect Framework for Counter Narrative Evaluation using Large Language Models
Counter narratives - informed responses to hate speech contexts designed to refute hateful claims and de-escalate encounters - have emerged as an effective hate speech intervention strategy. While previous work has proposed automatic counter narrative generation methods to aid manual interventions, the evaluation of these approaches remains underdeveloped. Previous automatic metrics for counter narrative evaluation lack alignment with human judgment as they rely on superficial reference comparisons instead of incorporating key aspects of counter narrative quality as evaluation criteria. To address prior evaluation limitations, we propose a novel evaluation framework prompting LLMs to provide scores and feedback for generated counter narrative candidates using 5 defined aspects derived from guidelines from counter narrative specialized NGOs. We found that LLM evaluators achieve strong alignment to human-annotated scores and feedback and outperform alternative metrics, indicating their potential as multi-aspect, reference-free and interpretable evaluators for counter narrative evaluation.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
430,508
2204.10497
Active Domain-Invariant Self-Localization Using Ego-Centric and World-Centric Maps
The training of a next-best-view (NBV) planner for visual place recognition (VPR) is a fundamentally important task in autonomous robot navigation, for which a typical approach is the use of visual experiences that are collected in the target domain as training data. However, the collection of a wide variety of visual experiences in everyday navigation is costly and prohibitive for real-time robotic applications. We address this issue by employing a novel {\it domain-invariant} NBV planner. A standard VPR subsystem based on a convolutional neural network (CNN) is assumed to be available, and its domain-invariant state recognition ability is proposed to be transferred to train the domain-invariant NBV planner. Specifically, we divide the visual cues that are available from the CNN model into two types: the output layer cue (OLC) and intermediate layer cue (ILC). The OLC is available at the output layer of the CNN model and aims to estimate the state of the robot (e.g., the robot viewpoint) with respect to the world-centric view coordinate system. The ILC is available within the middle layers of the CNN model as a high-level description of the visual content (e.g., a saliency image) with respect to the ego-centric view. In our framework, the ILC and OLC are mapped to a state vector and subsequently used to train a multiview NBV planner via deep reinforcement learning. Experiments using the public NCLT dataset validate the effectiveness of the proposed method.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
292,809
1804.04316
Entangled-photon decision maker
The competitive multi-armed bandit (CMAB) problem is related to social issues such as maximizing total social benefits while preserving equality among individuals by overcoming conflicts between individual decisions, which could seriously decrease social benefits. The study described herein provides experimental evidence that entangled photons physically resolve the CMAB in the 2-arms 2-players case, maximizing the social rewards while ensuring equality. Moreover, we demonstrated that deception, or outperforming the other player by receiving a greater reward, cannot be accomplished in a polarization-entangled-photon-based system, while deception is achievable in systems based on classical polarization-correlated photons with fixed polarizations. Besides, random polarization-correlated photons have been studied numerically and shown to ensure equality between players and deception prevention as well, although the CMAB maximum performance is reduced as compared with entangled photon experiments. Autonomous alignment schemes for polarization bases were also experimentally demonstrated based only on decision conflict information observed by an individual without communications between players. This study paves a way for collective decision making in uncertain dynamically changing environments based on entangled quantum states, a crucial step toward utilizing quantum systems for intelligent functionalities.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
94,811
2502.03146
Symmetry-Aware Bayesian Flow Networks for Crystal Generation
The discovery of new crystalline materials is essential to scientific and technological progress. However, traditional trial-and-error approaches are inefficient due to the vast search space. Recent advancements in machine learning have enabled generative models to predict new stable materials by incorporating structural symmetries and to condition the generation on desired properties. In this work, we introduce SymmBFN, a novel symmetry-aware Bayesian Flow Network (BFN) for crystalline material generation that accurately reproduces the distribution of space groups found in experimentally observed crystals. SymmBFN substantially improves efficiency, generating stable structures at least 50 times faster than the next-best method. Furthermore, we demonstrate its capability for property-conditioned generation, enabling the design of materials with tailored properties. Our findings establish BFNs as an effective tool for accelerating the discovery of crystalline materials.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
530,614
1210.6293
MLPACK: A Scalable C++ Machine Learning Library
MLPACK is a state-of-the-art, scalable, multi-platform C++ machine learning library released in late 2011 offering both a simple, consistent API accessible to novice users and high performance and flexibility to expert users by leveraging modern features of C++. MLPACK provides cutting-edge algorithms whose benchmarks exhibit far better performance than other leading machine learning libraries. MLPACK version 1.0.3, licensed under the LGPL, is available at http://www.mlpack.org.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
true
19,357
2305.04930
Simultaneously Transmitting and Reflecting RIS (STAR-RIS) Assisted Multi-Antenna Covert Communications: Analysis and Optimization
This paper investigates multi-antenna covert communications assisted by a simultaneously transmitting and reflecting reconfigurable intelligent surface (STAR-RIS). In particular, to shelter the existence of covert communications between a multi-antenna transmitter and a single-antenna receiver from a warden, a friendly full-duplex receiver with two antennas is leveraged to make contributions where one antenna is responsible for receiving the transmitted signals and the other one transmits jamming signals with a varying power to confuse the warden. Considering the worst case, the closed-form expression of the minimum detection error probability (DEP) at the warden is derived and utilized in a covert constraint to guarantee the system performance. Then, we formulate an optimization problem maximizing the covert rate of the system under the covertness constraint and quality of service (QoS) constraint with communication outage analysis. To jointly design the active and passive beamforming of the transmitter and STAR-RIS, an iterative algorithm based on the semi-definite relaxation (SDR) method and Dinkelbach's algorithm is proposed to effectively solve the non-convex optimization problem. Simulation results show that the proposed STAR-RIS-assisted scheme highly outperforms the case with a conventional RIS, which validates the effectiveness of the proposed algorithm as well as the superiority of STAR-RIS in guaranteeing the covertness of wireless communications.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
362,955
2011.06317
Learning causal representations for robust domain adaptation
Domain adaptation solves the learning problem in a target domain by leveraging the knowledge in a relevant source domain. While remarkable advances have been made, almost all existing domain adaptation methods heavily require large amounts of unlabeled target domain data for learning domain invariant representations to achieve good generalizability on the target domain. In fact, in many real-world applications, target domain data may not always be available. In this paper, we study the cases where at the training phase the target domain data is unavailable and only well-labeled source domain data is available, called robust domain adaptation. To tackle this problem, under the assumption that causal relationships between features and the class variable are robust across domains, we propose a novel Causal AutoEncoder (CAE), which integrates deep autoencoder and causal structure learning into a unified model to learn causal representations only using data from a single source domain. Specifically, a deep autoencoder model is adopted to learn low-dimensional representations, and a causal structure learning model is designed to separate the low-dimensional representations into two groups: causal representations and task-irrelevant representations. Using three real-world datasets the extensive experiments have validated the effectiveness of CAE compared to eleven state-of-the-art methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
206,209
2305.06590
FactKG: Fact Verification via Reasoning on Knowledge Graphs
In real-world applications, knowledge graphs (KGs) are widely used in various domains (e.g., medical applications and dialogue agents). However, KGs have not been adequately utilized as a knowledge source for fact verification. KGs can be a valuable knowledge source for fact verification due to their reliability and broad applicability. A KG consists of nodes and edges, which makes it clear how concepts are linked together, allowing machines to reason over chains of topics. However, there are many challenges in understanding how these machine-readable concepts map to information in text. To enable the community to better use KGs, we introduce a new dataset, FactKG: Fact Verification via Reasoning on Knowledge Graphs. It consists of 108k natural language claims with five types of reasoning: One-hop, Conjunction, Existence, Multi-hop, and Negation. Furthermore, FactKG contains various linguistic patterns, including colloquial-style claims as well as written-style claims, to increase practicality. Lastly, we develop a baseline approach and analyze FactKG over these reasoning types. We believe FactKG can advance both reliability and practicality in KG-based fact verification.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
363,600
2211.15398
Leveraging per Image-Token Consistency for Vision-Language Pre-training
Most existing vision-language pre-training (VLP) approaches adopt cross-modal masked language modeling (CMLM) to learn vision-language associations. However, we find that CMLM is insufficient for this purpose, based on the following observations: (1) Modality bias: a considerable fraction of the masked tokens in CMLM can be recovered from the language information alone, ignoring the visual inputs. (2) Under-utilization of the unmasked tokens: CMLM primarily focuses on the masked tokens but cannot simultaneously leverage the other tokens to learn vision-language associations. To address these limitations, we propose EPIC (lEveraging Per Image-Token Consistency for vision-language pre-training). In EPIC, for each image-sentence pair, we mask tokens that are salient to the image (i.e., Saliency-based Masking Strategy), replace them with alternatives sampled from a language model (i.e., Inconsistent Token Generation Procedure), and then require the model to determine, for each token in the sentence, whether it is consistent with the image (i.e., Image-Token Consistency Task). The proposed EPIC method is easily combined with existing pre-training methods. Extensive experiments show that combining the EPIC method with state-of-the-art pre-training approaches, including ViLT, ALBEF, METER, and X-VLM, leads to significant improvements on downstream tasks. The code is released at https://github.com/gyhdog99/epic.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
333,243
2306.03374
PGformer: Proxy-Bridged Game Transformer for Multi-Person Highly Interactive Extreme Motion Prediction
Multi-person motion prediction is a challenging task, especially for real-world scenarios with highly interactive persons. Most previous works have been devoted to studying the case of weak interactions (e.g., walking together), in which forecasting each human pose in isolation can typically still achieve good performance. This paper focuses on collaborative motion prediction for multiple persons with extreme motions and attempts to explore the relationships between the pose trajectories of highly interactive persons. Specifically, a novel cross-query attention (XQA) module is proposed to bilaterally learn the cross-dependencies between the two pose sequences tailored for this situation. A proxy unit is additionally introduced to bridge the involved persons, which cooperates with our proposed XQA module and subtly controls the bidirectional spatial information flows. These designs are then integrated into a Transformer-based architecture, and the resulting model is called the Proxy-bridged Game Transformer (PGformer) for multi-person interactive motion prediction. Its effectiveness has been evaluated on the challenging ExPI dataset, which involves highly interactive actions. Our PGformer consistently outperforms the state-of-the-art methods in both short- and long-term predictions by a large margin. Besides, our approach is also compatible with the weakly interactive CMU-Mocap and MuPoTS-3D datasets and extends to the case of more than two individuals with encouraging results.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
371,294
2305.18096
Improving Textless Spoken Language Understanding with Discrete Units as Intermediate Target
Spoken Language Understanding (SLU) is a task that aims to extract semantic information from spoken utterances. Previous research has made progress in end-to-end SLU by using paired speech-text data, such as pre-trained Automatic Speech Recognition (ASR) models or paired text as intermediate targets. However, acquiring paired transcripts is expensive and impractical for unwritten languages. On the other hand, textless SLU extracts semantic information from speech without utilizing paired transcripts, but the absence of intermediate targets and training guidance often results in suboptimal performance. In this work, inspired by the content-disentangled discrete units from self-supervised speech models, we propose using discrete units as intermediate guidance to improve textless SLU performance. Our method surpasses the baseline method on five SLU benchmark corpora. Additionally, we find that unit guidance facilitates few-shot learning and enhances the model's ability to handle noise.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
368,857