id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1611.09240 | Linear vs Nonlinear MPC for Trajectory Tracking Applied to Rotary Wing Micro Aerial Vehicles | Precise trajectory tracking is a crucial property for micro aerial vehicles (MAVs) to operate in cluttered environments or under disturbances. In this paper we present a detailed comparison between two state-of-the-art model-based control techniques for MAV trajectory tracking. A classical linear model predictive controller (LMPC) is presented and compared against a more advanced nonlinear model predictive controller (NMPC) that considers the full system model. In a careful analysis we show the advantages and disadvantages of the two implementations in terms of speed and tracking performance. This is achieved by evaluating hovering performance, step response, and aggressive trajectory tracking under nominal conditions and under external wind disturbances. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 64,644 |
2208.14569 | A new construction of nonlinear codes via algebraic function fields | In coding theory, constructing codes with good parameters is one of the most important and fundamental problems. Though a great many good codes have been produced, most of them are defined over alphabets of sizes equal to prime powers. In this paper, we provide a new explicit construction of $(q+1)$-ary nonlinear codes via algebraic function fields, where $q$ is a prime power. Our codes are constructed by evaluations of rational functions at all rational places of the algebraic function field. Compared with algebraic geometry codes, the main difference is that we allow rational functions to be evaluated at pole places. After evaluating rational functions from a union of Riemann-Roch spaces, we obtain a family of nonlinear codes over the alphabet $\mathbb{F}_{q}\cup \{\infty\}$. It turns out that our codes have better parameters than those obtained from MDS codes or good algebraic geometry codes via code alphabet extension and restriction. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 315,354 |
1802.08241 | Hessian-based Analysis of Large Batch Training and Robustness to Adversaries | Large batch size training of Neural Networks has been shown to incur accuracy loss when trained with the current methods. The exact underlying reasons for this are still not completely understood. Here, we study large batch size training through the lens of the Hessian operator and robust optimization. In particular, we perform a Hessian based study to analyze exactly how the landscape of the loss function changes when training with large batch size. We compute the true Hessian spectrum, without approximation, by back-propagating the second derivative. Extensive experiments on multiple networks show that saddle-points are not the cause of the generalization gap in large batch size training, and the results consistently show that large batch size training converges to points with noticeably higher Hessian spectrum. Furthermore, we show that robust training allows one to favor flat areas, as points with large Hessian spectrum show poor robustness to adversarial perturbation. We further study this relationship, and provide empirical and theoretical proof that the inner loop for robust training is a saddle-free optimization problem \textit{almost everywhere}. We present detailed experiments with five different network architectures, including a residual network, tested on MNIST, CIFAR-10, and CIFAR-100 datasets. We have open sourced our method which can be accessed at [1]. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 91,058 |
1604.07788 | A Framework for Human Pose Estimation in Videos | In this paper, we present a method to estimate a sequence of human poses in unconstrained videos. We aim to demonstrate that by using temporal information, the human pose estimation results can be improved over image based pose estimation methods. In contrast to the commonly employed graph optimization formulation, which is NP-hard and needs approximate solutions, we formulate this problem as a unified two-stage tree-based optimization problem for which an efficient and exact solution exists. Although the proposed method finds an exact solution, it does not sacrifice the ability to model the spatial and temporal constraints between body parts in the frames; in fact it models the {\em symmetric} parts better than the existing methods. The proposed method is based on two main ideas: `Abstraction' and `Association' to enforce the intra- and inter-frame body part constraints without inducing extra computational complexity to the polynomial time solution. Using the idea of `Abstraction', a new concept of `abstract body part' is introduced to conceptually combine the symmetric body parts and model them in the tree based body part structure. Using the idea of `Association', the optimal tracklets are generated for each abstract body part, in order to enforce the spatiotemporal constraints between body parts in adjacent frames. A sequence of the best poses is inferred from the abstract body part tracklets through the tree-based optimization. Finally, the poses are refined by limb alignment and refinement schemes. We evaluated the proposed method on three publicly available video based human pose estimation datasets, and obtained dramatically improved performance compared to the state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 55,129 |
2202.05195 | Uncovering Instabilities in Variational-Quantum Deep Q-Networks | Deep Reinforcement Learning (RL) has considerably advanced over the past decade. At the same time, state-of-the-art RL algorithms require a large computational budget in terms of training time to converge. Recent work has started to approach this problem through the lens of quantum computing, which promises theoretical speed-ups for several traditionally hard tasks. In this work, we examine a class of hybrid quantum-classical RL algorithms that we collectively refer to as variational quantum deep Q-networks (VQ-DQN). We show that VQ-DQN approaches are subject to instabilities that cause the learned policy to diverge, study the extent to which this affects the reproducibility of established results based on classical simulation, and perform systematic experiments to identify potential explanations for the observed instabilities. Additionally, and in contrast to most existing work on quantum reinforcement learning, we execute RL algorithms on an actual quantum processing unit (an IBM Quantum Device) and investigate differences in behaviour between simulated and physical quantum systems that suffer from implementation deficiencies. Our experiments show that, contrary to claims in the literature, it cannot be conclusively decided if known quantum approaches, even if simulated without physical imperfections, can provide an advantage as compared to classical approaches. Finally, we provide a robust, universal and well-tested implementation of VQ-DQN as a reproducible testbed for future experiments. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 279,798 |
1908.06369 | Robust DCD-Based Recursive Adaptive Algorithms | The dichotomous coordinate descent (DCD) algorithm has been successfully used for significant reduction in the complexity of recursive least squares (RLS) algorithms. In this work, we generalize the application of the DCD algorithm to RLS adaptive filtering in impulsive noise scenarios and derive a unified update formula. By employing different robust strategies against impulsive noise, we develop novel computationally efficient DCD-based robust recursive algorithms. Furthermore, to equip the proposed algorithms with the ability to track abrupt changes in unknown systems, a simple variable forgetting factor mechanism is also developed. Simulation results for channel identification scenarios in impulsive noise demonstrate the effectiveness of the proposed algorithms. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 141,997 |
1106.4215 | Heterogenous mean-field analysis of a generalized voter-like model on networks | We propose a generalized framework for the study of voter models in complex networks at the heterogeneous mean-field (HMF) level that (i) yields a unified picture for existing copy/invasion processes and (ii) allows for the introduction of further heterogeneity through degree-selectivity rules. In the context of the HMF approximation, our model is capable of providing straightforward estimates for central quantities such as the exit probability and the consensus/fixation time, based on the statistical properties of the complex network alone. The HMF approach has the advantage of being readily applicable also in those cases in which exact solutions are difficult to work out. Finally, the unified formalism allows one to understand previously proposed voter-like processes as simple limits of the generalized model. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,931 |
2005.06023 | Increased-confidence adversarial examples for deep learning counter-forensics | Transferability of adversarial examples is a key issue in applying this kind of attack against multimedia forensics (MMF) techniques based on Deep Learning (DL) in a real-life setting. Adversarial example transferability, in fact, would open the way to the deployment of successful counter-forensics attacks also in cases where the attacker does not have a full knowledge of the to-be-attacked system. Some preliminary works have shown that adversarial examples against CNN-based image forensics detectors are in general non-transferable, at least when the basic versions of the attacks implemented in the most popular libraries are adopted. In this paper, we introduce a general strategy to increase the strength of the attacks and evaluate their transferability when such a strength varies. We experimentally show that, in this way, attack transferability can be largely increased, at the expense of a larger distortion. Our research confirms the security threats posed by the existence of adversarial examples even in multimedia forensics scenarios, thus calling for new defense strategies to improve the security of DL-based MMF techniques. | false | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | 176,889 |
1112.0736 | Measurement-induced nonlocality based on the relative entropy | We quantify the measurement-induced nonlocality [Luo and Fu, Phys. Rev. Lett. 106, 120401 (2011)] from the perspective of the relative entropy. This quantification leads to an operational interpretation for the measurement-induced nonlocality, namely, it is the maximal entropy increase after the locally invariant measurements. The relative entropy of nonlocality is upper bounded by the entropy of the measured subsystem. We establish a relationship between the relative entropy of nonlocality and the geometric nonlocality based on the Hilbert-Schmidt norm, and show that it is equal to the maximal distillable entanglement. Several trade-off relations are obtained for tripartite pure states. We also give explicit expressions for the relative entropy of nonlocality for Bell-diagonal states. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 13,306 |
2306.14343 | TCE: A Test-Based Approach to Measuring Calibration Error | This paper proposes a new metric to measure the calibration error of probabilistic binary classifiers, called test-based calibration error (TCE). TCE incorporates a novel loss function based on a statistical test to examine the extent to which model predictions differ from probabilities estimated from data. It offers (i) a clear interpretation, (ii) a consistent scale that is unaffected by class imbalance, and (iii) an enhanced visual representation with respect to the standard reliability diagram. In addition, we introduce an optimality criterion for the binning procedure of calibration error metrics based on a minimal estimation error of the empirical probabilities. We provide a novel computational algorithm for optimal bins under bin-size constraints. We demonstrate properties of TCE through a range of experiments, including multiple real-world imbalanced datasets and ImageNet 1000. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 375,645 |
2010.06238 | Massive MIMO for Cellular-Connected UAV: Challenges and Promising Solutions | Massive multiple-input multiple-output (MIMO) is a promising technology for enabling cellular-connected unmanned aerial vehicle (UAV) communications in the future. Equipped with full-dimensional large arrays, ground base stations (GBSs) can apply adaptive fine-grained three-dimensional (3D) beamforming to mitigate the strong interference between high-altitude UAVs and low-altitude terrestrial users, thus significantly enhancing the network spectral efficiency. However, the performance gain of massive MIMO critically depends on the accurate channel state information (CSI) of both UAVs and terrestrial users at the GBSs, which is practically difficult to achieve due to UAV-induced pilot contamination and UAV's high mobility in 3D. Moreover, the increasingly popular applications relying on a large group of coordinated UAVs or UAV swarm as well as the practical hybrid GBS beamforming architecture for massive MIMO further complicate the pilot contamination and channel/beam tracking problems. In this article, we provide an overview of the above challenging issues, propose new solutions to cope with them, and discuss promising directions for future research. Preliminary simulation results are also provided to validate the effectiveness of proposed solutions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 200,419 |
1706.09789 | Two-Stage Synthesis Networks for Transfer Learning in Machine Comprehension | We develop a technique for transfer learning in machine comprehension (MC) using a novel two-stage synthesis network (SynNet). Given a high-performing MC model in one domain, our technique aims to answer questions about documents in another domain, where we use no labeled data of question-answer pairs. Using the proposed SynNet with a pretrained model from the SQuAD dataset on the challenging NewsQA dataset, we achieve an F1 measure of 44.3% with a single model and 46.6% with an ensemble, approaching performance of in-domain models (F1 measure of 50.0%) and outperforming the out-of-domain baseline of 7.6%, without use of provided annotations. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 76,200 |
2212.12121 | Federated PCA on Grassmann Manifold for Anomaly Detection in IoT Networks | In the era of Internet of Things (IoT), network-wide anomaly detection is a crucial part of monitoring IoT networks due to the inherent security vulnerabilities of most IoT devices. Principal Components Analysis (PCA) has been proposed to separate network traffic into two disjoint subspaces corresponding to normal and malicious behaviors for anomaly detection. However, the privacy concerns and limitations of devices' computing resources compromise the practical effectiveness of PCA. We propose a federated PCA-based Grassmannian optimization framework that coordinates IoT devices to aggregate a joint profile of normal network behaviors for anomaly detection. First, we introduce a privacy-preserving federated PCA framework to simultaneously capture the profile of various IoT devices' traffic. Then, we investigate the alternating direction method of multipliers gradient-based learning on the Grassmann manifold to guarantee fast training and the absence of detection latency using limited computational resources. Empirical results on the NSL-KDD dataset demonstrate that our method outperforms baseline approaches. Finally, we show that the Grassmann manifold algorithm is highly adapted for IoT anomaly detection, which permits drastically reducing the analysis time of the system. To the best of our knowledge, this is the first federated PCA algorithm for anomaly detection meeting the requirements of IoT networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 337,967 |
1404.6445 | Belief merging within fragments of propositional logic | Recently, belief change within the framework of fragments of propositional logic has gained increasing attention. Previous works focused on belief contraction and belief revision on the Horn fragment. However, the problem of belief merging within fragments of propositional logic has been neglected so far. This paper presents a general approach to define new merging operators derived from existing ones such that the result of merging remains in the fragment under consideration. Our approach is not limited to the case of Horn fragment but applicable to any fragment of propositional logic characterized by a closure property on the sets of models of its formulae. We study the logical properties of the proposed operators in terms of satisfaction of merging postulates, considering in particular distance-based merging operators for Horn and Krom fragments. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 32,590 |
2312.06653 | Adaptive Human Trajectory Prediction via Latent Corridors | Human trajectory prediction is typically posed as a zero-shot generalization problem: a predictor is learnt on a dataset of human motion in training scenes, and then deployed on unseen test scenes. While this paradigm has yielded tremendous progress, it fundamentally assumes that trends in human behavior within the deployment scene are constant over time. As such, current prediction models are unable to adapt to scene-specific transient human behaviors, such as crowds temporarily gathering to see buskers, pedestrians hurrying through the rain and avoiding puddles, or a protest breaking out. We formalize the problem of scene-specific adaptive trajectory prediction and propose a new adaptation approach inspired by prompt tuning called latent corridors. By augmenting the input of any pre-trained human trajectory predictor with learnable image prompts, the predictor can improve in the deployment scene by inferring trends from extremely small amounts of new data (e.g., 2 humans observed for 30 seconds). With less than 0.1% additional model parameters, we see up to 23.9% ADE improvement in MOTSynth simulated data and 16.4% ADE in MOT and Wildtrack real pedestrian data. Qualitatively, we observe that latent corridors imbue predictors with an awareness of scene geometry and scene-specific human behaviors that non-adaptive predictors struggle to capture. The project website can be found at https://neerja.me/atp_latent_corridors/. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 414,614 |
1807.08233 | Rapid Autonomous Car Control based on Spatial and Temporal Visual Cues | We present a novel approach to modern car control utilizing a combination of Deep Convolutional Neural Networks and Long Short-Term Memory Systems, both of which are a subsection of Hierarchical Representations Learning, more commonly known as Deep Learning. Using Deep Convolutional Neural Networks and Long Short-Term Memory Systems (DCNN/LSTM), we propose an end-to-end approach to accurately predict steering angles and throttle values. We use this algorithm on our latest robot, El Toro Grande 1 (ETG), which is equipped with a variety of sensors in order to localize itself in its environment. Using previous training data and the data that it collects during circuit and drag races, it predicts throttle and steering angles in order to stay on path and avoid colliding into other robots. This allows ETG to theoretically race on any track with sufficient training data. | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | 103,489 |
2401.07115 | Open Models, Closed Minds? On Agents Capabilities in Mimicking Human Personalities through Open Large Language Models | The emergence of human-like behaviors in Large Language Models (LLMs) has led to a closer connection between NLP and human psychology. Scholars have been studying the inherent personalities exhibited by LLMs and attempting to incorporate human traits and behaviors into them. However, these efforts have primarily focused on commercially-licensed LLMs, neglecting the widespread use and notable advancements seen in Open LLMs. This work aims to address this gap by employing a set of 12 LLM Agents based on the most representative Open models and subjecting them to a series of assessments concerning the Myers-Briggs Type Indicator (MBTI) test and the Big Five Inventory (BFI) test. Our approach involves evaluating the intrinsic personality traits of Open LLM agents and determining the extent to which these agents can mimic human personalities when conditioned by specific personalities and roles. Our findings unveil that $(i)$ each Open LLM agent showcases distinct human personalities; $(ii)$ personality-conditioned prompting produces varying effects on the agents, with only few successfully mirroring the imposed personality, while most of them being ``closed-minded'' (i.e., they retain their intrinsic traits); and $(iii)$ combining role and personality conditioning can enhance the agents' ability to mimic human personalities. Our work represents a step up in understanding the dense relationship between NLP and human psychology through the lens of Open LLMs. | true | false | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | 421,423 |
2311.14521 | GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting | 3D editing plays a crucial role in many areas such as gaming and virtual reality. Traditional 3D editing methods, which rely on representations like meshes and point clouds, often fall short in realistically depicting complex scenes. On the other hand, methods based on implicit 3D representations, like Neural Radiance Field (NeRF), render complex scenes effectively but suffer from slow processing speeds and limited control over specific scene areas. In response to these challenges, our paper presents GaussianEditor, an innovative and efficient 3D editing algorithm based on Gaussian Splatting (GS), a novel 3D representation. GaussianEditor enhances precision and control in editing through our proposed Gaussian semantic tracing, which traces the editing target throughout the training process. Additionally, we propose Hierarchical Gaussian splatting (HGS) to achieve stabilized and fine results under stochastic generative guidance from 2D diffusion models. We also develop editing strategies for efficient object removal and integration, a challenging task for existing methods. Our comprehensive experiments demonstrate GaussianEditor's superior control, efficacy, and rapid performance, marking a significant advancement in 3D editing. Project Page: https://buaacyw.github.io/gaussian-editor/ | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 410,138 |
2404.16023 | Learning Car-Following Behaviors Using Bayesian Matrix Normal Mixture Regression | Learning and understanding car-following (CF) behaviors are crucial for microscopic traffic simulation. Traditional CF models, though simple, often lack generalization capabilities, while many data-driven methods, despite their robustness, operate as "black boxes" with limited interpretability. To bridge this gap, this work introduces a Bayesian Matrix Normal Mixture Regression (MNMR) model that simultaneously captures feature correlations and temporal dynamics inherent in CF behaviors. This approach is distinguished by its separate learning of row and column covariance matrices within the model framework, offering an insightful perspective into the human driver decision-making processes. Through extensive experiments, we assess the model's performance across various historical steps of inputs, predictive steps of outputs, and model complexities. The results consistently demonstrate our model's adeptness in effectively capturing the intricate correlations and temporal dynamics present during CF. A focused case study further illustrates the model's superior interpretability in identifying distinct operational conditions through the learned mean and covariance matrices. This not only underlines our model's effectiveness in understanding complex human driving behaviors in CF scenarios but also highlights its potential as a tool for enhancing the interpretability of CF behaviors in traffic simulations and autonomous driving systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 449,321 |
1912.12555 | Visual Perception and Modelling in Unstructured Orchard for Apple Harvesting Robots | Vision perception and modelling are the essential tasks of robotic harvesting in the unstructured orchard. This paper develops a framework of visual perception and modelling for robotic harvesting of fruits in the orchard environments. The developed framework includes visual perception, scenarios mapping, and fruit modelling. The visual perception module utilises a deep-learning model to perform multi-purpose visual perception tasks within the working scenarios; the scenarios mapping module applies OctoMap to represent the multiple classes of objects or elements within the environment; the fruit modelling module estimates the geometry property of objects and estimates the proper access pose of each fruit. The developed framework is implemented and evaluated in the apple orchards. The experimental results show that the visual perception and modelling algorithm can accurately detect and localise the fruits and model working scenarios in real orchard environments. The $F_{1}$ score and mean intersection of union of the visual perception module on fruit detection and segmentation are 0.833 and 0.852, respectively. The accuracy of the fruit modelling in terms of centre localisation and pose estimation are 0.955 and 0.923, respectively. Overall, an accurate visual perception and modelling algorithm is presented in this paper. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 158,873 |
1907.11237 | Precise localization relative to 3D Automated Driving map using the Decentralized Kalman filter with Feedback | This paper presents a novel high-precision localization approach for Automated Driving (AD) relative to a 3D map. The AD maps are not necessarily flat; hence, the problem of localization is solved here in 3D. The vehicle motion is modeled as piecewise planar but with vertical curvature, which is approximated with clothoids. The localization problem is solved with a Decentralized Kalman filter with feedback (DKFF) by fusing all available information. The odometry, visual odometry, GPS, and the different sensor and mono-camera inputs are fused together to obtain the precise localization relative to the map. Polylines and landmarks from the map are dealt with in the same way because of the line-point geometrical duality. A set of weak filters is accumulated in the strong tracking approach, leading to the precise localization results. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 139,804 |
2110.01691 | AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts | Although large language models (LLMs) have demonstrated impressive potential on simple tasks, their breadth of scope, lack of transparency, and insufficient controllability can make them less effective when assisting humans on more complex tasks. In response, we introduce the concept of Chaining LLM steps together, where the output of one step becomes the input for the next, thus aggregating the gains per step. We first define a set of LLM primitive operations useful for Chain construction, then present an interactive system where users can modify these Chains, along with their intermediate results, in a modular way. In a 20-person user study, we found that Chaining not only improved the quality of task outcomes, but also significantly enhanced system transparency, controllability, and sense of collaboration. Additionally, we saw that users developed new ways of interacting with LLMs through Chains: they leveraged sub-tasks to calibrate model expectations, compared and contrasted alternative strategies by observing parallel downstream effects, and debugged unexpected model outputs by "unit-testing" sub-components of a Chain. In two case studies, we further explore how LLM Chains may be used in future applications. | true | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 258,853 |
1505.01749 | Object detection via a multi-region & semantic segmentation-aware CNN model | We propose an object detection system that relies on a multi-region deep convolutional neural network (CNN) that also encodes semantic segmentation-aware features. The resulting CNN-based representation aims at capturing a diverse set of discriminative appearance factors and exhibits localization sensitivity that is essential for accurate object localization. We exploit the above properties of our recognition module by integrating it on an iterative localization mechanism that alternates between scoring a box proposal and refining its location with a deep CNN regression model. Thanks to the efficient use of our modules, we detect objects with very high localization accuracy. On the detection challenges of PASCAL VOC2007 and PASCAL VOC2012 we achieve mAP of 78.2% and 73.9% correspondingly, surpassing any other published work by a significant margin. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 42,878 |
2407.19906 | Reverse Map Projections as Equivariant Quantum Embeddings | We introduce the novel class $(E_\alpha)_{\alpha \in [-\infty,1)}$ of reverse map projection embeddings, each one defining a unique new method of encoding classical data into quantum states. Inspired by well-known map projections from the unit sphere onto its tangent planes, used in practice in cartography, these embeddings address the common drawback of the amplitude embedding method, wherein scalar multiples of data points are identified and information about the norm of data is lost. We show how reverse map projections can be utilised as equivariant embeddings for quantum machine learning. Using these methods, we can leverage symmetries in classical datasets to significantly strengthen performance on quantum machine learning tasks. Finally, we select four values of $\alpha$ with which to perform a simple classification task, taking $E_\alpha$ as the embedding and experimenting with both equivariant and non-equivariant setups. We compare their results alongside those of standard amplitude embedding. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 476,980 |
2306.07935 | Multi-modal Representation Learning for Social Post Location Inference | Inferring geographic locations via social posts is essential for many practical location-based applications such as product marketing, point-of-interest recommendation, and infector tracking for COVID-19. Unlike image-based location retrieval or social-post text embedding-based location inference, the combined effect of multi-modal information (i.e., post images, text, and hashtags) for social post positioning receives less attention. In this work, we collect real datasets of social posts with images, texts, and hashtags from Instagram and propose a novel Multi-modal Representation Learning Framework (MRLF) capable of fusing different modalities of social posts for location inference. MRLF integrates a multi-head attention mechanism to enhance location-salient information extraction while significantly improving location inference compared with single domain-based methods. To overcome the noisy user-generated textual content, we introduce a novel attention-based character-aware module that considers the relative dependencies between characters of social post texts and hashtags for flexible multi-model information fusion. The experimental results show that MRLF can make accurate location predictions and open a new door to understanding the multi-modal data of social posts for online inference tasks. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 373,203 |
2401.07286 | CANDLE: Iterative Conceptualization and Instantiation Distillation from Large Language Models for Commonsense Reasoning | The sequential process of conceptualization and instantiation is essential to generalizable commonsense reasoning as it allows the application of existing knowledge to unfamiliar scenarios. However, existing works tend to undervalue the step of instantiation and heavily rely on pre-built concept taxonomies and human annotations to collect both types of knowledge, resulting in a lack of instantiated knowledge to complete reasoning, high cost, and limited scalability. To tackle these challenges, we introduce CANDLE, a distillation framework that iteratively performs contextualized conceptualization and instantiation over commonsense knowledge bases by instructing large language models to generate both types of knowledge with critic filtering. By applying CANDLE to ATOMIC, we construct a comprehensive knowledge base comprising six million conceptualizations and instantiated commonsense knowledge triples. Both types of knowledge are firmly rooted in the original ATOMIC dataset, and intrinsic evaluations demonstrate their exceptional quality and diversity. Empirical results indicate that distilling CANDLE on student models provides benefits across four downstream tasks. Our code, data, and models are publicly available at https://github.com/HKUST-KnowComp/CANDLE. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 421,482
1809.04320 | Learning regression and verification networks for long-term visual tracking | Compared with short-term tracking, the long-term tracking task requires determining whether the tracked object is present or absent, and then estimating the accurate bounding box if present or conducting image-wide re-detection if absent. Until now, few attempts have been made, although this task is much closer to designing practical tracking systems. In this work, we propose a novel long-term tracking framework based on deep regression and verification networks. The offline-trained regression model is designed using the object-aware feature fusion and region proposal networks to generate a series of candidates and estimate their similarity scores effectively. The verification network evaluates these candidates to output the optimal one as the tracked object with its classification score, which is online updated to adapt to the appearance variations based on newly reliable observations. The similarity and classification scores are combined to obtain a final confidence value, based on which our tracker can determine the absence of the target accurately and conduct image-wide re-detection to capture the target successfully when it reappears. Extensive experiments show that our tracker achieves the best performance on the VOT2018 long-term challenge and state-of-the-art results on the OxUvA long-term dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 107,537
1812.04359 | Efficient Model-Free Reinforcement Learning Using Gaussian Process | Efficient reinforcement learning usually takes advantage of demonstration or a good exploration strategy. By applying posterior sampling in model-free RL under the hypothesis of GP, we propose the Gaussian Process Posterior Sampling Reinforcement Learning (GPPSTD) algorithm in continuous state space, giving theoretical justifications and empirical results. We also provide theoretical and empirical results that various demonstrations could lower expected uncertainty and benefit posterior sampling exploration. In this way, we combine the demonstration and exploration processes to achieve more efficient reinforcement learning. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 116,198
2407.11537 | AEMIM: Adversarial Examples Meet Masked Image Modeling | Masked image modeling (MIM) has gained significant traction for its remarkable prowess in representation learning. As an alternative to the traditional approach, the reconstruction from corrupted images has recently emerged as a promising pretext task. However, the regular corrupted images are generated using generic generators, often lacking relevance to the specific reconstruction task involved in pre-training. Hence, reconstruction from regular corrupted images cannot ensure the difficulty of the pretext task, potentially leading to a performance decline. Moreover, generating corrupted images might introduce an extra generator, resulting in a notable computational burden. To address these issues, we propose to incorporate adversarial examples into masked image modeling, as the new reconstruction targets. Adversarial examples, generated online using only the trained models, can directly aim to disrupt tasks associated with pre-training. Therefore, the incorporation not only elevates the level of challenge in reconstruction but also enhances efficiency, contributing to the acquisition of superior representations by the model. In particular, we introduce a novel auxiliary pretext task that reconstructs the adversarial examples corresponding to the original images. We also devise an innovative adversarial attack to craft more suitable adversarial examples for MIM pre-training. It is noted that our method is not restricted to specific model architectures and MIM strategies, rendering it an adaptable plug-in capable of enhancing all MIM methods. Experimental findings substantiate the remarkable capability of our approach in amplifying the generalization and robustness of existing MIM methods. Notably, our method surpasses the performance of baselines on various tasks, including ImageNet, its variants, and other downstream tasks. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 473,504
2411.10739 | A Wearable Gait Monitoring System for 17 Gait Parameters Based on Computer Vision | We developed a shoe-mounted gait monitoring system capable of tracking up to 17 gait parameters, including gait length, step time, stride velocity, and others. The system employs a stereo camera mounted on one shoe to track a marker placed on the opposite shoe, enabling the estimation of spatial gait parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel of the shoe, combined with a custom-designed algorithm, is utilized to measure temporal gait parameters. Through testing on multiple participants and comparison with the gait mat, the proposed gait monitoring system exhibited notable performance, with the accuracy of all measured gait parameters exceeding 93.61%. The system also demonstrated a low drift of 4.89% during long-distance walking. A gait identification task conducted on participants using a trained Transformer model achieved 95.7% accuracy on the dataset collected by the proposed system, demonstrating that our hardware has the potential to collect long-sequence gait data suitable for integration with current Large Language Models (LLMs). The system is cost-effective, user-friendly, and well-suited for real-life measurements. | false | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | 508,769
2008.00123 | Noise-Response Analysis of Deep Neural Networks Quantifies Robustness and Fingerprints Structural Malware | The ubiquity of deep neural networks (DNNs), cloud-based training, and transfer learning is giving rise to a new cybersecurity frontier in which unsecure DNNs have `structural malware' (i.e., compromised weights and activation pathways). In particular, DNNs can be designed to have backdoors that allow an adversary to easily and reliably fool an image classifier by adding a pattern of pixels called a trigger. It is generally difficult to detect backdoors, and existing detection methods are computationally expensive and require extensive resources (e.g., access to the training data). Here, we propose a rapid feature-generation technique that quantifies the robustness of a DNN, `fingerprints' its nonlinearity, and allows us to detect backdoors (if present). Our approach involves studying how a DNN responds to noise-infused images with varying noise intensity, which we summarize with titration curves. We find that DNNs with backdoors are more sensitive to input noise and respond in a characteristic way that reveals the backdoor and where it leads (its `target'). Our empirical results demonstrate that we can accurately detect backdoors with high confidence orders-of-magnitude faster than existing approaches (seconds versus hours). | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,909
1506.04036 | On the similarities between generalized rank and Hamming weights and their applications to network coding | Rank weights and generalized rank weights have been proven to characterize error and erasure correction, and information leakage in linear network coding, in the same way as Hamming weights and generalized Hamming weights describe classical error and erasure correction, and information leakage in wire-tap channels of type II and code-based secret sharing. Although many similarities between both cases have been established and proven in the literature, many other known results in the Hamming case, such as bounds or characterizations of weight-preserving maps, have not been translated to the rank case yet, or in some cases have been proven after developing a different machinery. The aim of this paper is to further relate both weights and generalized weights, show that the results and proofs in both cases are usually essentially the same, and see the significance of these similarities in network coding. Some of the new results in the rank case also have new consequences in the Hamming case. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 44,120
2407.01560 | 3DMeshNet: A Three-Dimensional Differential Neural Network for Structured Mesh Generation | Mesh generation is a crucial step in numerical simulations, significantly impacting simulation accuracy and efficiency. However, generating meshes remains time-consuming and requires expensive computational resources. In this paper, we propose a novel method, 3DMeshNet, for three-dimensional structured mesh generation. The method embeds the meshing-related differential equations into the loss function of neural networks, formulating the meshing task as an unsupervised optimization problem. It takes geometric points as input to learn the potential mapping between parametric and computational domains. After suitable offline training, 3DMeshNet can efficiently output a three-dimensional structured mesh with a user-defined number of quadrilateral/hexahedral cells through the feed-forward neural prediction. To enhance training stability and accelerate convergence, we integrate loss function reweighting through weight adjustments and gradient projection alongside applying finite difference methods to streamline derivative computations in the loss. Experiments on different cases show that 3DMeshNet is robust and fast. It outperforms neural network-based methods and yields superior meshes compared to traditional mesh partitioning methods. 3DMeshNet significantly reduces training times by up to 85% compared to other neural network-based approaches and lowers meshing overhead by 4 to 8 times relative to traditional meshing methods. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 469,355
2003.12462 | TextCaps: a Dataset for Image Captioning with Reading Comprehension | Image descriptions can help visually impaired people to quickly understand the image content. While we made significant progress in automatically describing images and optical character recognition, current approaches are unable to include written text in their descriptions, although text is omnipresent in human environments and frequently critical to understand our surroundings. To study how to comprehend text in the context of an image we collect a novel dataset, TextCaps, with 145k captions for 28k images. Our dataset challenges a model to recognize text, relate it to its visual context, and decide what part of the text to copy or paraphrase, requiring spatial, semantic, and visual reasoning between multiple text tokens and visual entities, such as objects. We study baselines and adapt existing approaches to this new task, which we refer to as image captioning with reading comprehension. Our analysis with automatic and human studies shows that our new TextCaps dataset provides many new technical challenges over previous datasets. | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | 169,928 |
2101.02185 | Adaptive Synthetic Characters for Military Training | Behaviors of the synthetic characters in current military simulations are limited since they are generally generated by rule-based and reactive computational models with minimal intelligence. Such computational models cannot adapt to reflect the experience of the characters, resulting in brittle intelligence for even the most effective behavior models devised via costly and labor-intensive processes. Observation-based behavior model adaptation that leverages machine learning and the experience of synthetic entities in combination with appropriate prior knowledge can address the issues in the existing computational behavior models to create a better training experience in military training simulations. In this paper, we introduce a framework that aims to create autonomous synthetic characters that can perform coherent sequences of believable behavior while being aware of human trainees and their needs within a training simulation. This framework brings together three mutually complementary components. The first component is a Unity-based simulation environment - Rapid Integration and Development Environment (RIDE) - supporting One World Terrain (OWT) models and capable of running and supporting machine learning experiments. The second is Shiva, a novel multi-agent reinforcement and imitation learning framework that can interface with a variety of simulation environments, and that can additionally utilize a variety of learning algorithms. The final component is the Sigma Cognitive Architecture that will augment the behavior models with symbolic and probabilistic reasoning capabilities. We have successfully created proof-of-concept behavior models leveraging this framework on realistic terrain as an essential step towards bringing machine learning into military simulations. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 214,548
2404.02282 | Smooth Deep Saliency | In this work, we investigate methods to reduce the noise in deep saliency maps coming from convolutional downsampling. Those methods make the investigated models more interpretable for gradient-based saliency maps, computed in hidden layers. We evaluate the faithfulness of those methods using insertion and deletion metrics, finding that saliency maps computed in hidden layers perform better compared to both the input layer and GradCAM. We test our approach on different models trained for image classification on ImageNet1K, and models trained for tumor detection on Camelyon16 and in-house real-world digital pathology scans of stained tissue samples. Our results show that the checkerboard noise in the gradient gets reduced, resulting in smoother and therefore easier to interpret saliency maps. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 443,789 |
2105.12836 | On the Exploitation of Neuroevolutionary Information: Analyzing the Past for a More Efficient Future | Neuroevolutionary algorithms, automatic searches of neural network structures by means of evolutionary techniques, are computationally costly procedures. In spite of this, due to the great performance provided by the architectures which are found, these methods are widely applied. The final outcome of neuroevolutionary processes is the best structure found during the search, and the rest of the procedure is commonly omitted in the literature. However, a good amount of residual information consisting of valuable knowledge that can be extracted is also produced during these searches. In this paper, we propose an approach that extracts this information from neuroevolutionary runs, and use it to build a metamodel that could positively impact future neural architecture searches. More specifically, by inspecting the best structures found during neuroevolutionary searches of generative adversarial networks with varying characteristics (e.g., based on dense or convolutional layers), we propose a Bayesian network-based model which can be used to either find strong neural structures right away, conveniently initialize different structural searches for different problems, or help future optimization of structures of any type to keep finding increasingly better structures where uninformed methods get stuck into local optima. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 237,114
1802.03438 | Generalized Master-Slave-Splitting Method and Application to Transmission-Distribution Coordinated Energy Management | Transmission-distribution coordinated energy management (TDCEM) is recognized as a promising solution to the challenge of high DER penetration, but there is a lack of a distributed computation method that universally and effectively works for the TDCEM. To bridge this gap, a generalized master-slave-splitting (G-MSS) method is presented in this paper. This method is based on a general-purpose transmission-distribution coordination model called G-TDCM, which thus enables the G-MSS to be applicable to most of the central functions of the TDCEM. In this G-MSS method, a basic heterogeneous decomposition (HGD) algorithm is first derived from the HGD of the coupling constraints in the optimality conditions of the G-TDCM. Its optimality and convergence properties are then proved. Further, inspired by the conditions for convergence, a modified HGD algorithm that utilizes the subsystem's response function is developed and thus converges faster. The distributed G-MSS method is then demonstrated to successfully solve a series of central functions, e.g. power flow, contingency analysis, voltage stability assessment, economic dispatch and optimal power flow, of the TDCEM. The severe issues of over-voltage and erroneous assessment of the system security that are caused by DERs are thus resolved by the G-MSS method with modest computation cost. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 89,968
2410.01836 | Temporal Graph Memory Networks For Knowledge Tracing | Tracing a student's knowledge growth given the past exercise answering is a vital objective in automatic tutoring systems to customize the learning experience. Yet, achieving this objective is a non-trivial task as it involves modeling the knowledge state across multiple knowledge components (KCs) while considering their temporal and relational dynamics during the learning process. Knowledge tracing methods have tackled this task by either modeling KCs' temporal dynamics using recurrent models or relational dynamics across KCs and questions using graph models. Albeit, there is a lack of methods that could learn joint embedding between relational and temporal dynamics of the task. Moreover, many methods that count for the impact of a student's forgetting behavior during the learning process use hand-crafted features, limiting their generalization on different scenarios. In this paper, we propose a novel method that jointly models the relational and temporal dynamics of the knowledge state using a deep temporal graph memory network. In addition, we propose a generic technique for representing a student's forgetting behavior using temporal decay constraints on the graph memory module. We demonstrate the effectiveness of our proposed method using multiple knowledge tracing benchmarks while comparing it to state-of-the-art methods. | false | false | false | false | true | false | true | false | false | false | false | false | false | true | false | false | false | false | 493,981 |
2307.08279 | Combiner and HyperCombiner Networks: Rules to Combine Multimodality MR Images for Prostate Cancer Localisation | One of the distinct characteristics in radiologists' reading of multiparametric prostate MR scans, using reporting systems such as PI-RADS v2.1, is to score individual types of MR modalities, T2-weighted, diffusion-weighted, and dynamic contrast-enhanced, and then combine these image-modality-specific scores using standardised decision rules to predict the likelihood of clinically significant cancer. This work aims to demonstrate that it is feasible for low-dimensional parametric models to model such decision rules in the proposed Combiner networks, without compromising the accuracy of predicting radiologic labels: First, it is shown that either a linear mixture model or a nonlinear stacking model is sufficient to model PI-RADS decision rules for localising prostate cancer. Second, parameters of these (generalised) linear models are proposed as hyperparameters, to weigh multiple networks that independently represent individual image modalities in the Combiner network training, as opposed to end-to-end modality ensemble. A HyperCombiner network is developed to train a single image segmentation network that can be conditioned on these hyperparameters during inference, for much improved efficiency. Experimental results based on data from 850 patients, for the application of automating radiologist labelling multi-parametric MR, compare the proposed combiner networks with other commonly-adopted end-to-end networks. Using the added advantages of obtaining and interpreting the modality combining rules, in terms of the linear weights or odds-ratios on individual image modalities, three clinical applications are presented for prostate cancer segmentation, including modality availability assessment, importance quantification and rule discovery. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 379,745
1911.10606 | Functional Bayesian Filter | We present a general nonlinear Bayesian filter for high-dimensional state estimation using the theory of reproducing kernel Hilbert space (RKHS). Applying kernel method and the representer theorem to perform linear quadratic estimation in a functional space, we derive a Bayesian recursive state estimator for a general nonlinear dynamical system in the original input space. Unlike existing nonlinear extensions of Kalman filter where the system dynamics are assumed known, the state-space representation for the Functional Bayesian Filter (FBF) is completely learned from measurement data in the form of an infinite impulse response (IIR) filter or recurrent network in the RKHS, with universal approximation property. Using positive definite kernel function satisfying Mercer's conditions to compute and evolve information quantities, the FBF exploits both the statistical and time-domain information about the signal, extracts higher-order moments, and preserves the properties of covariances without the ill effects due to conventional arithmetic operations. This novel kernel adaptive filtering algorithm is applied to recurrent network training, chaotic time-series estimation and cooperative filtering using Gaussian and non-Gaussian noises, and inverse kinematics modeling. Simulation results show FBF outperforms existing Kalman-based algorithms. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 154,884 |
2108.09859 | Convex Latent Effect Logit Model via Sparse and Low-rank Decomposition | In this paper, we propose a convex formulation for learning logistic regression model (logit) with latent heterogeneous effect on sub-population. In transportation, logistic regression and its variants are often interpreted as discrete choice models under utility theory (McFadden, 2001). Two prominent applications of logit models in the transportation domain are traffic accident analysis and choice modeling. In these applications, researchers often want to understand and capture the individual variation under the same accident or choice scenario. The mixed effect logistic regression (mixed logit) is a popular model employed by transportation researchers. To estimate the distribution of mixed logit parameters, a non-convex optimization problem with nested high-dimensional integrals needs to be solved. Simulation-based optimization is typically applied to solve the mixed logit parameter estimation problem. Despite its popularity, the mixed logit approach for learning individual heterogeneity has several downsides. First, the parametric form of the distribution requires domain knowledge and assumptions imposed by users, although this issue can be addressed to some extent by using a non-parametric approach. Second, the optimization problems arise from parameter estimation for mixed logit and the non-parametric extensions are non-convex, which leads to unstable model interpretation. Third, the simulation size in simulation-assisted estimation lacks finite-sample theoretical guarantees and is chosen somewhat arbitrarily in practice. To address these issues, we are motivated to develop a formulation that models the latent individual heterogeneity while preserving convexity, and avoids the need for simulation-based approximation. Our setup is based on decomposing the parameters into a sparse homogeneous component in the population and low-rank heterogeneous parts for each individual. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 251,722
1705.05598 | Learning how to explain neural networks: PatternNet and PatternAttribution | DeConvNet, Guided BackProp, LRP, were invented to better understand deep neural networks. We show that these methods do not produce the theoretically correct explanation for a linear model. Yet they are used on multi-layer networks with millions of parameters. This is a cause for concern since linear models are simple neural networks. We argue that explanation methods for neural nets should work reliably in the limit of simplicity, the linear models. Based on our analysis of linear models we propose a generalization that yields two explanation techniques (PatternNet and PatternAttribution) that are theoretically sound for linear models and produce improved explanations for deep networks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 73,522
2210.07449 | G2A2: An Automated Graph Generator with Attributes and Anomalies | Many data-mining applications use dynamic attributed graphs to represent relational information; but due to security and privacy concerns, there is a dearth of available datasets that can be represented as dynamic attributed graphs. Even when such datasets are available, they do not have ground truth that can be used to train deep-learning models. Thus, we present G2A2, an automated graph generator with attributes and anomalies, which encompasses (1) probabilistic models to generate a dynamic bipartite graph, representing time-evolving connections between two independent sets of entities, (2) realistic injection of anomalies using a novel algorithm that captures the general properties of graph anomalies across domains, and (3) a deep generative model to produce realistic attributes, learned from an existing real-world dataset. Using the maximum mean discrepancy (MMD) metric to evaluate the realism of a G2A2-generated graph against three real-world graphs, G2A2 outperforms Kronecker graph generation by reducing the MMD distance by up to six-fold (6x). | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 323,713 |
2104.10557 | Deep Music Retrieval for Fine-Grained Videos by Exploiting Cross-Modal-Encoded Voice-Overs | Recently, the witness of the rapidly growing popularity of short videos on different Internet platforms has intensified the need for a background music (BGM) retrieval system. However, existing video-music retrieval methods only based on the visual modality cannot show promising performance regarding videos with fine-grained virtual contents. In this paper, we also investigate the widely added voice-overs in short videos and propose a novel framework to retrieve BGM for fine-grained short videos. In our framework, we use the self-attention (SA) and the cross-modal attention (CMA) modules to explore the intra- and the inter-relationships of different modalities respectively. For balancing the modalities, we dynamically assign different weights to the modal features via a fusion gate. For paring the query and the BGM embeddings, we introduce a triplet pseudo-label loss to constrain the semantics of the modal embeddings. As there are no existing virtual-content video-BGM retrieval datasets, we build and release two virtual-content video datasets HoK400 and CFM400. Experimental results show that our method achieves superior performance and outperforms other state-of-the-art methods with large margins. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | true | 231,623
1001.2620 | Discontinuities and hysteresis in quantized average consensus | We consider continuous-time average consensus dynamics in which the agents' states are communicated through uniform quantizers. Solutions to the resulting system are defined in the Krasowskii sense and are proven to converge to conditions of "practical consensus". To cope with undesired chattering phenomena we introduce a hysteretic quantizer, and we study the convergence properties of the resulting dynamics by a hybrid system approach. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 5,403 |
2407.09513 | Aligning Models with Their Realization through Model-based Systems
Engineering | In this paper, we propose a method for aligning models with their realization through the application of model-based systems engineering. Our approach is divided into three steps. (1) Firstly, we leverage domain expertise and the Unified Architecture Framework to establish a reference model that fundamentally describes some domain. (2) Subsequently, we instantiate the reference model as specific models tailored to different scenarios within the domain. (3) Finally, we incorporate corresponding run logic directly into both the reference model and the specific models. In total, we thus provide a practical means to ensure that every implementation result is justified by business demand. We demonstrate our approach using the example of maritime object detection as a specific application (specific model / implementation element) of automatic target recognition as a service reoccurring in various forms (reference model element). Our approach facilitates a more seamless integration of models and implementation, fostering enhanced Business-IT alignment. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | 472,607 |
2304.09937 | Stock Price Predictability and the Business Cycle via Machine Learning | We study the impacts of business cycles on machine learning (ML) predictions. Using the S&P 500 index, we find that ML models perform worse during most recessions, and the inclusion of recession history or the risk-free rate does not necessarily improve their performance. Investigating recessions where models perform well, we find that they exhibit lower market volatility than other recessions. This implies that the improved performance is not due to the merit of ML methods but rather factors such as effective monetary policies that stabilized the market. We recommend that ML practitioners evaluate their models during both recessions and expansions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 359,227 |
2404.15297 | Multi-stream Transmission for Directional Modulation Network via
Distributed Multi-UAV-aided Multi-active-IRS | Active intelligent reflecting surface (IRS) is a revolutionary technique for the future 6G networks. The conventional far-field single-IRS-aided directional modulation (DM) networks have only one (no direct path) or two (existing direct path) degrees of freedom (DoFs). This means that only one or two streams can be transmitted simultaneously from the base station to the user, which seriously limits the rate gain achieved by the IRS. How can more than two DoFs be created for DM? In this paper, a single large-scale IRS is divided into multiple small IRSs, and a novel multi-IRS-aided multi-stream DM network is proposed to achieve point-to-point multi-stream transmission by creating $K$ ($\geq3$) DoFs, where the multiple small IRSs are placed distributively via multiple unmanned aerial vehicles (UAVs). Null-space projection, zero-forcing (ZF), and phase alignment are adopted to design the transmit beamforming vector, receive beamforming vector, and phase shift matrix (PSM), respectively, a scheme called NSP-ZF-PA. Here, the $K$ PSMs and their corresponding beamforming vectors are independently optimized. The weighted minimum mean-square error (WMMSE) algorithm is employed in an alternating iteration over the optimization variables by introducing a power constraint on the IRS, named WMMSE-PC, where the majorization-minimization (MM) algorithm is used to solve for the total PSM. To achieve lower computational complexity, a maximum-trace method, called Max-TR-SVD, is proposed by optimizing the PSMs of all IRSs. Numerical simulation results show that the proposed NSP-ZF-PA performs much better than Max-TR-SVD in terms of rate. In particular, the rate of NSP-ZF-PA with sixteen small IRSs is about five times that of NSP-ZF-PA with all small IRSs combined into a single large IRS. Thus, a dramatic rate enhancement may be achieved by multiple distributed IRSs. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 449,032 |
2409.09611 | Integrating Audio Narrations to Strengthen Domain Generalization in
Multimodal First-Person Action Recognition | First-person activity recognition is rapidly growing due to the widespread use of wearable cameras but faces challenges from domain shifts across different environments, such as varying objects or background scenes. We propose a multimodal framework that improves domain generalization by integrating motion, audio, and appearance features. Key contributions include analyzing the resilience of audio and motion features to domain shifts, using audio narrations for enhanced audio-text alignment, and applying consistency ratings between audio and visual narrations to optimize the impact of audio in recognition during training. Our approach achieves state-of-the-art performance on the ARGO1M dataset, effectively generalizing across unseen scenarios and locations. | false | false | true | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 488,394 |
2206.09480 | Predicting Human Performance in Vertical Hierarchical Menu Selection in
Immersive AR Using Hand-gesture and Head-gaze | There are currently limited guidelines on designing user interfaces (UI) for immersive augmented reality (AR) applications. Designers must reflect on their experience designing UI for desktop and mobile applications and conjecture how a UI will influence AR users' performance. In this work, we introduce a predictive model for determining users' performance for a target UI without the subsequent involvement of participants in user studies. The model is trained on participants' responses to objective performance measures such as consumed endurance (CE) and pointing time (PT) using hierarchical drop-down menus. Large variability in the depth and context of the menus is ensured by randomly and dynamically creating the hierarchical drop-down menus and associated user tasks from words contained in the lexical database WordNet. Subjective performance bias is reduced by incorporating the users' non-verbal standard performance WAIS-IV during the model training. The semantic information of the menu is encoded using the Universal Sentence Encoder. We present the results of a user study that demonstrates that the proposed predictive model achieves high accuracy in predicting the CE on hierarchical menus of users with various cognitive abilities. To the best of our knowledge, this is the first work on predicting CE in designing UI for immersive AR applications. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 303,583 |
1701.08661 | Credal Networks under Epistemic Irrelevance | A credal network under epistemic irrelevance is a generalised type of Bayesian network that relaxes its two main building blocks. On the one hand, the local probabilities are allowed to be partially specified. On the other hand, the assessments of independence do not have to hold exactly. Conceptually, these two features turn credal networks under epistemic irrelevance into a powerful alternative to Bayesian networks, offering a more flexible approach to graph-based multivariate uncertainty modelling. However, in practice, they have long been perceived as very hard to work with, both theoretically and computationally. The aim of this paper is to demonstrate that this perception is no longer justified. We provide a general introduction to credal networks under epistemic irrelevance, give an overview of the state of the art, and present several new theoretical results. Most importantly, we explain how these results can be combined to allow for the design of recursive inference methods. We provide numerous concrete examples of how this can be achieved, and use these to demonstrate that computing with credal networks under epistemic irrelevance is most definitely feasible, and in some cases even highly efficient. We also discuss several philosophical aspects, including the lack of symmetry, how to deal with probability zero, the interpretation of lower expectations, the axiomatic status of graphoid properties, and the difference between updating and conditioning. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 67,498 |
1904.08487 | Machine Vision Guided 3D Medical Image Compression for Efficient
Transmission and Accurate Segmentation in the Clouds | Cloud based medical image analysis has become popular recently due to the high computation complexities of various deep neural network (DNN) based frameworks and the increasingly large volume of medical images that need to be processed. It has been demonstrated that for medical images the transmission from local to clouds is much more expensive than the computation in the clouds itself. Towards this, 3D image compression techniques have been widely applied to reduce the data traffic. However, most of the existing image compression techniques are developed around human vision, i.e., they are designed to minimize distortions that can be perceived by human eyes. In this paper we will use deep learning based medical image segmentation as a vehicle and demonstrate that interestingly, machine and human view the compression quality differently. Medical images compressed with good quality w.r.t. human vision may result in inferior segmentation accuracy. We then design a machine vision oriented 3D image compression framework tailored for segmentation using DNNs. Our method automatically extracts and retains image features that are most important to the segmentation. Comprehensive experiments on widely adopted segmentation frameworks with HVSMR 2016 challenge dataset show that our method can achieve significantly higher segmentation accuracy at the same compression rate, or much better compression rate under the same segmentation accuracy, when compared with the existing JPEG 2000 method. To the best of the authors' knowledge, this is the first machine vision guided medical image compression framework for segmentation in the clouds. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 128,074 |
2403.01234 | Active Deep Kernel Learning of Molecular Functionalities: Realizing
Dynamic Structural Embeddings | Exploring molecular spaces is crucial for advancing our understanding of chemical properties and reactions, leading to groundbreaking innovations in materials science, medicine, and energy. This paper explores an approach for active learning in molecular discovery using Deep Kernel Learning (DKL), a novel approach surpassing the limits of classical Variational Autoencoders (VAEs). Employing the QM9 dataset, we contrast DKL with traditional VAEs, which analyze molecular structures based on similarity, revealing limitations due to sparse regularities in latent spaces. DKL, however, offers a more holistic perspective by correlating structure with properties, creating latent spaces that prioritize molecular functionality. This is achieved by recalculating embedding vectors iteratively, aligning with the experimental availability of target properties. The resulting latent spaces are not only better organized but also exhibit unique characteristics such as concentrated maxima representing molecular functionalities and a correlation between predictive uncertainty and error. Additionally, the formation of exclusion regions around certain compounds indicates unexplored areas with potential for groundbreaking functionalities. This study underscores DKL's potential in molecular research, offering new avenues for understanding and discovering molecular functionalities beyond classical VAE limitations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 434,317 |
2011.13372 | Modeling of Online Echo-Chamber Effect Based on the Concept of
Spontaneous Symmetry Breaking | The online echo-chamber effect is a phenomenon in which beliefs that are far from common sense are strengthened within relatively small communities formed within online social networks. Since it is significantly degrading social activities in the real world, we should understand how the echo-chamber effect arises in an engineering framework to realize countermeasure technologies. This paper proposes a model of the online echo-chamber effect by introducing the concept of spontaneous symmetry breaking to the oscillation model framework used for describing online user dynamics. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 208,455 |
1812.02930 | Internet of Things Search Engine: Concepts, Classification, and Open
Issues | This article focuses on the complicated yet still relatively immature area of the Internet of Things Search Engines (IoTSE). It introduces related concepts of IoTSE and a model called meta-path to describe and classify IoTSE systems based on their functionality. Based on these concepts, we have organized the research and development efforts on IoTSE into eight groups and presented the representative works in each group. The concepts and ideas presented in this article are generated from an extensive structured study on over 200 works spanning over one decade of IoTSE research and development. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 115,893 |
1906.03193 | Fighting Quantization Bias With Bias | Low-precision representation of deep neural networks (DNNs) is critical for efficient deployment of deep learning application on embedded platforms, however, converting the network to low precision degrades its performance. Crucially, networks that are designed for embedded applications usually suffer from increased degradation since they have less redundancy. This is most evident for the ubiquitous MobileNet architecture which requires a costly quantization-aware training cycle to achieve acceptable performance when quantized to 8-bits. In this paper, we trace the source of the degradation in MobileNets to a shift in the mean activation value. This shift is caused by an inherent bias in the quantization process which builds up across layers, shifting all network statistics away from the learned distribution. We show that this phenomenon happens in other architectures as well. We propose a simple remedy - compensating for the quantization induced shift by adding a constant to the additive bias term of each channel. We develop two simple methods for estimating the correction constants - one using iterative evaluation of the quantized network and one where the constants are set using a short training phase. Both methods are fast and require only a small amount of unlabeled data, making them appealing for rapid deployment of neural networks. Using the above methods we are able to match the performance of training-based quantization of MobileNets at a fraction of the cost. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 134,301 |
1710.11404 | Reshaping Cellular Networks for the Sky: Major Factors and Feasibility | This paper studies the feasibility of supporting drone operations using existing cellular infrastructure. We propose an analytical framework that includes the effects of base station (BS) height and antenna radiation pattern, drone antenna directivity, and various propagation environments. With this framework, we derive an exact expression for the coverage probability of ground and drone users under a practical cell association strategy. Our results show that a carefully designed network can control the radiated interference received by the drones and therefore guarantee a satisfactory quality of service. Moreover, as the network density grows, the increasing level of interference can be partially managed by lowering the drone flying altitude. However, even under optimal conditions the drone coverage performance converges to zero considerably fast, suggesting that ultra-dense networks might be poor candidates for serving aerial users. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 83,590 |
1602.08456 | Epidemic Processes over Adaptive State-Dependent Networks | In this paper, we study the dynamics of epidemic processes taking place in adaptive networks of arbitrary topology. We focus our study on the adaptive susceptible-infected-susceptible (ASIS) model, where healthy individuals are allowed to temporarily cut edges connecting them to infected nodes in order to prevent the spread of the infection. Specifically, we derive a closed-form expression for a lower bound on the epidemic threshold of the ASIS model in arbitrary networks with heterogeneous node and edge dynamics. For networks with homogeneous node and edge dynamics, we show that the resulting lower bound is proportional to the epidemic threshold of the standard SIS model over static networks, with a proportionality constant that depends on the adaptation rates. Furthermore, based on our results, we propose an efficient algorithm to optimally tune the adaptation rates in order to eradicate epidemic outbreaks in arbitrary networks. We confirm the tightness of the proposed lower bounds with several numerical simulations and compare our optimal adaptation rates with popular centrality measures. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 52,647 |
2201.00203 | NOMA Computation Over Multi-Access Channels for Multimodal Sensing | An improved mean squared error (MSE) minimization solution based on an eigenvector decomposition approach is conceived for the wideband non-orthogonal multiple-access based computation over multi-access channel (NOMA-CoMAC) framework. This work aims at further developing NOMA-CoMAC for next-generation multimodal sensor networks, where a multimodal sensor monitors several environmental parameters such as temperature, pollution, humidity, or pressure. We demonstrate that our proposed scheme achieves an MSE value approximately 0.7 lower at $E_b/N_0 = 1$ dB than that of the average sum-channel based method. Moreover, the MSE performance gain of our proposed solution increases even further for larger numbers of subcarriers and sensor nodes due to the benefit of the diversity gain. This, in turn, suggests that our proposed scheme is eminently suitable for multimodal sensor networks. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 273,894 |
2209.03225 | Hardware faults that matter: Understanding and Estimating the safety
impact of hardware faults on object detection DNNs | Object detection neural network models need to perform reliably in highly dynamic and safety-critical environments like automated driving or robotics. Therefore, it is paramount to verify the robustness of the detection under unexpected hardware faults like soft errors that can impact a system's perception module. Standard metrics based on average precision produce model vulnerability estimates at the object level rather than at an image level. As we show in this paper, this does not provide an intuitive or representative indicator of the safety-related impact of silent data corruption caused by bit flips in the underlying memory but can lead to an over- or underestimation of typical fault-induced hazards. With an eye towards safety-related real-time applications, we propose a new metric IVMOD (Image-wise Vulnerability Metric for Object Detection) to quantify vulnerability based on incorrect image-wise object detection due to false positive (FP) or false negative (FN) objects, combined with a severity analysis. The evaluation of several representative object detection models shows that even a single bit flip can lead to a severe silent data corruption event with potentially critical safety implications, with, e.g., up to (much greater than) 100 FPs generated, or up to approx. 90% of true positives (TPs) lost in an image. Furthermore, with a single stuck-at-1 fault, an entire sequence of images can be affected, causing temporally persistent ghost detections that can be mistaken for actual objects (covering up to approx. 83% of the image), while actual objects in the scene are continuously missed (up to approx. 64% of TPs are lost). Our work establishes a detailed understanding of the safety-related vulnerability of such critical workloads against hardware faults. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 316,444 |
1212.2490 | On the Convergence of Bound Optimization Algorithms | Many practitioners who use the EM algorithm complain that it is sometimes slow. When does this happen, and what can be done about it? In this paper, we study the general class of bound optimization algorithms - including Expectation-Maximization, Iterative Scaling and CCCP - and their relationship to direct optimization algorithms such as gradient-based methods for parameter learning. We derive a general relationship between the updates performed by bound optimization methods and those of gradient and second-order methods and identify analytic conditions under which bound optimization algorithms exhibit quasi-Newton behavior, and conditions under which they possess poor, first-order convergence. Based on this analysis, we consider several specific algorithms, interpret and analyze their convergence properties and provide some recipes for preprocessing input to these algorithms to yield faster convergence behavior. We report empirical results supporting our analysis and showing that simple data preprocessing can result in dramatically improved performance of bound optimizers in practice. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 20,294 |
1910.06830 | Reversible cyclic codes over $\mathbb{F}_q + u \mathbb{F}_q$ | Let $q$ be a power of a prime $p$. In this paper, we study reversible cyclic codes of arbitrary length over the ring $R = \mathbb{F}_q + u \mathbb{F}_q$, where $u^2 = 0$. First, we find a unique set of generators for cyclic codes over $R$, followed by a classification of reversible cyclic codes with respect to their generators. Also, under certain conditions, it is shown that the dual of a reversible cyclic code is reversible over $\mathbb{Z}_2 + u\mathbb{Z}_2$. Further, to show the importance of these results, some examples of reversible cyclic codes are provided. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 149,459 |
1605.05101 | Recurrent Neural Network for Text Classification with Multi-Task
Learning | Neural network based methods have obtained great progress on a variety of natural language processing tasks. However, in most previous works, the models are learned based on single-task supervised objectives, which often suffer from insufficient training data. In this paper, we use the multi-task learning framework to jointly learn across multiple related tasks. Based on recurrent neural network, we propose three different mechanisms of sharing information to model text with task-specific and shared layers. The entire network is trained jointly on all these tasks. Experiments on four benchmark text classification tasks show that our proposed models can improve the performance of a task with the help of other related tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 55,951 |
1101.2719 | CSSF MIMO RADAR: Low-Complexity Compressive Sensing Based MIMO Radar
That Uses Step Frequency | A new approach is proposed, namely CSSF MIMO radar, which applies the technique of step frequency (SF) to compressive sensing (CS) based multi-input multi-output (MIMO) radar. The proposed approach enables high resolution range, angle and Doppler estimation, while transmitting narrowband pulses. The problem of joint angle-Doppler-range estimation is first formulated to fit the CS framework, i.e., as an L1 optimization problem. Direct solution of this problem entails high complexity as it employs a basis matrix whose construction requires discretization of the angle-Doppler-range space. Since high resolution requires fine space discretization, the complexity of joint range, angle and Doppler estimation can be prohibitively high. For the case of slowly moving targets, a technique is proposed that achieves significant complexity reduction by successively estimating angle-range and Doppler in a decoupled fashion and by employing initial estimates obtained via matched filtering to further reduce the space that needs to be digitized. Numerical results show that the combination of CS and SF results in a MIMO radar system that has superior resolution and requires far less data as compared to a system that uses a matched filter with SF. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 8,816 |
2206.10286 | Position-prior Clustering-based Self-attention Module for Knee Cartilage
Segmentation | The morphological changes in knee cartilage (especially the femoral and tibial cartilages) are closely related to the progression of knee osteoarthritis, which is expressed by magnetic resonance (MR) images and assessed on the cartilage segmentation results. Thus, it is necessary to propose an effective automatic cartilage segmentation model for longitudinal research on osteoarthritis. In this research, to relieve the problem of inaccurate discontinuous segmentation caused by the limited receptive field in convolutional neural networks, we propose a novel position-prior clustering-based self-attention module (PCAM). In PCAM, the long-range dependency between each class center and each feature point is captured by self-attention, allowing contextual information to be re-allocated to strengthen the relevant features and ensure the continuity of the segmentation result. The clustering-based method is used to estimate class centers, which fosters intra-class consistency and further improves the accuracy of the segmentation results. The position prior excludes false positives from the side-output and makes center estimation more precise. Extensive experiments are conducted on the OAI-ZIB dataset. The experimental results show that combining the segmentation network with PCAM obtains an evident improvement over the original model, which demonstrates the potential application of PCAM in medical segmentation tasks. The source code is publicly available at https://github.com/LeongDong/PCAMNet | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 303,857 |
2005.06175 | Stabilization control of networked mobile robot using past
observation-based predictive filter | This paper addresses the stabilization control problem for a networked mobile robot subject to communication delay. A new state estimation filter, namely a past observation-based predictive filter, is developed. This filter enables the prediction of the system state from delayed measurements. The state estimator, combined with the developed control laws, ensures the asymptotic stability of the networked system. Simulations with parameters extracted from a real robot system were conducted, and the results confirmed the correctness as well as the applicability of the proposed approach. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 176,935 |
2201.02755 | Machine Learning-Based Disease Diagnosis: A Bibliometric Analysis | Machine Learning (ML) has garnered considerable attention from researchers and practitioners as a new and adaptable tool for disease diagnosis. With the advancement of ML and the proliferation of papers and research in this field, a complete examination of Machine Learning-Based Disease Diagnosis (MLBDD) is required. From a bibliometrics standpoint, this article comprehensively studies MLBDD papers from 2012 to 2021. Using particular keywords, 1710 papers with associated information were extracted from the Scopus and Web of Science (WOS) databases and integrated into an Excel datasheet for further analysis. First, we examine the publication structures based on yearly publications and the most productive countries/regions, institutions, and authors. Second, the co-citation networks of countries/regions, institutions, authors, and articles are visualized using R-studio software and further examined in terms of citation structure and the most influential members. This article gives an overview of MLBDD for researchers interested in the subject and conducts a thorough and complete study of MLBDD for those interested in conducting more research in this field. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 274,639 |
2404.01158 | Dialogue with Robots: Proposals for Broadening Participation and
Research in the SLIVAR Community | The ability to interact with machines using natural human language is becoming not just commonplace, but expected. The next step is not just text interfaces, but speech interfaces and not just with computers, but with all machines including robots. In this paper, we chronicle the recent history of this growing field of spoken dialogue with robots and offer the community three proposals, the first focused on education, the second on benchmarks, and the third on the modeling of language when it comes to spoken interaction with robots. The three proposals should act as white papers for any researcher to take and build upon. | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | 443,284 |
cs/9912016 | HMM Specialization with Selective Lexicalization | We present a technique which complements Hidden Markov Models by incorporating some lexicalized states representing syntactically uncommon words. Our approach examines the distribution of transitions, selects the uncommon words, and makes lexicalized states for the words. We performed a part-of-speech tagging experiment on the Brown corpus to evaluate the resultant language model and discovered that this technique improved the tagging accuracy by 0.21% at the 95% level of confidence. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 540,585 |
2204.03359 | ECCV Caption: Correcting False Negatives by Collecting
Machine-and-Human-verified Image-Caption Associations for MS-COCO | Image-Text matching (ITM) is a common task for evaluating the quality of Vision and Language (VL) models. However, existing ITM benchmarks have a significant limitation. They have many missing correspondences, originating from the data construction process itself. For example, a caption is only matched with one image although the caption can be matched with other similar images and vice versa. To correct the massive false negatives, we construct the Extended COCO Validation (ECCV) Caption dataset by supplying the missing associations with machine and human annotators. We employ five state-of-the-art ITM models with diverse properties for our annotation process. Our dataset provides x3.6 positive image-to-caption associations and x8.5 caption-to-image associations compared to the original MS-COCO. We also propose to use an informative ranking-based metric mAP@R, rather than the popular Recall@K (R@K). We re-evaluate the existing 25 VL models on existing and proposed benchmarks. Our findings are that the existing benchmarks, such as COCO 1K R@K, COCO 5K R@K, CxC R@1 are highly correlated with each other, while the rankings change when we shift to the ECCV mAP@R. Lastly, we delve into the effect of the bias introduced by the choice of machine annotator. Source code and dataset are available at https://github.com/naver-ai/eccv-caption | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 290,281 |
2304.12748 | Inverting the Imaging Process by Learning an Implicit Camera Model | Representing visual signals with implicit coordinate-based neural networks, as an effective replacement of the traditional discrete signal representation, has gained considerable popularity in computer vision and graphics. In contrast to existing implicit neural representations which focus on modelling the scene only, this paper proposes a novel implicit camera model which represents the physical imaging process of a camera as a deep neural network. We demonstrate the power of this new implicit camera model on two inverse imaging tasks: i) generating all-in-focus photos, and ii) HDR imaging. Specifically, we devise an implicit blur generator and an implicit tone mapper to model the aperture and exposure of the camera's imaging process, respectively. Our implicit camera model is jointly learned together with implicit scene models under multi-focus stack and multi-exposure bracket supervision. We have demonstrated the effectiveness of our new model on a large number of test images and videos, producing accurate and visually appealing all-in-focus and high dynamic range images. In principle, our new implicit neural camera model has the potential to benefit a wide array of other inverse imaging tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 360,338 |
2002.02620 | Gaussian Variational State Estimation for Nonlinear State-Space Models | In this paper, the problem of state estimation, in the context of both filtering and smoothing, for nonlinear state-space models is considered. Due to the nonlinear nature of the models, the state estimation problem is generally intractable as it involves integrals of general nonlinear functions and the filtered and smoothed state distributions lack closed-form solutions. As such, it is common to approximate the state estimation problem. In this paper, we develop an assumed Gaussian solution based on variational inference, which offers the key advantage of a flexible, but principled, mechanism for approximating the required distributions. Our main contribution lies in a new formulation of the state estimation problem as an optimisation problem, which can then be solved using standard optimisation routines that employ exact first- and second-order derivatives. The resulting state estimation approach involves a minimal number of assumptions and applies directly to nonlinear systems with both Gaussian and non-Gaussian probabilistic models. The performance of our approach is demonstrated on several examples: a challenging scalar system, a model of a simple robotic system, and a target tracking problem using a von Mises-Fisher distribution; our approach outperforms alternative assumed Gaussian approaches to state estimation. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 162,979
2011.12101 | Space-time POD-Galerkin approach for parametric flow control | In this contribution we propose reduced order methods to solve, quickly and reliably, parametrized optimal control problems governed by time-dependent nonlinear partial differential equations. Our goal is to provide a tool to deal with the time evolution of several nonlinear optimality systems in a many-query context, where a system must be analysed for various physical and geometrical features. Optimal control can be used to fill the gap between collected data and the mathematical model, and it usually involves very time-consuming activities: inverse problems, statistics, etc. Standard discretization techniques may lead to unbearable simulations for real applications. We aim at showing how reduced order modelling can solve this issue. We rely on a space-time POD-Galerkin reduction to solve the optimal control problem in a low-dimensional reduced space in a fast way for several parametric instances. The proposed algorithm is validated with a numerical test based on environmental sciences: a reduced optimal control problem governed by the viscous Shallow Water Equations, parametrized not only in the physical features but also in the geometrical ones. We will show how the reduced model can recover desired velocity and height profiles more rapidly than the standard simulation, without losing accuracy. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 208,056
2401.05819 | TAnet: A New Temporal Attention Network for EEG-based Auditory Spatial
Attention Decoding with a Short Decision Window | Auditory spatial attention detection (ASAD) is used to determine the direction of a listener's attention to a speaker by analyzing her/his electroencephalographic (EEG) signals. This study aimed to further improve the performance of ASAD with a short decision window (i.e., <1 s) rather than with long decision windows ranging from 1 to 5 seconds in previous studies. An end-to-end temporal attention network (i.e., TAnet) was introduced in this work. TAnet employs a multi-head attention (MHA) mechanism, which can more effectively capture the interactions among time steps in collected EEG signals and efficiently assign corresponding weights to those EEG time steps. Experiments demonstrated that, compared with the CNN-based method and recent ASAD methods, TAnet provided improved decoding performance in the KUL dataset, with decoding accuracies of 92.4% (decision window 0.1 s), 94.9% (0.25 s), 95.1% (0.3 s), 95.4% (0.4 s), and 95.5% (0.5 s) with short decision windows (i.e., <1 s). As a new ASAD model with a short decision window, TAnet can potentially facilitate the design of EEG-controlled intelligent hearing aids and sound recognition systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 420,923 |
1902.03642 | (q,p)-Wasserstein GANs: Comparing Ground Metrics for Wasserstein GANs | Generative Adversarial Networks (GANs) have made a major impact in computer vision and machine learning as generative models. Wasserstein GANs (WGANs) brought Optimal Transport (OT) theory into GANs, by minimizing the $1$-Wasserstein distance between model and data distributions as their objective function. Since then, WGANs have gained considerable interest due to their stability and theoretical framework. We contribute to the WGAN literature by introducing the family of $(q,p)$-Wasserstein GANs, which allow the use of more general $p$-Wasserstein metrics for $p\geq 1$ in the GAN learning procedure. While the method is able to incorporate any cost function as the ground metric, we focus on studying the $l^q$ metrics for $q\geq 1$. This is a notable generalization as in the WGAN literature the OT distances are commonly based on the $l^2$ ground metric. We demonstrate the effect of different $p$-Wasserstein distances in two toy examples. Furthermore, we show that the ground metric does make a difference, by comparing different $(q,p)$ pairs on the MNIST and CIFAR-10 datasets. Our experiments demonstrate that changing the ground metric and $p$ can notably improve on the common $(q,p) = (2,1)$ case. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 121,165
2303.16680 | Preventing Object-centric Discovery of Unsound Process Models for Object
Interactions with Loops in Collaborative Systems: Extended Version | Object-centric process discovery (OCPD) constitutes a paradigm shift in process mining. Instead of assuming a single case notion present in the event log, OCPD can handle events without a single case notion, but that are instead related to a collection of objects each having a certain type. The object types constitute multiple, interacting case notions. The output of OCPD is an object-centric Petri net, i.e. a Petri net with object-typed places, that represents the parallel execution of multiple execution flows corresponding to object types. Similar to classical process discovery, where we aim for behaviorally sound process models as a result, in OCPD, we aim for soundness of the resulting object-centric Petri nets. However, the existing OCPD approach can result in violations of soundness. As we will show, one violation arises for multiple interacting object types with loops that arise in collaborative systems. This paper proposes an extended OCPD approach and proves that it does not suffer from this violation of soundness of the resulting object-centric Petri nets. We also show how we prevent the OCPD approach from introducing spurious interactions in the discovered object-centric Petri net. The proposed framework is prototypically implemented. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 354,946 |
0911.2197 | On the relation between plausibility logic and the maximum-entropy
principle: a numerical study | What is the relationship between plausibility logic and the principle of maximum entropy? When does the principle give unreasonable or wrong results? When is it appropriate to use the rule `expectation = average'? Can plausibility logic give the same answers as the principle, and better answers if those of the principle are unreasonable? To try to answer these questions, this study offers a numerical collection of plausibility distributions given by the maximum-entropy principle and by plausibility logic for a set of fifteen simple problems: throwing dice. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 4,919 |
2409.02649 | OpenFact at CheckThat! 2024: Combining Multiple Attack Methods for
Effective Adversarial Text Generation | This paper presents the experiments and results for the CheckThat! Lab at CLEF 2024 Task 6: Robustness of Credibility Assessment with Adversarial Examples (InCrediblAE). The primary objective of this task was to generate adversarial examples in five problem domains in order to evaluate the robustness of widely used text classification methods (fine-tuned BERT, BiLSTM, and RoBERTa) when applied to credibility assessment issues. This study explores the application of ensemble learning to enhance adversarial attacks on natural language processing (NLP) models. We systematically tested and refined several adversarial attack methods, including BERT-Attack, Genetic algorithms, TextFooler, and CLARE, on five datasets across various misinformation tasks. By developing modified versions of BERT-Attack and hybrid methods, we achieved significant improvements in attack effectiveness. Our results demonstrate the potential of modification and combining multiple methods to create more sophisticated and effective adversarial attack strategies, contributing to the development of more robust and secure systems. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 485,782 |
1811.03188 | Solving Jigsaw Puzzles By the Graph Connection Laplacian | We propose a novel mathematical framework to address the problem of automatically solving large jigsaw puzzles. This problem assumes a large image, which is cut into equal square pieces that are arbitrarily rotated and shuffled, and asks to recover the original image given the transformed pieces. The main contribution of this work is a method for recovering the rotations of the pieces when both shuffles and rotations are unknown. A major challenge of this procedure is estimating the graph connection Laplacian without the knowledge of shuffles. A careful combination of our proposed method for estimating rotations with any existing method for estimating shuffles results in a practical solution for the jigsaw puzzle problem. Our theory guarantees, in a clean setting, that our basic idea of recovering rotations is robust to some corruption of the connection graph. Numerical experiments demonstrate the competitive accuracy of this solution, its robustness to corruption, and its computational advantage for large puzzles. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 112,773
2310.02842 | Sweeping Heterogeneity with Smart MoPs: Mixture of Prompts for LLM Task
Adaptation | Large Language Models (LLMs) have the ability to solve a variety of tasks, such as text summarization and mathematical questions, just out of the box, but they are often trained with a single task in mind. Due to high computational costs, the current trend is to use prompt instruction tuning to better adjust monolithic, pretrained LLMs for new -- but often individual -- downstream tasks. Thus, how one would expand prompt tuning to handle -- concomitantly -- heterogeneous tasks and data distributions is a wide open question. To address this gap, we suggest the use of \emph{Mixture of Prompts}, or MoPs, associated with smart gating functionality: the latter -- whose design is one of the contributions of this paper -- can identify relevant skills embedded in different groups of prompts and dynamically assign combined experts (i.e., collection of prompts), based on the target task. Additionally, MoPs are empirically agnostic to any model compression technique applied -- for efficiency reasons -- as well as instruction data source and task composition. In practice, MoPs can simultaneously mitigate prompt training "interference" in multi-task, multi-source scenarios (e.g., task and data heterogeneity across sources), as well as possible implications from model approximations. As a highlight, MoPs manage to decrease final perplexity from $\sim20\%$ up to $\sim70\%$, as compared to baselines, in the federated scenario, and from $\sim 3\%$ up to $\sim30\%$ in the centralized scenario. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 397,011
1903.03491 | Stable Backward Diffusion Models that Minimise Convex Energies | The inverse problem of backward diffusion is known to be ill-posed and highly unstable. Backward diffusion processes appear naturally in image enhancement and deblurring applications. It is therefore greatly desirable to establish a backward diffusion model which implements a smart stabilisation approach that can be used in combination with an easy to handle numerical scheme. So far, existing stabilisation strategies in the literature require sophisticated numerics to solve the underlying initial value problem. We derive a class of space-discrete one-dimensional backward diffusion processes as gradient descent of energies where we gain stability by imposing range constraints. Interestingly, these energies are even convex. Furthermore, we establish a comprehensive theory for the time-continuous evolution and we show that stability carries over to a simple explicit time discretisation of our model. Finally, we confirm the stability and usefulness of our technique in experiments in which we enhance the contrast of digital greyscale and colour images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 123,747
1909.08552 | An Automated Engineering Assistant: Learning Parsers for Technical
Drawings | From a set of technical drawings and expert knowledge, we automatically learn a parser to interpret such a drawing. This enables automatic reasoning and learning on top of a large database of technical drawings. In this work, we develop a similarity based search algorithm to help engineers and designers find or complete designs more easily and flexibly. This is part of an ongoing effort to build an automated engineering assistant. The proposed methods make use of both neural methods to learn to interpret images, and symbolic methods to learn to interpret the structure in the technical drawing and incorporate expert knowledge. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 146,005 |
2408.10711 | Investigating Context Effects in Similarity Judgements in Large Language
Models | Large Language Models (LLMs) have revolutionised the capability of AI models in comprehending and generating natural language text. They are increasingly being used to empower and deploy agents in real-world scenarios, which make decisions and take actions based on their understanding of the context. Therefore researchers, policy makers and enterprises alike are working towards ensuring that the decisions made by these agents align with human values and user expectations. That being said, human values and decisions are not always straightforward to measure and are subject to different cognitive biases. There is a vast section of literature in Behavioural Science which studies biases in human judgements. In this work we report an ongoing investigation on alignment of LLMs with human judgements affected by order bias. Specifically, we focus on a famous human study which showed evidence of order effects in similarity judgements, and replicate it with various popular LLMs. We report the different settings where LLMs exhibit human-like order effect bias and discuss the implications of these findings to inform the design and development of LLM based applications. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 481,991 |
cs/9408103 | A System for Induction of Oblique Decision Trees | This article describes a new system for induction of oblique decision trees. This system, OC1, combines deterministic hill-climbing with two forms of randomization to find a good oblique split (in the form of a hyperplane) at each node of a decision tree. Oblique decision tree methods are tuned especially for domains in which the attributes are numeric, although they can be adapted to symbolic or mixed symbolic/numeric attributes. We present extensive empirical studies, using both real and artificial data, that analyze OC1's ability to construct oblique trees that are smaller and more accurate than their axis-parallel counterparts. We also examine the benefits of randomization for the construction of oblique decision trees. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 540,295 |
2008.10346 | Atomic subgraphs and the statistical mechanics of networks | We develop random graph models where graphs are generated by connecting not only pairs of vertices by edges but also larger subsets of vertices by copies of small atomic subgraphs of arbitrary topology. This allows for the generation of graphs with extensive numbers of triangles and other network motifs commonly observed in many real world networks. More specifically we focus on maximum entropy ensembles under constraints placed on the counts and distributions of atomic subgraphs and derive general expressions for the entropy of such models. We also present a procedure for combining distributions of multiple atomic subgraphs that enables the construction of models with fewer parameters. Expanding the model to include atoms with edge and vertex labels, we obtain a general class of models that can be parametrized in terms of basic building blocks and their distributions that includes many widely used models as special cases. These models include random graphs with arbitrary distributions of subgraphs, random hypergraphs, bipartite models, stochastic block models, models of multilayer networks and their degree corrected and directed versions. We show that the entropy for all these models can be derived from a single expression that is characterized by the symmetry groups of atomic subgraphs. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 192,969
1205.4208 | Frameless ALOHA Protocol for Wireless Networks | We propose a novel distributed random access scheme for wireless networks based on slotted ALOHA, motivated by the analogies between successive interference cancellation and iterative belief-propagation decoding on erasure channels. The proposed scheme assumes that each user independently accesses the wireless link in each slot with a predefined probability, resulting in a distribution of user transmissions over slots. The operation bears analogy with rateless codes, both in terms of probability distributions as well as to the fact that the ALOHA frame becomes fluid and adapted to the current contention process. Our aim is to optimize the slot access probability in order to achieve rateless-like distributions, focusing both on the maximization of the resolution probability of user transmissions and the throughput of the scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 16,073 |
2208.01252 | A Novel Transformer Network with Shifted Window Cross-Attention for
Spatiotemporal Weather Forecasting | Earth Observatory is a growing research area that can capitalize on the powers of AI for short time forecasting, a Now-casting scenario. In this work, we tackle the challenge of weather forecasting using a video transformer network. Vision transformer architectures have been explored in various applications, with major constraints being the computational complexity of Attention and the data-hungry training. To address these issues, we propose the use of Video Swin-Transformer, coupled with a dedicated augmentation scheme. Moreover, we employ gradual spatial reduction on the encoder side and cross-attention on the decoder. The proposed approach is tested on the Weather4Cast2021 weather forecasting challenge data, which requires the prediction of future frames 8 hours ahead (4 per hour) from an hourly weather product sequence. The dataset was normalized to 0-1 to facilitate using the evaluation metrics across different datasets. The model results in an MSE score of 0.4750 when provided with training data, and of 0.4420 during transfer learning without using training data. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 311,113
2412.14166 | MegaSynth: Scaling Up 3D Scene Reconstruction with Synthesized Data | We propose scaling up 3D scene reconstruction by training with synthesized data. At the core of our work is MegaSynth, a procedurally generated 3D dataset comprising 700K scenes - over 50 times larger than the prior real dataset DL3DV - dramatically scaling the training data. To enable scalable data generation, our key idea is eliminating semantic information, removing the need to model complex semantic priors such as object affordances and scene composition. Instead, we model scenes with basic spatial structures and geometry primitives, offering scalability. Besides, we control data complexity to facilitate training while loosely aligning it with real-world data distribution to benefit real-world generalization. We explore training LRMs with both MegaSynth and available real data. Experiment results show that joint training or pre-training with MegaSynth improves reconstruction quality by 1.2 to 1.8 dB PSNR across diverse image domains. Moreover, models trained solely on MegaSynth perform comparably to those trained on real data, underscoring the low-level nature of 3D reconstruction. Additionally, we provide an in-depth analysis of MegaSynth's properties for enhancing model capability, training stability, and generalization. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 518,594 |
1608.05763 | Inference in Probabilistic Logic Programs using Lifted Explanations | In this paper, we consider the problem of lifted inference in the context of Prism-like probabilistic logic programming languages. Traditional inference in such languages involves the construction of an explanation graph for the query and computing probabilities over this graph. When evaluating queries over probabilistic logic programs with a large number of instances of random variables, traditional methods treat each instance separately. For many programs and queries, we observe that explanations can be summarized into substantially more compact structures, which we call lifted explanation graphs. In this paper, we define lifted explanation graphs and operations over them. In contrast to existing lifted inference techniques, our method for constructing lifted explanations naturally generalizes existing methods for constructing explanation graphs. To compute probability of query answers, we solve recurrences generated from the lifted graphs. We show examples where the use of our technique reduces the asymptotic complexity of inference. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 60,020 |
2406.02962 | Docs2KG: Unified Knowledge Graph Construction from Heterogeneous
Documents Assisted by Large Language Models | Even for a conservative estimate, 80% of enterprise data reside in unstructured files, stored in data lakes that accommodate heterogeneous formats. Classical search engines can no longer meet information seeking needs, especially when the task is to browse and explore for insight formulation. In other words, there are no obvious search keywords to use. Knowledge graphs, due to their natural visual appeals that reduce the human cognitive load, become the winning candidate for heterogeneous data integration and knowledge representation. In this paper, we introduce Docs2KG, a novel framework designed to extract multimodal information from diverse and heterogeneous unstructured documents, including emails, web pages, PDF files, and Excel files. By dynamically generating a unified knowledge graph that represents the extracted key information, Docs2KG enables efficient querying and exploration of document data lakes. Unlike existing approaches that focus on domain-specific data sources or pre-designed schemas, Docs2KG offers a flexible and extensible solution that can adapt to various document structures and content types. The proposed framework unifies data processing supporting a multitude of downstream tasks with improved domain interpretability. Docs2KG is publicly accessible at https://docs2kg.ai4wa.com, and a demonstration video is available at https://docs2kg.ai4wa.com/Video. | false | false | false | false | true | true | false | false | true | false | false | false | false | false | false | false | false | false | 461,022
2211.07771 | Edge2Vec: A High Quality Embedding for the Jigsaw Puzzle Problem | Pairwise compatibility measure (CM) is a key component in solving the jigsaw puzzle problem (JPP) and many of its recently proposed variants. With the rapid rise of deep neural networks (DNNs), a trade-off between performance (i.e., accuracy) and computational efficiency has become a very significant issue. Whereas an end-to-end DNN-based CM model exhibits high performance, it becomes virtually infeasible on very large puzzles, due to its highly intensive computation. On the other hand, exploiting the concept of embeddings to alleviate significantly the computational efficiency, has resulted in degraded performance, according to recent studies. This paper derives an advanced CM model (based on modified embeddings and a new loss function, called hard batch triplet loss) for closing the above gap between speed and accuracy; namely a CM model that achieves SOTA results in terms of performance and efficiency combined. We evaluated our newly derived CM on three commonly used datasets, and obtained a reconstruction improvement of 5.8% and 19.5% for so-called Type-1 and Type-2 problem variants, respectively, compared to best known results due to previous CMs. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 330,364 |
1606.04707 | A conceptual approach to model co-evolution of urban structures | Urban structures encompass settlements, characterized by the spatial distribution of built-up areas, but also transportation structures, to connect these built-up areas. These two structures are very different in their origin and function, fulfilling complementary needs: (i) to access space, and (ii) to occupy space. Their evolution cannot be understood by looking at the dynamics of urban aggregations and transportation systems separately. Instead, existing built-up areas feed back on the further development of transportation structures, and the availability of the latter feeds back on the future growth of urban aggregations. To model this co-evolution, we propose an agent-based approach that builds on existing agent-based models for the evolution of trail systems and of urban settlements. The key element in these separate approaches is a generalized communication of agents by means of an adaptive landscape. This landscape is only generated by the agents, but once it exists, it feeds back on their further actions. The emerging trail system or urban aggregation results as a self-organized structure from these collective interactions. In our co-evolutionary approach, we couple these two separate models by means of meta-agents that represent humans with their different demands for housing and mobility. We characterize our approach as a statistical ensemble approach, which allows us to capture the potential of urban evolution in a bottom-up manner, but can be validated against empirical observations. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 57,302
2307.01930 | Learning ECG Signal Features Without Backpropagation Using Linear Laws | This paper introduces LLT-ECG, a novel method for electrocardiogram (ECG) signal classification that leverages concepts from theoretical physics to automatically generate features from time series data. Unlike traditional deep learning approaches, LLT-ECG operates in a forward manner, eliminating the need for backpropagation and hyperparameter tuning. By identifying linear laws that capture shared patterns within specific classes, the proposed method constructs a compact and verifiable representation, enhancing the effectiveness of downstream classifiers. We demonstrate LLT-ECG's state-of-the-art performance on real-world ECG datasets from PhysioNet, underscoring its potential for medical applications where speed and verifiability are crucial. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 377,515 |
1901.11015 | GeNet: Deep Representations for Metagenomics | We introduce GeNet, a method for shotgun metagenomic classification from raw DNA sequences that exploits the known hierarchical structure between labels for training. We provide a comparison with state-of-the-art methods Kraken and Centrifuge on datasets obtained from several sequencing technologies, in which dataset shift occurs. We show that GeNet obtains competitive precision and good recall, with orders of magnitude less memory requirements. Moreover, we show that a linear model trained on top of representations learned by GeNet achieves recall comparable to state-of-the-art methods on the aforementioned datasets, and achieves over 90% accuracy in a challenging pathogen detection problem. This provides evidence of the usefulness of the representations learned by GeNet for downstream biological tasks. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 120,152 |
2108.00802 | Coalitional Control for Self-Organizing Agents | Coalitional control is concerned with the management of multi-agent systems where cooperation cannot be taken for granted (due to, e.g., market competition, logistics). This paper proposes a model predictive control (MPC) framework aimed at large-scale dynamically-coupled systems whose individual components, possessing a limited model of the system, are controlled independently, pursuing possibly competing objectives. The emergence of cooperating clusters of controllers is contemplated through an autonomous negotiation protocol, based on the characterization as a coalitional game of the benefit derived by a broader feedback and the alignment of the individual objectives. Specific mechanisms for the cooperative benefit redistribution that relax the cognitive requirements of the game are employed to compensate for possible local cost increases due to cooperation. As a result, the structure of the overall MPC feedback can be adapted online to the degree of interaction between different parts of the system, while satisfying the individual interests of the agents. A wide-area control application for the power grid with the objective of minimizing frequency deviations and undesired inter-area power transfers is used as study case. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 248,838 |
2110.03382 | Analysis of the influence of political polarization in the vaccination stance: the Brazilian COVID-19 scenario | The outbreak of COVID-19 had a huge global impact, and non-scientific beliefs and political polarization have significantly influenced the population's behavior. In this context, COVID vaccines were made available in an unprecedented time, but a high level of hesitance has been observed that can undermine community immunization. Traditionally, anti-vaccination attitudes are more related to conspiratorial thinking rather than political bias. In Brazil, a country with an exemplar tradition in large-scale vaccination programs, all COVID-related topics have also been discussed under a strong political bias. In this paper, we use a multidimensional analysis framework to understand if anti/pro-vaccination stances expressed by Brazilians in social media are influenced by political polarization. The analysis framework incorporates techniques to automatically infer from users their political orientation, topic modeling to discover their concerns, network analysis to characterize their social behavior, and the characterization of information sources and external influence. Our main findings confirm that anti/pro stances are biased by political polarization, right and left, respectively. While a significant proportion of pro-vaxxers display haste for an immunization program and criticize the government's actions, the anti-vaxxers distrust a vaccine developed in a record time. Anti-vaccination stance is also related to prejudice against China (anti-sinovaxxers), revealing conspiratorial theories related to communism. All groups display an "echo chamber" behavior, revealing they are not open to distinct views. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 259,479 |
2212.07777 | $\ell$-Complementary Subspaces and Codes in Finite Bilinear Spaces | We consider (symmetric, non-degenerate) bilinear spaces over a finite field and investigate the properties of their $\ell$-complementary subspaces, i.e., the subspaces that intersect their dual in dimension $\ell$. This concept generalizes that of a totally isotropic subspace and, in the context of coding theory, specializes to the notions of self-orthogonal, self-dual and linear-complementary-dual (LCD) codes. In this paper, we focus on the enumerative and asymptotic combinatorics of all these objects, giving formulas for their numbers and describing their typical behavior (rather than the behavior of a single object). For example, we give a closed formula for the average weight distribution of an $\ell$-complementary code in the Hamming metric, generalizing a result by Pless and Sloane on the aggregate weight enumerator of binary self-dual codes. Our results also show that self-orthogonal codes, despite being very sparse in the set of codes of the same dimension over a large field, asymptotically behave quite similarly to a typical, not necessarily self-orthogonal, code. In particular, we prove that most self-orthogonal codes are MDS over a large field by computing the asymptotic proportion of the non-MDS ones for growing field size. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 336,523 |
2307.07074 | State-Robust Observability Measures for Sensor Selection in Nonlinear Dynamic Systems | This paper explores the problem of selecting sensor nodes for a general class of nonlinear dynamical networks. In particular, we study the problem by utilizing altered definitions of observability and open-loop lifted observers. The approach is performed by discretizing the system's dynamics using the implicit Runge-Kutta method and by introducing a state-averaged observability measure. The observability measure is computed for a number of perturbed initial states in the vicinity of the system's true initial state. The sensor node selection problem is revealed to retain the submodular and modular properties of the original problem. This allows the problem to be solved efficiently using a greedy algorithm with a guaranteed performance bound while showing an augmented robustness to unknown or uncertain initial conditions. The validity of this approach is numerically demonstrated on a $H_{2}/O_{2}$ combustion reaction network. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 379,272 |
1210.2629 | Optimization in Differentiable Manifolds in Order to Determine the Method of Construction of Prehistoric Wall-Paintings | In this paper a general methodology is introduced for the determination of potential prototype curves used for the drawing of prehistoric wall-paintings. The approach includes a) preprocessing of the wall-paintings contours to properly partition them, according to their curvature, b) choice of prototype curves families, c) analysis and optimization in 4-manifold for a first estimation of the form of these prototypes, d) clustering of the contour parts and the prototypes, to determine a minimal number of potential guides, e) further optimization in 4-manifold, applied to each cluster separately, in order to determine the exact functional form of the potential guides, together with the corresponding drawn contour parts. The introduced methodology simultaneously deals with two problems: a) the arbitrariness in data-points orientation and b) the determination of one proper form for a prototype curve that optimally fits the corresponding contour data. Arbitrariness in orientation has been dealt with a novel curvature based error, while the proper forms of curve prototypes have been exhaustively determined by embedding curvature deformations of the prototypes into 4-manifolds. Application of this methodology to celebrated wall-paintings excavated at Tyrins, Greece and the Greek island of Thera, manifests it is highly probable that these wall-paintings had been drawn by means of geometric guides that correspond to linear spirals and hyperbolae. These geometric forms fit the drawings' lines with an exceptionally low average error, less than 0.39mm. Hence, the approach suggests the existence of accurate realizations of complicated geometric entities, more than 1000 years before their axiomatic formulation in Classical Ages. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 19,031 |