id stringlengths 9 16 | title stringlengths 4 278 | abstract stringlengths 3 4.08k | cs.HC bool 2 classes | cs.CE bool 2 classes | cs.SD bool 2 classes | cs.SI bool 2 classes | cs.AI bool 2 classes | cs.IR bool 2 classes | cs.LG bool 2 classes | cs.RO bool 2 classes | cs.CL bool 2 classes | cs.IT bool 2 classes | cs.SY bool 2 classes | cs.CV bool 2 classes | cs.CR bool 2 classes | cs.CY bool 2 classes | cs.MA bool 2 classes | cs.NE bool 2 classes | cs.DB bool 2 classes | Other bool 2 classes | __index_level_0__ int64 0 541k |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2112.13608 | An Empirical Study of Adder Neural Networks for Object Detection | Adder neural networks (AdderNets) have shown impressive performance on image classification with only addition operations, which are more energy efficient than traditional convolutional neural networks built with multiplications. Compared with classification, there is a strong demand for reducing the energy consumption of modern object detectors via AdderNets for real-world applications such as autonomous driving and face detection. In this paper, we present an empirical study of AdderNets for object detection. We first reveal that the batch normalization statistics in the pre-trained adder backbone should not be frozen, owing to the relatively large feature variance of AdderNets. Moreover, we insert more shortcut connections in the neck part and design a new feature fusion architecture to avoid the sparse features of adder layers. We present extensive ablation studies to explore several design choices of adder detectors. Comparisons with state-of-the-art methods are conducted on the COCO and PASCAL VOC benchmarks. Specifically, the proposed Adder FCOS achieves a 37.8\% AP on the COCO val set, demonstrating performance comparable to that of its convolutional counterpart with about a $1.4\times$ energy reduction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 273,306 |
2011.12799 | StyleSpace Analysis: Disentangled Controls for StyleGAN Image Generation | We explore and analyze the latent style space of StyleGAN2, a state-of-the-art architecture for image generation, using models pretrained on several different datasets. We first show that StyleSpace, the space of channel-wise style parameters, is significantly more disentangled than the other intermediate latent spaces explored by previous works. Next, we describe a method for discovering a large collection of style channels, each of which is shown to control a distinct visual attribute in a highly localized and disentangled manner. Third, we propose a simple method for identifying style channels that control a specific attribute, using a pretrained classifier or a small number of example images. Manipulation of visual attributes via these StyleSpace controls is shown to be better disentangled than via those proposed in previous works. To show this, we make use of a newly proposed Attribute Dependency metric. Finally, we demonstrate the applicability of StyleSpace controls to the manipulation of real images. Our findings pave the way to semantically meaningful and well-disentangled image manipulations via simple and intuitive interfaces. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 208,271 |
2111.02717 | Facial Emotion Recognition using Deep Residual Networks in Real-World Environments | Automatic affect recognition using visual cues is an important task towards a complete interaction between humans and machines. Applications can be found in tutoring systems and human computer interaction. A critical step towards that direction is facial feature extraction. In this paper, we propose a facial feature extractor model trained on an in-the-wild and massively collected video dataset provided by the RealEyes company. The dataset consists of a million labelled frames and 2,616 thousand subjects. As temporal information is important to the emotion recognition domain, we utilise LSTM cells to capture the temporal dynamics in the data. To show the favourable properties of our pre-trained model on modelling facial affect, we use the RECOLA database, and compare with the current state-of-the-art approach. Our model provides the best results in terms of concordance correlation coefficient. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 264,954 |
2409.02335 | What Do You See in Common? Learning Hierarchical Prototypes over Tree-of-Life to Discover Evolutionary Traits | A grand challenge in biology is to discover evolutionary traits - features of organisms common to a group of species with a shared ancestor in the tree of life (also referred to as phylogenetic tree). With the growing availability of image repositories in biology, there is a tremendous opportunity to discover evolutionary traits directly from images in the form of a hierarchy of prototypes. However, current prototype-based methods are mostly designed to operate over a flat structure of classes and face several challenges in discovering hierarchical prototypes, including the issue of learning over-specific features at internal nodes. To overcome these challenges, we introduce the framework of Hierarchy aligned Commonality through Prototypical Networks (HComP-Net). We empirically show that HComP-Net learns prototypes that are accurate, semantically consistent, and generalizable to unseen species in comparison to baselines on birds, butterflies, and fishes datasets. The code and datasets are available at https://github.com/Imageomics/HComPNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 485,654 |
2410.02284 | Correlation and Navigation in the Vocabulary Key Representation Space of Language Models | Language model (LM) decoding is based on the next-token prediction (NTP) probability distribution. For neural LMs (e.g., Transformer-based), the NTP distribution is essentially a softmax-regularized dot product between an encoded input context (query) and fixed vocabulary representations (keys). In this paper, we study the effect of the key distribution on the NTP distribution, with a focus on whether the similarity between keys will trigger spurious correlations in NTP. Through knowledge-probing tasks, we show that in the NTP distribution, the few top-ranked tokens are typically accurate. However, the middle-ranked predictions are highly biased towards tokens that are distributionally (not necessarily semantically) similar to these top ones. For instance, if "P" is predicted as the top-1 token, "A"-"Z" will all be ranked high in NTP, no matter whether they can lead to correct decoding results. This hurts the sampling diversity and makes the sampling of correct, long-tail results hopeless and noisy. We attempt to alleviate this issue via a novel in-context navigation (ICN) method that iteratively pushes the query representation away from explored regions. Specifically, we include the explored decoding results in the context and prompt the LM to generate something else, which encourages the LM to produce a query representation that has small dot products with explored keys. Experiments on knowledge-probing tasks show that our method leads to efficient navigation away from explored keys to correct new keys. We further extend our method to open-ended and chain-of-thought (for reasoning) generation. Experimental results show that ICN contributes to better generation diversity and improved self-consistency voting performance. Finally, we discuss potential training issues caused by the fixed key space together with the challenges and possible ways to address them in future research. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 494,209 |
1703.03101 | Robust MPC for tracking of nonholonomic robots with additive disturbances | In this paper, two robust model predictive control (MPC) schemes are proposed for tracking control of nonholonomic systems with bounded disturbances: tube-MPC and nominal robust MPC (NRMPC). In tube-MPC, the control signal consists of a control action and a nonlinear feedback law based on the deviation of the actual states from the states of a nominal system. It renders the actual trajectory within a tube centered along the optimal trajectory of the nominal system. Recursive feasibility and input-to-state stability are established and the constraints are ensured by tightening the input domain and the terminal region. In NRMPC, by contrast, an optimal control sequence is obtained by solving an optimization problem based on the current state, and the first portion of this sequence is applied to the real system in an open-loop manner during each sampling period. The state of the nominal system model is updated by the actual state at each step, which provides additional feedback. By introducing a robust state constraint and tightening the terminal region, recursive feasibility and input-to-state stability are guaranteed. Simulation results demonstrate the effectiveness of both proposed strategies. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 69,672 |
2410.23682 | CubiXMusashi: Fusion of Wire-Driven CubiX and Musculoskeletal Humanoid Musashi toward Unlimited Performance | Humanoids exhibit a wide variety in terms of joint configuration, actuators, and degrees of freedom, resulting in different achievable movements and tasks for each type. Particularly, musculoskeletal humanoids are developed to closely emulate human body structure and movement functions, consisting of a skeletal framework driven by numerous muscle actuators. The redundant arrangement of muscles relative to the skeletal degrees of freedom has been used to represent the flexible and complex body movements observed in humans. However, due to this flexible body and high degrees of freedom, modeling, simulation, and control become extremely challenging, limiting the feasible movements and tasks. In this study, we integrate the musculoskeletal humanoid Musashi with the wire-driven robot CubiX, capable of connecting to the environment, to form CubiXMusashi. This combination addresses the shortcomings of traditional musculoskeletal humanoids and enables movements beyond the capabilities of other humanoids. CubiXMusashi connects to the environment with wires and drives by winding them, successfully achieving movements such as pull-up, rising from a lying pose, and mid-air kicking, which are difficult for Musashi alone. This concept demonstrates that various humanoids, not limited to musculoskeletal humanoids, can mitigate their physical constraints and acquire new abilities by connecting to the environment and driving through wires. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 504,136 |
2408.01753 | Opinion Dynamics with Set-Based Confidence: Convergence Criteria and Periodic Solutions | This paper introduces a new multidimensional extension of the Hegselmann-Krause (HK) opinion dynamics model, where opinion proximity is not determined by a norm or metric. Instead, each agent trusts opinions within the Minkowski sum $\xi+\mathcal{O}$, where $\xi$ is the agent's current opinion and $\mathcal{O}$ is the confidence set defining acceptable deviations. During each iteration, agents update their opinions by simultaneously averaging the trusted opinions. Unlike traditional HK systems, where $\mathcal{O}$ is a ball in some norm, our model allows the confidence set to be non-convex and even unbounded. We demonstrate that the new model, referred to as SCOD (Set-based Confidence Opinion Dynamics), can exhibit properties absent in the conventional HK model. Some solutions may converge to non-equilibrium points in the state space, while others oscillate periodically. These ``pathologies'' disappear if the set $\mathcal{O}$ is symmetric and contains zero in its interior: similar to the usual HK model, SCOD then converges in a finite number of iterations to one of the equilibrium points. The latter property is also preserved if one agent is "stubborn" and resists changing their opinion, yet still influences the others; however, two stubborn agents can lead to oscillations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | 478,365 |
1908.11645 | EBPC: Extended Bit-Plane Compression for Deep Neural Network Inference and Training Accelerators | In the wake of the success of convolutional neural networks in image classification, object recognition, speech recognition, etc., the demand for deploying these compute-intensive ML models on embedded and mobile systems with tight power and energy constraints at low cost, as well as for boosting throughput in data centers, is growing rapidly. This has sparked a surge of research into specialized hardware accelerators. Their performance is typically limited by I/O bandwidth, power consumption is dominated by I/O transfers to off-chip memory, and on-chip memories occupy a large part of the silicon area. We introduce and evaluate a novel, hardware-friendly, and lossless compression scheme for the feature maps present within convolutional neural networks. We present hardware architectures and synthesis results for the compressor and decompressor in 65nm. With a throughput of one 8-bit word/cycle at 600MHz, they fit into 2.8kGE and 3.0kGE of silicon area, respectively - together the size of less than seven 8-bit multiply-add units at the same throughput. We show that an average compression ratio of 5.1x for AlexNet, 4x for VGG-16, 2.4x for ResNet-34 and 2.2x for MobileNetV2 can be achieved - a gain of 45-70% over existing methods. Our approach also works effectively for various number formats, has a low frame-to-frame variance on the compression ratio, and achieves compression factors for gradient map compression during training that are even better than for inference. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 143,435 |
2406.01304 | CodeR: Issue Resolving with Multi-Agent and Task Graphs | GitHub issue resolving has recently attracted significant attention from academia and industry. SWE-bench is proposed to measure performance in resolving issues. In this paper, we propose CodeR, which adopts a multi-agent framework and pre-defined task graphs to Repair & Resolve reported bugs and add new features within a code Repository. On SWE-bench Lite, CodeR is able to solve 28.33% of issues when submitting only once for each issue. We examine the performance impact of each design choice in CodeR and offer insights to advance this research direction. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | 460,258 |
2403.06716 | Emergency Response Inference Mapping (ERIMap): A Bayesian Network-based Method for Dynamic Observation Processing in Spatially Distributed Emergencies | In emergencies, high stake decisions often have to be made under time pressure and strain. In order to support such decisions, information from various sources needs to be collected and processed rapidly. The information available tends to be temporally and spatially variable, uncertain, and sometimes conflicting, leading to potential biases in decisions. Currently, there is a lack of systematic approaches for information processing and situation assessment which meet the particular demands of emergency situations. To address this gap, we present a Bayesian network-based method called ERIMap that is tailored to the complex information-scape during emergencies. The method enables the systematic and rapid processing of heterogeneous and potentially uncertain observations and draws inferences about key variables of an emergency. It thereby reduces complexity and cognitive load for decision makers. The output of the ERIMap method is a dynamically evolving and spatially resolved map of beliefs about key variables of an emergency that is updated each time a new observation becomes available. The method is illustrated in a case study in which an emergency response is triggered by an accident causing a gas leakage on a chemical plant site. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 436,572 |
2106.15433 | Semantic Reasoning from Model-Agnostic Explanations | With the wide adoption of black-box models, instance-based \emph{post hoc} explanation tools, such as LIME and SHAP became increasingly popular. These tools produce explanations, pinpointing contributions of key features associated with a given prediction. However, the obtained explanations remain at the raw feature level and are not necessarily understandable by a human expert without extensive domain knowledge. We propose ReEx (Reasoning with Explanations), a method applicable to explanations generated by arbitrary instance-level explainers, such as SHAP. By using background knowledge in the form of ontologies, ReEx generalizes instance explanations in a least general generalization-like manner. The resulting symbolic descriptions are specific for individual classes and offer generalizations based on the explainer's output. The derived semantic explanations are potentially more informative, as they describe the key attributes in the context of more general background knowledge, e.g., at the biological process level. We showcase ReEx's performance on nine biological data sets, showing that compact, semantic explanations can be obtained and are more informative than generic ontology mappings that link terms directly to feature names. ReEx is offered as a simple-to-use Python library and is compatible with tools such as SHAP and similar. To our knowledge, this is one of the first methods that directly couples semantic reasoning with contemporary model explanation methods. This paper is a preprint. Full version's doi is: 10.1109/SAMI50585.2021.9378668 | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 243,768 |
2305.03954 | Learning Action Embeddings for Off-Policy Evaluation | Off-policy evaluation (OPE) methods allow us to compute the expected reward of a policy by using the logged data collected by a different policy. OPE is a viable alternative to running expensive online A/B tests: it can speed up the development of new policies, and reduces the risk of exposing customers to suboptimal treatments. However, when the number of actions is large, or certain actions are under-explored by the logging policy, existing estimators based on inverse-propensity scoring (IPS) can have a high or even infinite variance. Saito and Joachims (arXiv:2202.06317v2 [cs.LG]) propose marginalized IPS (MIPS) that uses action embeddings instead, which reduces the variance of IPS in large action spaces. MIPS assumes that good action embeddings can be defined by the practitioner, which is difficult to do in many real-world applications. In this work, we explore learning action embeddings from logged data. In particular, we use intermediate outputs of a trained reward model to define action embeddings for MIPS. This approach extends MIPS to more applications, and in our experiments improves upon MIPS with pre-defined embeddings, as well as standard baselines, both on synthetic and real-world data. Our method does not make assumptions about the reward model class, and supports using additional action information to further improve the estimates. The proposed approach presents an appealing alternative to DR for combining the low variance of DM with the low bias of IPS. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 362,578 |
2101.11275 | ASBSO: An Improved Brain Storm Optimization With Flexible Search Length and Memory-Based Selection | Brain storm optimization (BSO) is a newly proposed population-based optimization algorithm, which uses a logarithmic sigmoid transfer function to adjust its search range during the convergent process. However, this adjustment only varies with the current iteration number and lacks flexibility and variety, which leads to poor search efficiency and robustness in BSO. To alleviate this problem, an adaptive step length structure together with a success memory selection strategy is proposed to be incorporated into BSO. This proposed method, adaptive step length based on memory selection BSO, namely ASBSO, applies multiple step lengths to modify the generation process of new solutions, thus supplying a flexible search according to corresponding problems and convergent periods. The novel memory mechanism, which is capable of evaluating and storing the degree of improvement of solutions, is used to determine the selection probability of step lengths. A set of 57 benchmark functions are used to test ASBSO's search ability, and four real-world problems are adopted to show its application value. All these test results indicate remarkable improvements in the solution quality, scalability, and robustness of ASBSO. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | 217,220 |
1301.4753 | Pattern Matching for Self-Tuning of MapReduce Jobs | In this paper, we study CPU utilization time patterns of several MapReduce applications. After extracting the running patterns of several applications, they are saved in a reference database to be later used to tweak system parameters to efficiently execute unknown applications in the future. To achieve this goal, CPU utilization patterns of new applications are compared with the already known ones in the reference database to find/predict their most probable execution patterns. Because of different pattern lengths, Dynamic Time Warping (DTW) is utilized for such comparison; a correlation analysis is then applied to DTW's outcomes to produce feasible similarity patterns. Three real applications (WordCount, Exim Mainlog parsing and Terasort) are used to evaluate our hypothesis in tweaking system parameters when executing similar applications. Results were very promising and showed the effectiveness of our approach on pseudo-distributed MapReduce platforms. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 21,274 |
1610.07418 | Statistical Machine Translation for Indian Languages: Mission Hindi | This paper discusses Centre for Development of Advanced Computing Mumbai's (CDACM) submission to the NLP Tools Contest on Statistical Machine Translation in Indian Languages (ILSMT) 2014 (collocated with ICON 2014). The objective of the contest was to explore the effectiveness of Statistical Machine Translation (SMT) for Indian language to Indian language and English-Hindi machine translation. In this paper, we have proposed that suffix separation and word splitting for SMT from agglutinative languages to Hindi significantly improves over the baseline (BL). We have also shown that the factored model with reordering outperforms the phrase-based SMT for English-Hindi (\enhi). We report our work on all five pairs of languages, namely Bengali-Hindi (\bnhi), Marathi-Hindi (\mrhi), Tamil-Hindi (\tahi), Telugu-Hindi (\tehi), and \enhi for Health, Tourism, and General domains. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 62,784 |
2003.10822 | Pre-processing Image using Brightening, CLAHE and RETINEX | This paper focuses on finding the most optimal pre-processing methods considering three common algorithms for image enhancement: Brightening, CLAHE and Retinex. For the purpose of image training in general, these methods will be combined to find out the most optimal method for image enhancement. We have carried out the research on the different permutations of the three methods: Brightening, CLAHE and Retinex. The evaluation is based on Canny edge detection applied to all processed images. The sharpness of objects is then judged by the number of true-positive pixels when comparing images. After applying different combinations of the pre-processing functions to images, CLAHE proves to be the most effective in edge improvement, Brightening does not show much effect on edge enhancement, and Retinex even reduces the sharpness of images and contributes little to image enhancement. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 169,446 |
2109.04127 | Word-Level Coreference Resolution | Recent coreference resolution models rely heavily on span representations to find coreference links between word spans. As the number of spans is $O(n^2)$ in the length of text and the number of potential links is $O(n^4)$, various pruning techniques are necessary to make this approach computationally feasible. We propose instead to consider coreference links between individual words rather than word spans and then reconstruct the word spans. This reduces the complexity of the coreference model to $O(n^2)$ and allows it to consider all potential mentions without pruning any of them out. We also demonstrate that, with these changes, SpanBERT for coreference resolution is significantly outperformed by RoBERTa. While being highly efficient, our model performs competitively with recent coreference resolution systems on the OntoNotes benchmark. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 254,289 |
2008.06640 | Automatic Storage Structure Selection for Hybrid Workloads | In the use of database systems, the design of the storage engine and data model directly affects the performance of the database when performing queries. Therefore, the users of the database need to select the storage engine and design the data model according to the workload encountered. However, in a hybrid workload, the query set of the database is dynamically changing, and the design of its optimal storage structure is also changing. Motivated by this, we propose an automatic storage structure selection system based on a learned cost model, which is used to dynamically select the optimal storage structure of the database under hybrid workloads. In the system, we introduce a machine learning method to build a cost model for the storage engine, and a column-oriented data layout generation algorithm. Experimental results show that the proposed system can choose the optimal combination of storage engine and data model according to the current workload, which greatly improves the performance over the default storage structure. The system is also designed to be compatible with different storage engines for easy use in practical applications. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | true | false | 191,848 |
2005.07930 | HVS-Based Perceptual Color Compression of Image Data | In perceptual image coding applications, the main objective is to decrease, as much as possible, Bits Per Pixel (BPP) while avoiding noticeable distortions in the reconstructed image. In this paper, we propose a novel perceptual image coding technique, named Perceptual Color Compression (PCC). PCC is based on a novel model related to Human Visual System (HVS) spectral sensitivity and CIELAB Just Noticeable Color Difference (JNCD). We utilize this modeling to capitalize on the inability of the HVS to perceptually differentiate photons in very similar wavelength bands (e.g., distinguishing very similar shades of a particular color or different colors that look similar). The proposed PCC technique can be used with RGB (4:4:4) image data of various bit depths and spatial resolutions. In the evaluations, we compare the proposed PCC technique with a set of reference methods including Versatile Video Coding (VVC) and High Efficiency Video Coding (HEVC) in addition to two other recently proposed algorithms. Our PCC method attains considerable BPP reductions compared with all four reference techniques including, on average, 52.6% BPP reductions compared with VVC (VVC in All Intra still image coding mode). Regarding image perceptual reconstruction quality, PCC achieves a score of SSIM = 0.99 in all tests in addition to a score of MS-SSIM = 0.99 in all but one test. Moreover, MOS = 5 is attained in 75% of subjective evaluation assessments conducted. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 177,443 |
2005.02498 | Plasticity without phenomenology: a first step | A novel, concurrent multiscale approach to meso/macroscale plasticity is demonstrated. It utilizes a carefully designed coupling of a partial differential equation (pde) based theory of dislocation mediated crystal plasticity with time-averaged inputs from microscopic Dislocation Dynamics (DD), adapting a state-of-the-art mathematical coarse-graining scheme. The stress-strain response of mesoscopic samples at realistic, slow, loading rates up to appreciable values of strain is obtained, with significant speed-up in compute time compared to conventional DD. Effects of crystal orientation, loading rate, and the ratio of the initial mobile to sessile dislocation density on the macroscopic response, for both load and displacement controlled simulations are demonstrated. These results are obtained without using any phenomenological constitutive assumption, except for thermal activation which is not a part of microscopic DD. The results also demonstrate the effect of the internal stresses on the collective behavior of dislocations, manifesting, in a set of examples, as a Stage I to Stage II hardening transition. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 175,885 |
1903.04722 | Progressive Generative Adversarial Binary Networks for Music Generation | Recent improvements in generative adversarial network (GAN) training techniques prove that progressively training a GAN drastically stabilizes the training and improves the quality of outputs produced. Adding layers after the previous ones have converged has proven to help in better overall convergence and stability of the model as well as reducing the training time by a sufficient amount. Thus we use this training technique to train the model progressively in the time and pitch domain i.e. starting from a very small time value and pitch range we gradually expand the matrix sizes until the end result is a completely trained model giving outputs having tensor sizes [4 (bar) x 96 (time steps) x 84 (pitch values) x 8 (tracks)]. As proven in previously proposed models deterministic binary neurons also help in improving the results. Thus we make use of a layer of deterministic binary neurons at the end of the generator to get binary valued outputs instead of fractional values existing between 0 and 1. | false | false | true | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 124,034 |
2111.10769 | Design of a Novel Spectrum Sensing Scheme Based on Long Short-Term Memory and Experimental Validation | Spectrum sensing allows cognitive radio systems to detect relevant signals despite the presence of severe interference. Most of the existing spectrum sensing techniques use a particular signal-noise model with certain assumptions and derive certain detection performance. To deal with this uncertainty, learning based approaches are being adopted and more recently deep learning based tools have become popular. Here, we propose an approach of spectrum sensing based on long short-term memory (LSTM), a critical element of deep learning networks (DLN). Use of LSTM facilitates implicit feature learning from spectrum data. The DLN is trained using several features and the performance of the proposed sensing technique is validated with the help of an empirical testbed setup using Adalm Pluto. The testbed is trained to acquire the primary signal of a real world radio broadcast taking place using FM. Experimental data show that even at low signal to noise ratio, our approach performs well in terms of detection and classification accuracies, as compared to current spectrum sensing methods. | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | false | 267,439 |
2309.06420 | Verifiable Reinforcement Learning Systems via Compositionality | We propose a framework for verifiable and compositional reinforcement learning (RL) in which a collection of RL subsystems, each of which learns to accomplish a separate subtask, are composed to achieve an overall task. The framework consists of a high-level model, represented as a parametric Markov decision process, which is used to plan and analyze compositions of subsystems, and of the collection of low-level subsystems themselves. The subsystems are implemented as deep RL agents operating under partial observability. By defining interfaces between the subsystems, the framework enables automatic decompositions of task specifications, e.g., reach a target set of states with a probability of at least 0.95, into individual subtask specifications, i.e. achieve the subsystem's exit conditions with at least some minimum probability, given that its entry conditions are met. This in turn allows for the independent training and testing of the subsystems. We present theoretical results guaranteeing that if each subsystem learns a policy satisfying its subtask specification, then their composition is guaranteed to satisfy the overall task specification. Conversely, if the subtask specifications cannot all be satisfied by the learned policies, we present a method, formulated as the problem of finding an optimal set of parameters in the high-level model, to automatically update the subtask specifications to account for the observed shortcomings. The result is an iterative procedure for defining subtask specifications, and for training the subsystems to meet them. Experimental results demonstrate the presented framework's novel capabilities in environments with both full and partial observability, discrete and continuous state and action spaces, as well as deterministic and stochastic dynamics. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 391,413 |
1109.2415 | Convergence Rates of Inexact Proximal-Gradient Methods for Convex Optimization | We consider the problem of optimizing the sum of a smooth convex function and a non-smooth convex function using proximal-gradient methods, where an error is present in the calculation of the gradient of the smooth term or in the proximity operator with respect to the non-smooth term. We show that both the basic proximal-gradient method and the accelerated proximal-gradient method achieve the same convergence rate as in the error-free case, provided that the errors decrease at appropriate rates. Using these rates, we perform as well as or better than a carefully chosen fixed error level on a set of structured sparsity problems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 12,120 |
1211.0122 | On Rational-Interpolation Based List-Decoding and List-Decoding Binary Goppa Codes | We derive the Wu list-decoding algorithm for Generalised Reed-Solomon (GRS) codes by using Gr\"obner bases over modules and the Euclidean algorithm (EA) as the initial algorithm instead of the Berlekamp-Massey algorithm (BMA). We present a novel method for constructing the interpolation polynomial fast. We give a new application of the Wu list decoder by decoding irreducible binary Goppa codes up to the binary Johnson radius. Finally, we point out a connection between the governing equations of the Wu algorithm and the Guruswami-Sudan algorithm (GSA), immediately leading to equality in the decoding range and a duality in the choice of parameters needed for decoding, both in the case of GRS codes and in the case of Goppa codes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 19,510 |
1302.6330 | An event-based model for contracts | We introduce a basic model for contracts. Our model extends event structures with a new relation, which faithfully captures the circular dependencies among contract clauses. We establish whether an agreement exists which respects all the contracts at hand (i.e. all the dependencies can be resolved), and we detect the obligations of each participant. The main technical contribution is a correspondence between our model and a fragment of the contract logic PCL. More precisely, we show that the reachable events are exactly those which correspond to provable atoms in the logic. Despite this strong correspondence, our model improves previous work on PCL by exhibiting a finer-grained notion of culpability, which takes into account the legitimate orderings of events. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | true | 22,369 |
1709.00657 | Detection of Moving Object in Dynamic Background Using Gaussian Max-Pooling and Segmentation Constrained RPCA | Due to its efficiency and stability, Robust Principal Component Analysis (RPCA) has been emerging as a promising tool for moving object detection. Unfortunately, existing RPCA based methods assume static or quasi-static background, and thereby they may have trouble in coping with the background scenes that exhibit a persistent dynamic behavior. In this work, we shall introduce two techniques to fill in the gap. First, instead of using the raw pixel-value as features that are brittle in the presence of dynamic background, we devise a so-called Gaussian max-pooling operator to estimate a "stable-value" for each pixel. Those stable-values are robust to various background changes and can therefore distinguish effectively the foreground objects from the background. Then, to obtain more accurate results, we further propose a Segmentation Constrained RPCA (SC-RPCA) model, which incorporates the temporal and spatial continuity in images into RPCA. The inference process of SC-RPCA is a group sparsity constrained nuclear norm minimization problem, which is convex and easy to solve. Experimental results on seven videos from the CDCNET 2014 database show the superior performance of the proposed method. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 79,944 |
1404.6369 | Applying machine learning to the problem of choosing a heuristic to select the variable ordering for cylindrical algebraic decomposition | Cylindrical algebraic decomposition (CAD) is a key tool in computational algebraic geometry, particularly for quantifier elimination over real-closed fields. When using CAD, there is often a choice for the ordering placed on the variables. This can be important, with some problems infeasible with one variable ordering but easy with another. Machine learning is the process of fitting a computer model to a complex function based on properties learned from measured data. In this paper we use machine learning (specifically a support vector machine) to select between heuristics for choosing a variable ordering, outperforming each of the separate heuristics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 32,583 |
1912.12170 | Mitigating large adversarial perturbations on X-MAS (X minus Moving Averaged Samples) | We propose the scheme that mitigates the adversarial perturbation $\epsilon$ on the adversarial example $X_{adv}$ ($=$ $X$ $\pm$ $\epsilon$, $X$ is a benign sample) by subtracting the estimated perturbation $\hat{\epsilon}$ from $X$ $+$ $\epsilon$ and adding $\hat{\epsilon}$ to $X$ $-$ $\epsilon$. The estimated perturbation $\hat{\epsilon}$ comes from the difference between $X_{adv}$ and its moving-averaged outcome $W_{avg}*X_{adv}$ where $W_{avg}$ is an $N \times N$ moving average kernel whose coefficients are all one. Usually, the adjacent samples of an image are close to each other such that we can let $X$ $\approx$ $W_{avg}*X$ (naming this relation after X-MAS [X minus Moving Averaged Samples]). By doing that, we can make the estimated perturbation $\hat{\epsilon}$ fall within the range of $\epsilon$. The scheme is also extended to do the multi-level mitigation by configuring the mitigated adversarial example $X_{adv}$ $\pm$ $\hat{\epsilon}$ as a new adversarial example to be mitigated. The multi-level mitigation gets $X_{adv}$ closer to $X$ with a smaller (i.e. mitigated) perturbation than the original unmitigated perturbation by setting the moving averaged adversarial sample $W_{avg} * X_{adv}$ (which has a smaller perturbation than $X_{adv}$ if $X$ $\approx$ $W_{avg}*X$) as the boundary condition that the multi-level mitigation cannot cross over (i.e. decreasing $\epsilon$ cannot go below and increasing $\epsilon$ cannot go beyond). With the multi-level mitigation, we can get high prediction accuracies even in the adversarial example having a large perturbation (i.e. $\epsilon$ $>$ $16$). The proposed scheme is evaluated with adversarial examples crafted by the FGSM (Fast Gradient Sign Method) based attacks on ResNet-50 trained with the ImageNet dataset. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | false | 158,785 |
1109.3798 | Charge-Balanced Minimum-Power Controls for Spiking Neuron Oscillators | In this paper, we study the optimal control of phase models for spiking neuron oscillators. We focus on the design of minimum-power current stimuli that elicit spikes in neurons at desired times. We furthermore take the charge-balanced constraint into account because in practice undesirable side effects may occur due to the accumulation of electric charge resulting from external stimuli. Charge-balanced minimum-power controls are derived for a general phase model using the maximum principle, where the cases with unbounded and bounded control amplitude are examined. The latter is of practical importance since phase models are more accurate for weak forcing. The developed optimal control strategies are then applied to both mathematically ideal and experimentally observed phase models to demonstrate their applicability, including the phase model for the widely studied Hodgkin-Huxley equations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 12,213 |
2109.07471 | Data-Driven Theory-guided Learning of Partial Differential Equations using SimultaNeous Basis Function Approximation and Parameter Estimation (SNAPE) | The measured spatiotemporal response of various physical processes is utilized to infer the governing partial differential equations (PDEs). We propose SimultaNeous Basis Function Approximation and Parameter Estimation (SNAPE), a technique of parameter estimation of PDEs that is robust against noise levels of nearly 100%, by simultaneously fitting basis functions to the measured response and estimating the parameters of both ordinary and partial differential equations. The domain knowledge of the general multidimensional process is used as a constraint in the formulation of the optimization framework. SNAPE not only demonstrates its applicability on various complex dynamic systems that encompass wide scientific domains including the Schr\"odinger equation, chaotic duffing oscillator, and Navier-Stokes equation but also estimates an analytical approximation to the process response. The method systematically combines the knowledge of well-established scientific theories and the concepts of data science to infer the properties of the process from the observed data. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 255,539 |
2007.14461 | Modeling Behaviour to Predict User State: Self-Reports as Ground Truth | Methods that detect user states such as emotions are useful for interactive systems. In this position paper, we argue for model-based approaches that are trained on user behaviour and self-reported user state as ground truths. In an application context, they record behaviour, extract relevant features, and use the models to predict user states. We describe how this approach can be implemented and discuss its benefits compared to using self-reports alone in an application and to models of behaviour without the self-report ground truths. Finally, we discuss shortcomings of this approach by considering its drawbacks and limitations. | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 189,400 |
cmp-lg/9601011 | Parsing with Typed Feature Structures | In this paper we provide for parsing with respect to grammars expressed in a general TFS-based formalism, a restriction of ALE. Our motivation being the design of an abstract (WAM-like) machine for the formalism, we consider parsing as a computational process and use it as an operational semantics to guide the design of the control structures for the abstract machine. We emphasize the notion of abstract typed feature structures (AFSs) that encode the essential information of TFSs and define unification over AFSs rather than over TFSs. We then introduce an explicit construct of multi-rooted feature structures (MRSs) that naturally extend TFSs and use them to represent phrasal signs as well as grammar rules. We also employ abstractions of MRSs and give the mathematical foundations needed for manipulating them. We formally define grammars and the languages they generate, and then describe a model for computation that corresponds to bottom-up chart parsing: grammars written in the TFS-based formalism are executed by the parser. We show that the computation is correct with respect to the independent definition. Finally, we discuss the class of grammars for which computations terminate and prove that termination can be guaranteed for off-line parsable grammars. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,492 |
1601.01607 | NodIO, a JavaScript framework for volunteer-based evolutionary algorithms: first results | JavaScript is an interpreted language mainly known for its inclusion in web browsers, making them a container for rich Internet based applications. This has inspired its use, for a long time, as a tool for evolutionary algorithms, mainly so in browser-based volunteer computing environments. Several libraries have also been published so far and are in use. However, the last years have seen a resurgence of interest in the language, becoming one of the most popular and thus spawning the improvement of its implementations, which are now the foundation of many new client-server applications. We present such an application for running distributed volunteer-based evolutionary algorithm experiments, and we make a series of measurements to establish the speed of JavaScript in evolutionary algorithms that can serve as a baseline for comparison with other distributed computing experiments. These experiments use different integer and floating point problems, and prove that the speed of JavaScript is actually competitive with other languages commonly used by the evolutionary algorithm practitioner. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | true | 50,762 |
1910.03053 | Graph Few-shot Learning via Knowledge Transfer | Towards the challenging problem of semi-supervised node classification, there have been extensive studies. As a frontier, Graph Neural Networks (GNNs) have aroused great interest recently, which update the representation of each node by aggregating information of its neighbors. However, most GNNs have shallow layers with a limited receptive field and may not achieve satisfactory performance especially when the number of labeled nodes is quite small. To address this challenge, we innovatively propose a graph few-shot learning (GFL) algorithm that incorporates prior knowledge learned from auxiliary graphs to improve classification accuracy on the target graph. Specifically, a transferable metric space characterized by a node embedding and a graph-specific prototype embedding function is shared between auxiliary graphs and the target, facilitating the transfer of structural knowledge. Extensive experiments and ablation studies on four real-world graph datasets demonstrate the effectiveness of our proposed model. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 148,395 |
1603.03984 | An efficient Exact-PGA algorithm for constant curvature manifolds | Manifold-valued datasets are widely encountered in many computer vision tasks. A non-linear analog of the PCA, called the Principal Geodesic Analysis (PGA) suited for data lying on Riemannian manifolds was reported in the literature a decade ago. Since the objective function in PGA is highly non-linear and hard to solve efficiently in general, researchers have proposed a linear approximation. Though this linear approximation is easy to compute, it lacks accuracy especially when the data exhibits a large variance. Recently, an alternative called exact PGA was proposed which tries to solve the optimization without any linearization. For general Riemannian manifolds, though it gives better accuracy than the original (linearized) PGA, for data that exhibit large variance, the optimization is not computationally efficient. In this paper, we propose an efficient exact PGA for constant curvature Riemannian manifolds (CCM-EPGA). CCM-EPGA differs significantly from existing PGA algorithms in two aspects, (i) the distance between a given manifold-valued data point and the principal submanifold is computed analytically and thus no optimization is required as in existing methods. (ii) Unlike the existing PGA algorithms, the descent into codimension-1 submanifolds does not require any optimization but is accomplished through the use of the Riemannian inverse Exponential map and the parallel transport operations. We present theoretical and experimental results for constant curvature Riemannian manifolds depicting favorable performance of CCM-EPGA compared to existing PGA algorithms. We also present data reconstruction from principal components and directions which has not been presented in the literature in this setting. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 53,175 |
2101.00001 | Etat de l'art sur l'application des bandits multi-bras [State of the art on the application of multi-armed bandits] | The multi-armed bandit offers the advantage of learning and exploiting the already learnt knowledge at the same time. This capability allows the approach to be applied in different domains, going from clinical trials, where the goal is investigating the effects of different experimental treatments while minimizing patient losses, to adaptive routing, where the goal is to minimize the delays in a network. This article provides a review of the recent results on applying bandits to real-life scenarios and summarizes the state of the art for each of these fields. Different techniques have been proposed to solve this problem setting, such as epsilon-greedy, Upper Confidence Bound (UCB) and Thompson Sampling (TS). We show here how these algorithms were adapted to solve the different exploration-exploitation problems. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 213,933 |
2502.08512 | Measuring Diversity in Synthetic Datasets | Large language models (LLMs) are widely adopted to generate synthetic datasets for various natural language processing (NLP) tasks, such as text classification and summarization. However, accurately measuring the diversity of these synthetic datasets-an aspect crucial for robust model performance-remains a significant challenge. In this paper, we introduce DCScore, a novel method for measuring synthetic dataset diversity from a classification perspective. Specifically, DCScore formulates diversity evaluation as a sample classification task, leveraging mutual relationships among samples. We further provide theoretical verification of the diversity-related axioms satisfied by DCScore, highlighting its role as a principled diversity evaluation method. Experimental results on synthetic datasets reveal that DCScore enjoys a stronger correlation with multiple diversity pseudo-truths of evaluated datasets, underscoring its effectiveness. Moreover, both empirical and theoretical evidence demonstrate that DCScore substantially reduces computational costs compared to existing approaches. Code is available at: https://github.com/BlueWhaleLab/DCScore. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 533,037 |
2303.02695 | Understanding Bugs in Multi-Language Deep Learning Frameworks | Deep learning frameworks (DLFs) have been playing an increasingly important role in this intelligence age since they act as a basic infrastructure for an increasingly wide range of AI-based applications. Meanwhile, as multi-programming-language (MPL) software systems, DLFs are inevitably suffering from bugs caused by the use of multiple programming languages (PLs). Hence, it is of paramount significance to understand the bugs (especially the bugs involving multiple PLs, i.e., MPL bugs) of DLFs, which can provide a foundation for preventing, detecting, and resolving bugs in the development of DLFs. To this end, we manually analyzed 1497 bugs in three MPL DLFs, namely MXNet, PyTorch, and TensorFlow. First, we classified bugs in these DLFs into 12 types (e.g., algorithm design bugs and memory bugs) according to their bug labels and characteristics. Second, we further explored the impacts of different bug types on the development of DLFs, and found that deployment bugs and memory bugs negatively impact the development of DLFs in different aspects the most. Third, we found that 28.6%, 31.4%, and 16.0% of bugs in MXNet, PyTorch, and TensorFlow are MPL bugs, respectively; the PL combination of Python and C/C++ is most used in fixing more than 92% of MPL bugs in all DLFs. Finally, the code change complexity of MPL bug fixes is significantly greater than that of single-programming-language (SPL) bug fixes in all the three DLFs, while in PyTorch MPL bug fixes have longer open time and greater communication complexity than SPL bug fixes. These results provide insights for bug management in DLFs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 349,454 |
2409.05006 | HelmetPoser: A Helmet-Mounted IMU Dataset for Data-Driven Estimation of Human Head Motion in Diverse Conditions | Helmet-mounted wearable positioning systems are crucial for enhancing safety and facilitating coordination in industrial, construction, and emergency rescue environments. These systems, including LiDAR-Inertial Odometry (LIO) and Visual-Inertial Odometry (VIO), often face challenges in localization due to adverse environmental conditions such as dust, smoke, and limited visual features. To address these limitations, we propose a novel head-mounted Inertial Measurement Unit (IMU) dataset with ground truth, aimed at advancing data-driven IMU pose estimation. Our dataset captures human head motion patterns using a helmet-mounted system, with data from ten participants performing various activities. We explore the application of neural networks, specifically Long Short-Term Memory (LSTM) and Transformer networks, to correct IMU biases and improve localization accuracy. Additionally, we evaluate the performance of these methods across different IMU data window dimensions, motion patterns, and sensor types. We release a publicly available dataset, demonstrate the feasibility of advanced neural network approaches for helmet-based localization, and provide evaluation metrics to establish a baseline for future studies in this field. Data and code can be found at https://lqiutong.github.io/HelmetPoser.github.io/. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 486,598 |
1801.00070 | Sum of squares certificates for stability of planar, homogeneous, and switched systems | We show that existence of a global polynomial Lyapunov function for a homogeneous polynomial vector field or a planar polynomial vector field (under a mild condition) implies existence of a polynomial Lyapunov function that is a sum of squares (sos) and that the negative of its derivative is also a sum of squares. This result is extended to show that such sos-based certificates of stability are guaranteed to exist for all stable switched linear systems. For this class of systems, we further show that if the derivative inequality of the Lyapunov function has an sos certificate, then the Lyapunov function itself is automatically a sum of squares. These converse results establish cases where semidefinite programming is guaranteed to succeed in finding proofs of Lyapunov inequalities. Finally, we demonstrate some merits of replacing the sos requirement on a polynomial Lyapunov function with an sos requirement on its top homogeneous component. In particular, we show that this is a weaker algebraic requirement in addition to being cheaper to impose computationally. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 87,502 |
2205.10635 | SplitPlace: AI Augmented Splitting and Placement of Large-Scale Neural Networks in Mobile Edge Environments | In recent years, deep learning models have become ubiquitous in industry and academia alike. Deep neural networks can solve some of the most complex pattern-recognition problems today, but come with the price of massive compute and memory requirements. This makes the problem of deploying such large-scale neural networks challenging in resource-constrained mobile edge computing platforms, specifically in mission-critical domains like surveillance and healthcare. To solve this, a promising solution is to split resource-hungry neural networks into lightweight disjoint smaller components for pipelined distributed processing. At present, there are two main approaches to do this: semantic and layer-wise splitting. The former partitions a neural network into parallel disjoint models that produce a part of the result, whereas the latter partitions into sequential models that produce intermediate results. However, there is no intelligent algorithm that decides which splitting strategy to use and places such modular splits to edge nodes for optimal performance. To combat this, this work proposes a novel AI-driven online policy, SplitPlace, that uses Multi-Armed-Bandits to intelligently decide between layer and semantic splitting strategies based on the input task's service deadline demands. SplitPlace places such neural network split fragments on mobile edge devices using decision-aware reinforcement learning for efficient and scalable computing. Moreover, SplitPlace fine-tunes its placement engine to adapt to volatile environments. Our experiments on physical mobile-edge environments with real-world workloads show that SplitPlace can significantly improve the state-of-the-art in terms of average response time, deadline violation rate, inference accuracy, and total reward by up to 46, 69, 3 and 12 percent respectively. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 297,789 |
2306.17177 | Leveraging ChatGPT As Text Annotation Tool For Sentiment Analysis | Sentiment analysis is a well-known natural language processing task that involves identifying the emotional tone or polarity of a given piece of text. With the growth of social media and other online platforms, sentiment analysis has become increasingly crucial for businesses and organizations seeking to monitor and comprehend customer feedback as well as opinions. Supervised learning algorithms have been popularly employed for this task, but they require human-annotated text to create the classifier. To overcome this challenge, lexicon-based tools have been used. A drawback of lexicon-based algorithms is their reliance on pre-defined sentiment lexicons, which may not capture the full range of sentiments in natural language. ChatGPT is a new product of OpenAI and has emerged as the most popular AI product. It can answer questions on various topics and tasks. This study explores the use of ChatGPT as a tool for data labeling for different sentiment analysis tasks. It is evaluated on two distinct sentiment analysis datasets with varying purposes. The results demonstrate that ChatGPT outperforms other lexicon-based unsupervised methods with significant improvements in overall accuracy. Specifically, compared to the best-performing lexical-based algorithms, ChatGPT achieves a remarkable increase in accuracy of 20% for the tweets dataset and approximately 25% for the Amazon reviews dataset. These findings highlight the exceptional performance of ChatGPT in sentiment analysis tasks, surpassing existing lexicon-based approaches by a significant margin. The evidence suggests it can be used for annotation on different sentiment analysis events and tasks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 376,620 |
2412.17282 | LMD-PGN: Cross-Modal Knowledge Distillation from First-Person-View Images to Third-Person-View BEV Maps for Universal Point Goal Navigation | Point goal navigation (PGN) is a mapless navigation approach that trains robots to visually navigate to goal points without relying on pre-built maps. Despite significant progress in handling complex environments using deep reinforcement learning, current PGN methods are designed for single-robot systems, limiting their generalizability to multi-robot scenarios with diverse platforms. This paper addresses this limitation by proposing a knowledge transfer framework for PGN, allowing a teacher robot to transfer its learned navigation model to student robots, including those with unknown or black-box platforms. We introduce a novel knowledge distillation (KD) framework that transfers first-person-view (FPV) representations (view images, turning/forward actions) to universally applicable third-person-view (TPV) representations (local maps, subgoals). The state is redefined as reconstructed local maps using SLAM, while actions are mapped to subgoals on a predefined grid. To enhance training efficiency, we propose a sampling-efficient KD approach that aligns training episodes via a noise-robust local map descriptor (LMD). Although validated on 2D wheeled robots, this method can be extended to 3D action spaces, such as drones. Experiments conducted in Habitat-Sim demonstrate the feasibility of the proposed framework, requiring minimal implementation effort. This study highlights the potential for scalable and cross-platform PGN solutions, expanding the applicability of embodied AI systems in multi-robot scenarios. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 519,904 |
1809.07225 | Deterministic limit of temporal difference reinforcement learning for
stochastic games | Reinforcement learning in multiagent systems has been studied in the fields of economic game theory, artificial intelligence and statistical physics by developing an analytical understanding of the learning dynamics (often in relation to the replicator dynamics of evolutionary game theory). However, the majority of these analytical studies focus on repeated normal form games, which only have a single environmental state. Environmental dynamics, i.e., changes in the state of an environment affecting the agents' payoffs, have received less attention, lacking a universal method to obtain deterministic equations from established multistate reinforcement learning algorithms. In this work we present a novel methodological extension, separating the interaction from the adaptation time scale, to derive the deterministic limit of a general class of reinforcement learning algorithms, called temporal difference learning. This form of learning is equipped to function in more realistic multistate environments by using the estimated value of future environmental states to adapt the agent's behavior. We demonstrate the potential of our method with the three well established learning algorithms Q learning, SARSA learning and Actor-Critic learning. Illustrations of their dynamics on two multiagent, multistate environments reveal a wide range of different dynamical regimes, such as convergence to fixed points, limit cycles, and even deterministic chaos. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 108,237
2105.05217 | Representation Learning via Global Temporal Alignment and
Cycle-Consistency | We introduce a weakly supervised method for representation learning based on aligning temporal sequences (e.g., videos) of the same process (e.g., human action). The main idea is to use the global temporal ordering of latent correspondences across sequence pairs as a supervisory signal. In particular, we propose a loss based on scoring the optimal sequence alignment to train an embedding network. Our loss is based on a novel probabilistic path finding view of dynamic time warping (DTW) that contains the following three key features: (i) the local path routing decisions are contrastive and differentiable, (ii) pairwise distances are cast as probabilities that are contrastive as well, and (iii) our formulation naturally admits a global cycle consistency loss that verifies correspondences. For evaluation, we consider the tasks of fine-grained action classification, few shot learning, and video synchronization. We report significant performance increases over previous methods. In addition, we report two applications of our temporal alignment framework, namely 3D pose reconstruction and fine-grained audio/visual retrieval. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 234,750 |
2203.01668 | Translational Lung Imaging Analysis Through Disentangled Representations | The development of new treatments often requires clinical trials with translational animal models using (pre)-clinical imaging to characterize inter-species pathological processes. Deep Learning (DL) models are commonly used to automate retrieving relevant information from the images. Nevertheless, they typically suffer from low generalizability and explainability as a product of their entangled design, resulting in a specific DL model per animal model. Consequently, it is not possible to take advantage of the high capacity of DL to discover statistical relationships from inter-species images. To alleviate this problem, in this work, we present a model capable of extracting disentangled information from images of different animal models and the mechanisms that generate the images. Our method is located at the intersection between deep generative models, disentanglement and causal representation learning. It is optimized from images of pathological lung infected by Tuberculosis and is able: a) from an input slice, infer its position in a volume, the animal model to which it belongs, the damage present and even more, generate a mask covering the whole lung (similar overlap measures to the nnU-Net), b) generate realistic lung images by setting the above variables and c) generate counterfactual images, namely, healthy versions of a damaged input slice. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 283,473
1801.00444 | Common Throughput Maximization in UAV-Enabled OFDMA Systems with Delay
Consideration | The use of unmanned aerial vehicles (UAVs) as communication platforms is of great practical significance in future wireless networks, especially for on-demand deployment in temporary events and emergency situations. Although prior works have shown the performance improvement by exploiting the UAV's mobility, they mainly focus on delay-tolerant applications. As delay requirements fundamentally limit the UAV's mobility, it remains unknown whether the UAV is able to provide any performance gain in delay-constrained communication scenarios. Motivated by this, we study in this paper a UAV-enabled orthogonal frequency division multiple access (OFDMA) network where a UAV is dispatched as a mobile base station (BS) to serve a group of users on the ground. We consider a minimum-rate ratio (MRR) for each user, defined as the minimum instantaneous rate required over the average achievable throughput, to flexibly adjust the percentage of its delay-constrained data traffic. Under a given set of constraints on the users' MRRs, we aim to maximize the minimum average throughput of all users by jointly optimizing the UAV trajectory and OFDMA resource allocation. First, we show that the max-min throughput in general decreases as the users' MRR constraints become more stringent, which reveals a fundamental throughput-delay tradeoff in UAV-enabled communications. Next, we propose an iterative parameter-assisted block coordinate descent method to optimize the UAV trajectory and OFDMA resource allocation alternately, by applying the successive convex optimization and the Lagrange duality, respectively. Furthermore, an efficient and systematic UAV trajectory initialization scheme is proposed based on a simple circular trajectory. Finally, simulation results are provided to verify our theoretical findings and demonstrate the effectiveness of our proposed designs. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 87,566
2308.13726 | Dynamic Mode Decomposition for data-driven analysis and reduced-order
modelling of ExB plasmas: I. Extraction of spatiotemporally coherent patterns | In this two-part article, we evaluate the utility and the generalizability of the Dynamic Mode Decomposition (DMD) algorithm for data-driven analysis and reduced-order modelling of plasma dynamics in cross-field ExB configurations. The DMD algorithm is an interpretable data-driven method that finds a best-fit linear model describing the time evolution of spatiotemporally coherent structures (patterns) in data. We have applied the DMD to extensive high-fidelity datasets generated using a particle-in-cell (PIC) code based on a cost-efficient reduced-order PIC scheme. In this part, we first provide an overview of the concept of DMD and its underpinning Proper Orthogonal and Singular Value Decomposition methods. Two of the main DMD variants are next introduced. We then present and discuss the results of the DMD application in terms of the identification and extraction of the dominant spatiotemporal modes from high-fidelity data over a range of simulation conditions. We demonstrate that the DMD variant based on variable projection optimization (OPT-DMD) outperforms the basic DMD method in identification of the modes underlying the data, leading to notably more reliable reconstruction of the ground-truth. Furthermore, we show in multiple test cases that the discrete frequency spectrum of OPT-DMD-extracted modes is consistent with the temporal spectrum from the Fast Fourier Transform of the data. This observation implies that the OPT-DMD augments the conventional spectral analyses by being able to uniquely reveal the spatial structure of the dominant modes in the frequency spectra, thus, yielding more accessible, comprehensive information on the spatiotemporal characteristics of the plasma phenomena. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 388,026 |
2110.14196 | From Image to Imuge: Immunized Image Generation | We introduce Imuge, an image tamper resilient generative scheme for image self-recovery. The traditional manner of concealing image content within the image is inflexible and fragile to diverse digital attacks, e.g., image cropping and JPEG compression. To address this issue, we jointly train a U-Net backboned encoder, a tamper localization network and a decoder for image recovery. Given an original image, the encoder produces a visually indistinguishable immunized image. At the recipient's side, the verifying network localizes the malicious modifications, and the original content can be approximately recovered by the decoder, despite the presence of the attacks. Several strategies are proposed to boost the training efficiency. We demonstrate that our method can recover the details of the tampered regions with high quality despite the presence of various kinds of attacks. Comprehensive ablation studies are conducted to validate our network designs. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 263,456
2308.09599 | Language-Guided Diffusion Model for Visual Grounding | Visual grounding (VG) tasks involve explicit cross-modal alignment, as semantically corresponding image regions are to be located for the language phrases provided. Existing approaches complete such visual-text reasoning in a single-step manner. Their performance, however, comes at the cost of large-scale anchors and over-designed multi-modal fusion modules based on human priors, leading to complicated frameworks that may be difficult to train and that overfit to specific scenarios. Even worse, such once-for-all reasoning mechanisms are incapable of refining boxes continuously to enhance query-region matching. In contrast, in this paper, we formulate an iterative reasoning process by denoising diffusion modeling. Specifically, we propose a language-guided diffusion framework for visual grounding, LG-DVG, which trains the model to progressively reason queried object boxes by denoising a set of noisy boxes with the language guide. To achieve this, LG-DVG gradually perturbs query-aligned ground truth boxes to noisy ones and reverses this process step by step, conditional on query semantics. Extensive experiments for our proposed framework on five widely used datasets validate the superior performance of solving visual grounding, a cross-modal alignment task, in a generative way. The source codes are available at https://github.com/iQua/vgbase/tree/main/examples/DiffusionVG. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 386,354
2402.03289 | Make Every Move Count: LLM-based High-Quality RTL Code Generation Using
MCTS | Existing large language models (LLMs) for register transfer level code generation face challenges like compilation failures and suboptimal power, performance, and area (PPA) efficiency. This is due to the lack of PPA awareness in conventional transformer decoding algorithms. In response, we present an automated transformer decoding algorithm that integrates Monte Carlo tree-search for lookahead, guiding the transformer to produce compilable, functionally correct, and PPA-optimized code. Empirical evaluation with a fine-tuned language model on RTL codesets shows that our proposed technique consistently generates functionally correct code compared to prompting-only methods and effectively addresses the PPA-unawareness drawback of naive large language models. For the largest design generated by the state-of-the-art LLM (16-bit adder), our technique can achieve a 31.8% improvement in the area-delay product. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 426,941 |
2307.06401 | Deterministic Multi-sensor Measurement-adaptive Birth using Labeled
Random Finite Sets | Measurement-adaptive track initiation remains a critical design requirement of many practical multi-target tracking systems. For labeled random finite sets multi-object filters, prior work has been established to construct a labeled multi-object birth density using measurements from multiple sensors. A truncation procedure has also been provided that leverages a stochastic Gibbs sampler to truncate the birth density for scalability. In this work, we introduce a deterministic herded Gibbs sampling truncation solution for efficient multi-sensor adaptive track initialization. Removing the stochastic behavior of the track initialization procedure without impacting average tracking performance enables a more robust tracking solution more suitable for safety-critical applications. Simulation results for linear sensing scenarios are provided to verify performance. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 379,056 |
2105.04356 | Coconut trees detection and segmentation in aerial imagery using mask
region-based convolution neural network | Food resources face severe damages under extraordinary situations of catastrophes such as earthquakes, cyclones, and tsunamis. Under such scenarios, speedy assessment of food resources from agricultural land is critical as it supports aid activity in the disaster hit areas. In this article, a deep learning approach is presented for the detection and segmentation of coconut trees in aerial imagery provided through the AI competition organized by the World Bank in collaboration with OpenAerialMap and WeRobotics. A Mask Region-based Convolutional Neural Network approach was used for the identification and segmentation of coconut trees. For the segmentation task, the Mask R-CNN model with ResNet50 and ResNet101 based architectures was used. Several experiments with different configuration parameters were performed and the best configuration for the detection of coconut trees with more than 90% confidence factor was reported. For the purpose of evaluation, the Microsoft COCO dataset evaluation metric, namely mean average precision (mAP), was used. An overall 91% mean average precision for coconut trees detection was achieved. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 234,484
2206.05794 | SGD and Weight Decay Secretly Minimize the Rank of Your Neural Network | We investigate the inherent bias of Stochastic Gradient Descent (SGD) toward learning low-rank weight matrices during the training of deep neural networks. Our results demonstrate that training with mini-batch SGD and weight decay induces a bias toward rank minimization in the weight matrices. Specifically, we show both theoretically and empirically that this bias becomes more pronounced with smaller batch sizes, higher learning rates, or stronger weight decay. Additionally, we predict and empirically confirm that weight decay is essential for this bias to occur. Unlike previous literature, our analysis does not rely on assumptions about the data, convergence, or optimality of the weight matrices, making it applicable to a wide range of neural network architectures of any width or depth. Finally, we empirically explore the connection between this bias and generalization, finding that it has a marginal effect on the test performance. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,138 |
2309.01523 | A Blackbox Model Is All You Need to Breach Privacy: Smart Grid
Forecasting Models as a Use Case | This paper investigates the potential privacy risks associated with forecasting models, with specific emphasis on their application in the context of smart grids. While machine learning and deep learning algorithms offer valuable utility, concerns arise regarding their exposure of sensitive information. Previous studies have focused on classification models, overlooking risks associated with forecasting models. Deep learning based forecasting models, such as Long Short Term Memory (LSTM), play a crucial role in several applications including optimizing smart grid systems but also introduce privacy risks. Our study analyzes the ability of forecasting models to leak global properties and privacy threats in smart grid systems. We demonstrate that a black box access to an LSTM model can reveal a significant amount of information equivalent to having access to the data itself (with the difference being as low as 1% in Area Under the ROC Curve). This highlights the importance of protecting forecasting models at the same level as the data. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 389,715 |
2111.11046 | FRT-PAD: Effective Presentation Attack Detection Driven by Face Related
Task | The robustness and generalization ability of Presentation Attack Detection (PAD) methods is critical to ensure the security of Face Recognition Systems (FRSs). However, in a real scenario, Presentation Attacks (PAs) are various and it is hard to predict the Presentation Attack Instrument (PAI) species that will be used by the attacker. Existing PAD methods are highly dependent on the limited training set and cannot generalize well to unknown PAI species. Unlike this specific PAD task, other face related tasks trained by huge amount of real faces (e.g. face recognition and attribute editing) can be effectively adopted into different application scenarios. Inspired by this, we propose to trade position of PAD and face related work in a face system and apply the free acquired prior knowledge from face related tasks to solve face PAD, so as to improve the generalization ability in detecting PAs. The proposed method, first introduces task specific features from other face related task, then, we design a Cross-Modal Adapter using a Graph Attention Network (GAT) to re-map such features to adapt to PAD task. Finally, face PAD is achieved by using the hierarchical features from a CNN-based PA detector and the re-mapped features. The experimental results show that the proposed method can achieve significant improvements in the complicated and hybrid datasets, when compared with the state-of-the-art methods. In particular, when training on the datasets OULU-NPU, CASIA-FASD, and Idiap Replay-Attack, we obtain HTER (Half Total Error Rate) of 5.48% for the testing dataset MSU-MFSD, outperforming the baseline by 7.39%. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 267,531 |
1703.08864 | Learning Simpler Language Models with the Differential State Framework | Learning useful information across long time lags is a critical and difficult problem for temporal neural models in tasks such as language modeling. Existing architectures that address the issue are often complex and costly to train. The Differential State Framework (DSF) is a simple and high-performing design that unifies previously introduced gated neural models. DSF models maintain longer-term memory by learning to interpolate between a fast-changing data-driven representation and a slowly changing, implicitly stable state. This requires hardly any more parameters than a classical, simple recurrent network. Within the DSF framework, a new architecture is presented, the Delta-RNN. In language modeling at the word and character levels, the Delta-RNN outperforms popular complex architectures, such as the Long Short Term Memory (LSTM) and the Gated Recurrent Unit (GRU), and, when regularized, performs comparably to several state-of-the-art baselines. At the subword level, the Delta-RNN's performance is comparable to that of complex gated architectures. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 70,658 |
2308.05342 | Metacognitive Prompting Improves Understanding in Large Language Models | In Large Language Models (LLMs), there have been consistent advancements in task-specific performance, largely influenced by effective prompt design. Recent advancements in prompting have enhanced reasoning in logic-intensive tasks for LLMs, yet the nuanced understanding abilities of these models, crucial for processing and interpreting complex information, remain underexplored. In this study, we introduce Metacognitive Prompting (MP), a strategy inspired by human introspective reasoning processes. Using MP, LLMs undergo a systematic series of structured, self-aware evaluations, drawing on both their vast inherent knowledge and new insights. We conduct extensive experiments on four prevalent LLMs: Llama2, PaLM2, GPT-3.5, and GPT-4, across ten natural language understanding (NLU) datasets from GLUE, SuperGLUE, BLUE, and LexGLUE benchmarks. Additionally, we compare our method with chain-of-thought prompting and its advanced versions. The results show that GPT-4 consistently excels across all tasks, while other models have shown significant progress in some tasks when used in conjunction with MP. Furthermore, MP consistently outperforms existing prompting methods in both general and domain-specific NLU tasks. This study underscores the potential to amplify the understanding abilities of LLMs and highlights the benefits of mirroring human introspective reasoning in NLU tasks. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 384,755 |
1810.00818 | RGB-D Object Detection and Semantic Segmentation for Autonomous
Manipulation in Clutter | Autonomous robotic manipulation in clutter is challenging. A large variety of objects must be perceived in complex scenes, where they are partially occluded and embedded among many distractors, often in restricted spaces. To tackle these challenges, we developed a deep-learning approach that combines object detection and semantic segmentation. The manipulation scenes are captured with RGB-D cameras, for which we developed a depth fusion method. Employing pretrained features makes learning from small annotated robotic data sets possible. We evaluate our approach on two challenging data sets: one captured for the Amazon Picking Challenge 2016, where our team NimbRo came in second in the Stowing and third in the Picking task, and one captured in disaster-response scenarios. The experiments show that object detection and semantic segmentation complement each other and can be combined to yield reliable object perception. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 109,264 |
2005.12022 | Learning to Charge RF-Energy Harvesting Devices in WiFi Networks | In this paper, we consider a solar-powered Access Point (AP) that is tasked with supporting both non-energy harvesting or legacy data users such as laptops, and devices with Radio Frequency (RF)-energy harvesting and sensing capabilities. We propose two solutions that enable the AP to manage its harvested energy via transmit power control and also ensure devices perform sensing tasks frequently. Advantageously, our solutions are suitable for current wireless networks and do not require perfect channel gain information or non-causal energy arrival at devices. The first solution uses a deep Q-network (DQN) whilst the second solution uses Model Predictive Control (MPC) to control the AP's transmit power. Our results show that our DQN and MPC solutions improve energy efficiency and user satisfaction by respectively 16% to 35%, and 10% to 42% as compared to competing algorithms. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | true | 178,625 |
2004.07085 | Neural Status Registers | Standard Neural Networks can learn mathematical operations, but they do not extrapolate. Extrapolation means that the model can apply to larger numbers, well beyond those observed during training. Recent architectures tackle arithmetic operations and can extrapolate; however, the equally important problem of quantitative reasoning remains unaddressed. In this work, we propose a novel architectural element, the Neural Status Register (NSR), for quantitative reasoning over numbers. Our NSR relaxes the discrete bit logic of physical status registers to continuous numbers and allows end-to-end learning with gradient descent. Experiments show that the NSR achieves solutions that extrapolate to numbers many orders of magnitude larger than those in the training set. We successfully train the NSR on number comparisons, piecewise discontinuous functions, counting in sequences, recurrently finding minimums, finding shortest paths in graphs, and comparing digits in images. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 172,687 |
2403.11854 | denoiSplit: a method for joint microscopy image splitting and
unsupervised denoising | In this work, we present denoiSplit, a method to tackle a new analysis task, i.e. the challenge of joint semantic image splitting and unsupervised denoising. This dual approach has important applications in fluorescence microscopy, where semantic image splitting is highly valuable but noise generally hinders the downstream analysis of image content. Image splitting involves dissecting an image into its distinguishable semantic structures. We show that the current state-of-the-art method for this task struggles in the presence of image noise, inadvertently also distributing the noise across the predicted outputs. The method we present here can deal with image noise by integrating an unsupervised denoising subtask. This integration results in improved semantic image unmixing, even in the presence of notable and realistic levels of imaging noise. A key innovation in denoiSplit is the use of specifically formulated noise models and the suitable adjustment of KL-divergence loss for the high-dimensional hierarchical latent space we are training. We showcase the performance of denoiSplit across multiple tasks on real-world microscopy images. Additionally, we perform qualitative and quantitative evaluations and compare the results to existing benchmarks, demonstrating the effectiveness of using denoiSplit: a single Variational Splitting Encoder-Decoder (VSE) Network using two suitable noise models to jointly perform semantic splitting and denoising. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 438,895
2006.07116 | NAS-Bench-NLP: Neural Architecture Search Benchmark for Natural Language
Processing | Neural Architecture Search (NAS) is a promising and rapidly evolving research area. Training a large number of neural networks requires an exceptional amount of computational power, which makes NAS unreachable for those researchers who have limited or no access to high-performance clusters and supercomputers. A few benchmarks with precomputed neural architectures performances have been recently introduced to overcome this problem and ensure more reproducible experiments. However, these benchmarks are only for the computer vision domain and, thus, are built from the image datasets and convolution-derived architectures. In this work, we step outside the computer vision domain by leveraging the language modeling task, which is the core of natural language processing (NLP). Our main contribution is as follows: we have provided search space of recurrent neural networks on the text datasets and trained 14k architectures within it; we have conducted both intrinsic and extrinsic evaluation of the trained models using datasets for semantic relatedness and language understanding evaluation; finally, we have tested several NAS algorithms to demonstrate how the precomputed results can be utilized. We believe that our results have high potential of usage for both NAS and NLP communities. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 181,693 |
1902.07580 | Where Do Human Heuristics Come From? | Human decision-making deviates from the optimal solution, that maximizes cumulative rewards, in many situations. Here we approach this discrepancy from the perspective of bounded rationality and our goal is to provide a justification for such seemingly sub-optimal strategies. More specifically we investigate the hypothesis, that humans do not know optimal decision-making algorithms in advance, but instead employ a learned, resource-bounded approximation. The idea is formalized through combining a recently proposed meta-learning model based on Recurrent Neural Networks with a resource-bounded objective. The resulting approach is closely connected to variational inference and the Minimum Description Length principle. Empirical evidence is obtained from a two-armed bandit task. Here we observe patterns in our family of models that resemble differences between individual human participants. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 122,013 |
2408.01084 | Adaptive Contrastive Decoding in Retrieval-Augmented Generation for
Handling Noisy Contexts | When using large language models (LLMs) in knowledge-intensive tasks, such as open-domain question answering, external context can bridge the gap between external knowledge and the LLMs' parametric knowledge. Recent research has been developed to amplify contextual knowledge over the parametric knowledge of LLMs with contrastive decoding approaches. While these approaches could yield truthful responses when relevant context is provided, they are prone to vulnerabilities when faced with noisy contexts. We extend the scope of previous studies to encompass noisy contexts and propose adaptive contrastive decoding (ACD) to leverage contextual influence effectively. ACD demonstrates improvements in open-domain question answering tasks compared to baselines, especially in robustness by remaining undistracted by noisy contexts in retrieval-augmented generation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 478,096 |
2305.10467 | Analysing Biomedical Knowledge Graphs using Prime Adjacency Matrices | Most phenomena related to biomedical tasks are inherently complex, and in many cases, are expressed as signals on biomedical Knowledge Graphs (KGs). In this work, we introduce the use of a new representation framework, the Prime Adjacency Matrix (PAM) for biomedical KGs, which allows for very efficient network analysis. PAM utilizes prime numbers to enable representing the whole KG with a single adjacency matrix and the fast computation of multiple properties of the network. We illustrate the applicability of the framework in the biomedical domain by working on different biomedical knowledge graphs and by providing two case studies: one on drug-repurposing for COVID-19 and one on important metapath extraction. We show that we achieve better results than the original proposed workflows, using very simple methods that require no training, in considerably less time. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 365,088 |
2502.05424 | SAMGPT: Text-free Graph Foundation Model for Multi-domain Pre-training
and Cross-domain Adaptation | Graphs are able to model interconnected entities in many online services, supporting a wide range of applications on the Web. This raises an important question: How can we train a graph foundational model on multiple source domains and adapt to an unseen target domain? A major obstacle is that graphs from different domains often exhibit divergent characteristics. Some studies leverage large language models to align multiple domains based on textual descriptions associated with the graphs, limiting their applicability to text-attributed graphs. For text-free graphs, a few recent works attempt to align different feature distributions across domains, while generally neglecting structural differences. In this work, we propose a novel Structure Alignment framework for text-free Multi-domain Graph Pre-Training and cross-domain adaptation (SAMGPT). It is designed to learn multi-domain knowledge from graphs originating in multiple source domains, which can then be adapted to address applications in an unseen target domain. Specifically, we introduce a set of structure tokens to harmonize structure-based aggregation across source domains during the pre-training phase. Next, for cross-domain adaptation, we design dual prompts, namely, holistic prompts and specific prompts, which adapt unified multi-domain structural knowledge and fine-grained, domain-specific information, respectively, to a target domain. Finally, we conduct comprehensive experiments on seven public datasets to evaluate and analyze the effectiveness of SAMGPT. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 531,598 |
2308.03570 | Partial identification of kernel based two sample tests with mismeasured
data | Nonparametric two-sample tests such as the Maximum Mean Discrepancy (MMD) are often used to detect differences between two distributions in machine learning applications. However, the majority of existing literature assumes that error-free samples from the two distributions of interest are available. We relax this assumption and study the estimation of the MMD under $\epsilon$-contamination, where a possibly non-random $\epsilon$ proportion of one distribution is erroneously grouped with the other. We show that under $\epsilon$-contamination, the typical estimate of the MMD is unreliable. Instead, we study partial identification of the MMD, and characterize sharp upper and lower bounds that contain the true, unknown MMD. We propose a method to estimate these bounds, and show that it gives estimates that converge to the sharpest possible bounds on the MMD as sample size increases, with a convergence rate that is faster than alternative approaches. Using three datasets, we empirically validate that our approach is superior to the alternatives: it gives tight bounds with a low false coverage rate. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 384,083 |
2209.04430 | Investigation of a Machine learning methodology for the SKA pulsar
search pipeline | The SKA pulsar search pipeline will be used for real time detection of pulsars. Modern radio telescopes such as SKA will be generating petabytes of data in their full scale of operation. Hence experience-based and data-driven algorithms become indispensable for applications such as candidate detection. Here we describe our findings from testing a state of the art object detection algorithm called Mask R-CNN to detect candidate signatures in the SKA pulsar search pipeline. We have trained the Mask R-CNN model to detect candidate images. A custom annotation tool was developed to mark the regions of interest in large datasets efficiently. We have successfully demonstrated this algorithm by detecting candidate signatures on a simulation dataset. The paper presents details of this work with a highlight on the future prospects. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 316,783 |
2305.17873 | The Digital Divide in Process Safety: Quantitative Risk Analysis of
Human-AI Collaboration | Digital technologies have dramatically accelerated the digital transformation in process industries, boosted new industrial applications, upgraded the production system, and enhanced operational efficiency. In contrast, the challenges and gaps between human and artificial intelligence (AI) have become more and more prominent, and the digital divide in process safety is widening. The study attempts to address the following questions: (i) What is AI in the process safety context? (ii) What is the difference between AI and humans in process safety? (iii) How do AI and humans collaborate in process safety? (iv) What are the challenges and gaps in human-AI collaboration? (v) How to quantify the risk of human-AI collaboration in process safety? Qualitative risk analysis based on brainstorming and literature review, and quantitative risk analysis based on layer of protection analysis (LOPA) and Bayesian network (BN), were applied to explore and model these questions. The importance of human reliability should be stressed in the digital age, rather than merely increasing the reliability of AI, and human-centered AI design in process safety needs to be propagated. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 368,781 |
2406.19248 | Staggered Quantizers for Perfect Perceptual Quality: A Connection
between Quantizers with Common Randomness and Without | The rate-distortion-perception (RDP) framework has attracted significant recent attention due to its application in neural compression. It is important to understand the underlying mechanism connecting procedures with common randomness and those without. Different from previous efforts, we study this problem from a quantizer design perspective. By analyzing an idealized setting, we provide an interpretation of the advantage of dithered quantization in the RDP setting, which further allows us to make a conceptual connection between randomized (dithered) quantizers and quantizers without common randomness. This new understanding leads to a new procedure for RDP coding based on staggered quantizers. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 468,349 |
2412.11744 | Conditional Diffusion Models Based Conditional Independence Testing | Conditional independence (CI) testing is a fundamental task in modern statistics and machine learning. The conditional randomization test (CRT) was recently introduced to test whether two random variables, $X$ and $Y$, are conditionally independent given a potentially high-dimensional set of random variables, $Z$. The CRT operates exceptionally well under the assumption that the conditional distribution $X|Z$ is known. However, since this distribution is typically unknown in practice, accurately approximating it becomes crucial. In this paper, we propose using conditional diffusion models (CDMs) to learn the distribution of $X|Z$. Theoretically and empirically, it is shown that CDMs closely approximate the true conditional distribution. Furthermore, CDMs offer a more accurate approximation of $X|Z$ compared to GANs, potentially leading to a CRT that performs better than those based on GANs. To accommodate complex dependency structures, we utilize a computationally efficient classifier-based conditional mutual information (CMI) estimator as our test statistic. The proposed testing procedure performs effectively without requiring assumptions about specific distribution forms or feature dependencies, and is capable of handling mixed-type conditioning sets that include both continuous and discrete variables. Theoretical analysis shows that our proposed test achieves a valid control of the type I error. A series of experiments on synthetic data demonstrates that our new test effectively controls both type-I and type-II errors, even in high dimensional scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 517,561 |
1908.07902 | Energy Management of Airport Service Electric Vehicles to Match
Renewable Generation through Rollout Approach | Traditional diesel-based airport service vehicles are characterized by a heavy-duty, high-usage-frequency nature and a high carbon intensity per vehicle per hour. Transforming these vehicles into electric vehicles would reduce CO2 emissions and potentially save energy costs in the context of rising fuel prices, if a proper energy management of airport service electric vehicles (ASEVs) is performed. To perform such an energy management, this paper proposes a new customized rollout approach, as a near-optimal control method for a new ASEV dynamics model, which models the ASEV states, their transitions over time, and how control decisions affect them. The rollout approach yields a near-optimal control strategy for the ASEVs to transport luggage and to charge batteries, with the objective to minimize the operation cost, which incentivizes the charging of the ASEVs to match renewable generation. Case studies demonstrate that the rollout approach effectively overcomes the "curse of dimensionality". On both typical summer and winter days, the rollout algorithm results in a total cost approximately 10% less than that of the underlying "greedy charging" heuristic, which charges a battery whenever its state of charge is not the maximum. The rollout algorithm is proven to be adaptive towards flight schedule changes at short notice. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 142,412 |
2208.11484 | An End-to-End OCR Framework for Robust Arabic-Handwriting Recognition
using a Novel Transformers-based Model and an Innovative 270 Million-Words
Multi-Font Corpus of Classical Arabic with Diacritics | This research is the second phase in a series of investigations on developing an Optical Character Recognition (OCR) of Arabic historical documents and examining how different modeling procedures interact with the problem. The first research studied the effect of Transformers on our custom-built Arabic dataset. One of the downsides of the first research was the size of the training data, a mere 15000 images from our 30 million images, due to lack of resources. Also, we add an image enhancement layer, time and space optimization, and Post-Correction layer to aid the model in predicting the correct word for the correct context. Notably, we propose an end-to-end text recognition approach using Vision Transformers as an encoder, namely BEIT, and vanilla Transformer as a decoder, eliminating CNNs for feature extraction and reducing the model's complexity. The experiments show that our end-to-end model outperforms convolutional backbones. The model attained a CER of 4.46%. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 314,444 |
2409.11474 | A generalized non-hourglass updated Lagrangian formulation for SPH solid
dynamics | Hourglass modes, characterized by zigzag particle and stress distributions, are a common numerical instability encountered when simulating solid materials with updated Lagrangian smoothed particle hydrodynamics (ULSPH). While recent solutions have effectively addressed this issue in elastic materials using an essentially non-hourglass formulation, extending these solutions to plastic materials with more complex constitutive equations has proven challenging due to the need to express shear forces in the form of a velocity Laplacian. To address this, a generalized non-hourglass formulation is proposed within the ULSPH framework, suitable for both elastic and plastic materials. Specifically, a penalty force is introduced into the momentum equation to resolve the disparity between the linearly predicted and actual velocities of neighboring particle pairs, thereby mitigating the hourglass issue. The stability, convergence, and accuracy of the proposed method are validated through a series of classical elastic and plastic cases, with a dual-criterion time-stepping scheme to improve computational efficiency. The results show that the present method not only matches or even surpasses the performance of the recent essentially non-hourglass formulation in elastic cases but also performs well in plastic scenarios. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 489,170 |
2310.01767 | Differentially Encoded Observation Spaces for Perceptive Reinforcement
Learning | Perceptive deep reinforcement learning (DRL) has led to many recent breakthroughs for complex AI systems leveraging image-based input data. Applications of these results range from super-human level video game agents to dexterous, physically intelligent robots. However, training these perceptive DRL-enabled systems remains incredibly compute and memory intensive, often requiring huge training datasets and large experience replay buffers. This poses a challenge for the next generation of field robots that will need to be able to learn on the edge in order to adapt to their environments. In this paper, we begin to address this issue through differentially encoded observation spaces. By reinterpreting stored image-based observations as a video, we leverage lossless differential video encoding schemes to compress the replay buffer without impacting training performance. We evaluate our approach with three state-of-the-art DRL algorithms and find that differential image encoding reduces the memory footprint by as much as 14.2x and 16.7x across tasks from the Atari 2600 benchmark and the DeepMind Control Suite (DMC) respectively. These savings also enable large-scale perceptive DRL that previously required paging between flash and RAM to be run entirely in RAM, improving the latency of DMC tasks by as much as 32%. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 396,562 |
2407.13974 | Continual Learning for Remote Physiological Measurement: Minimize
Forgetting and Simplify Inference | Remote photoplethysmography (rPPG) has gained significant attention in recent years for its ability to extract physiological signals from facial videos. While existing rPPG measurement methods have shown satisfactory performance in intra-dataset and cross-dataset scenarios, they often overlook the incremental learning scenario, where training data is presented sequentially, resulting in the issue of catastrophic forgetting. Meanwhile, most existing class incremental learning approaches are unsuitable for rPPG measurement. In this paper, we present a novel method named ADDP to tackle continual learning for rPPG measurement. We first employ adapter to efficiently finetune the model on new tasks. Then we design domain prototypes that are more applicable to rPPG signal regression than commonly used class prototypes. Based on these prototypes, we propose a feature augmentation strategy to consolidate the past knowledge and an inference simplification strategy to convert potentially forgotten tasks into familiar ones for the model. To evaluate ADDP and enable fair comparisons, we create the first continual learning protocol for rPPG measurement. Comprehensive experiments demonstrate the effectiveness of our method for rPPG continual learning. Source code is available at \url{https://github.com/MayYoY/rPPGDIL} | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 474,587 |
2410.10756 | Use Random Selection for Now: Investigation of Few-Shot Selection
Strategies in LLM-based Text Augmentation for Classification | The generative large language models (LLMs) are increasingly used for data augmentation tasks, where text samples are paraphrased (or generated anew) and then used for classifier fine-tuning. Existing works on augmentation leverage the few-shot scenarios, where samples are given to LLMs as part of prompts, leading to better augmentations. Yet, the samples are mostly selected randomly and a comprehensive overview of the effects of other (more ``informed'') sample selection strategies is lacking. In this work, we compare sample selection strategies existing in few-shot learning literature and investigate their effects in LLM-based textual augmentation. We evaluate this on in-distribution and out-of-distribution classifier performance. Results indicate that while some ``informed'' selection strategies increase the performance of models, especially for out-of-distribution data, it happens only seldom and with marginal performance increases. Unless further advances are made, a default of random sample selection remains a good option for augmentation practitioners. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,214 |
1609.06261 | Power-Domain Non-Orthogonal Multiple Access (NOMA) in 5G Systems:
Potentials and Challenges | Non-orthogonal multiple access (NOMA) is one of the promising radio access techniques for performance enhancement in next-generation cellular communications. Compared to orthogonal frequency division multiple access (OFDMA), which is a well-known high-capacity orthogonal multiple access (OMA) technique, NOMA offers a set of desirable benefits, including greater spectrum efficiency. There are different types of NOMA techniques, including power-domain and code-domain. This paper primarily focuses on power-domain NOMA that utilizes superposition coding (SC) at the transmitter and successive interference cancellation (SIC) at the receiver. Various researchers have demonstrated that NOMA can be used effectively to meet both network-level and user-experienced data rate requirements of fifth-generation (5G) technologies. From that perspective, this paper comprehensively surveys the recent progress of NOMA in 5G systems, reviewing the state-of-the-art capacity analysis, power allocation strategies, user fairness, and user-pairing schemes in NOMA. In addition, this paper discusses how NOMA performs when it is integrated with various proven wireless communications techniques, such as cooperative communications, multiple input multiple output (MIMO), beamforming, space time coding, and network coding, among others. Furthermore, this paper discusses several important issues on NOMA implementation and provides some avenues for future research. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 61,260 |
1809.00543 | Learning Vision-based Cohesive Flight in Drone Swarms | This paper presents a data-driven approach to learning vision-based collective behavior from a simple flocking algorithm. We simulate a swarm of quadrotor drones and formulate the controller as a regression problem in which we generate 3D velocity commands directly from raw camera images. The dataset is created by simultaneously acquiring omnidirectional images and computing the corresponding control command from the flocking algorithm. We show that a convolutional neural network trained on the visual inputs of the drone can learn not only robust collision avoidance but also coherence of the flock in a sample-efficient manner. The neural controller effectively learns to localize other agents in the visual input, which we show by visualizing the regions with the most influence on the motion of an agent. This weakly supervised saliency map can be computed efficiently and may be used as a prior for subsequent detection and relative localization of other agents. We remove the dependence on sharing positions among flock members by taking only local visual information into account for control. Our work can therefore be seen as the first step towards a fully decentralized, vision-based flock without the need for communication or visual markers to aid detection of other agents. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | true | false | false | false | 106,601 |
1311.1869 | Optimization, Learning, and Games with Predictable Sequences | We provide several applications of Optimistic Mirror Descent, an online learning algorithm based on the idea of predictable sequences. First, we recover the Mirror Prox algorithm for offline optimization, prove an extension to Holder-smooth functions, and apply the results to saddle-point type problems. Next, we prove that a version of Optimistic Mirror Descent (which has a close relation to the Exponential Weights algorithm) can be used by two strongly-uncoupled players in a finite zero-sum matrix game to converge to the minimax equilibrium at the rate of O((log T)/T). This addresses a question of Daskalakis et al 2011. Further, we consider a partial information version of the problem. We then apply the results to convex programming and exhibit a simple algorithm for the approximate Max Flow problem. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 28,261 |
2108.11845 | Consistent Relative Confidence and Label-Free Model Selection for
Convolutional Neural Networks | In this paper, we are concerned with image classification with deep convolutional neural networks (CNNs). We focus on the following question: given a set of candidate CNN models, how to select the right one with the best generalization property for the current task? Current model selection methods all require access to a batch of labeled data for computing a pre-specified performance metric, such as the cross-entropy loss, the classification error rate and the negative log-likelihood. In many practical cases, labels are not available in time as labeling itself is a time-consuming and expensive task. To this end, we propose an approach to CNN model selection using only unlabeled data. We develop this method based on a principle termed consistent relative confidence. Experimental results on benchmark datasets demonstrate the effectiveness and efficiency of our method. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 252,313 |
2109.05375 | From Instantaneous Schedulability to Worst Case Schedulability: A
Significant Moment Approach | The method of significant moment analysis has been employed to derive instantaneous schedulability tests for real-time systems. However, the instantaneous schedulability can only be checked within a finite time window. On the other hand, worst-case schedulability guarantees schedulability of systems for infinite time. This paper derives the classical worst-case schedulability conditions for preemptive periodic systems starting from instantaneous schedulability, hence unifying the two notions of schedulability. The results provide a rigorous justification for the critical time instants being the worst case for scheduling of preemptive periodic systems. The paper also shows that the critical time instants are not the only worst-case moments. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 254,767 |
2208.02070 | Efficient Fine-Tuning of Compressed Language Models with Learners | Fine-tuning BERT-based models is resource-intensive in memory, computation, and time. While many prior works aim to improve inference efficiency via compression techniques, e.g., pruning, these works do not explicitly address the computational challenges of training to downstream tasks. We introduce Learner modules and priming, novel methods for fine-tuning that exploit the overparameterization of pre-trained language models to gain benefits in convergence speed and resource utilization. Learner modules navigate the double bind of 1) training efficiently by fine-tuning a subset of parameters, and 2) training effectively by ensuring quick convergence and high metric scores. Our results on DistilBERT demonstrate that learners perform on par with or surpass the baselines. Learners train 7x fewer parameters than state-of-the-art methods on GLUE. On CoLA, learners fine-tune 20% faster, and have significantly lower resource utilization. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 311,372 |
2104.03842 | RNN Transducer Models For Spoken Language Understanding | We present a comprehensive study on building and adapting RNN transducer (RNN-T) models for spoken language understanding (SLU). These end-to-end (E2E) models are constructed in three practical settings: a case where verbatim transcripts are available, a constrained case where the only available annotations are SLU labels and their values, and a more restrictive case where transcripts are available but not corresponding audio. We show how RNN-T SLU models can be developed starting from pre-trained automatic speech recognition (ASR) systems, followed by an SLU adaptation step. In settings where real audio data is not available, artificially synthesized speech is used to successfully adapt various SLU models. When evaluated on two SLU data sets, the ATIS corpus and a customer call center data set, the proposed models closely track the performance of other E2E models and achieve state-of-the-art results. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 229,192 |
1312.1683 | Face Recognition using Hough Peaks extracted from the significant blocks
of the Gradient Image | This paper proposes a new technique for automatic face recognition using integrated peaks of the Hough transformed significant blocks of the binary gradient image. In this approach, firstly the gradient of an image is calculated and a threshold is set to obtain a binary gradient image, which is less sensitive to noise and illumination changes. Secondly, significant blocks are extracted from the absolute gradient image, to extract pertinent information with the idea of dimension reduction. Finally, the best fitted Hough peaks are extracted from the Hough transformed significant blocks for efficient face recognition. These Hough peaks are then concatenated together and used as features in the classification process. The efficiency of the proposed method is demonstrated by the experiment on 1100 images from the FRAV2D face database, 2200 images from the FERET database, where the images vary in pose, expression, illumination and scale, and 400 images from the ORL face database, where the images slightly vary in pose. Our method has shown 93.3%, 88.5% and 99% recognition accuracy for the FRAV2D, FERET and the ORL database respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 28,876 |
2408.10599 | Vision Calorimeter: Migrating Visual Object Detector to High-energy
Particle Images | In high-energy physics, accurately estimating the kinematic parameters (position and momentum) of anti-neutrons ($\bar{n}$) is essential for exploring the fundamental governing principles. However, this process is particularly challenging when using an electromagnetic calorimeter (EMC) as the energy detector, due to their limited accuracy and efficiency in interacting with $\bar{n}$. To address this issue, we propose Vision Calorimeter (ViC), a data-driven framework which migrates visual object detection techniques to high-energy particle images. To accommodate the unique characteristics of particle images, we introduce the heat-conduction operator (HCO) into both the backbone and the head of the conventional object detector and conduct significant structural improvements. HCO enjoys the advantage of both radial prior and global attention, as it is inspired by physical heat conduction which naturally aligns with the pattern of particle incidence. Implemented via the Discrete Cosine Transform (DCT), HCO extracts frequency-domain features, bridging the distribution gap between the particle images and the natural images on which visual object detectors are pre-trained. Experimental results demonstrate that ViC significantly outperforms traditional approaches, reducing the incident position prediction error by 46.16% (from 17.31$^{\circ}$ to 9.32$^{\circ}$) and providing the first baseline result with an incident momentum regression error of 21.48%. This study underscores ViC's great potential as a general-purpose particle parameter estimator in high-energy physics. Code is available at https://github.com/yuhongtian17/ViC. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,930 |
2209.08801 | Sequence-to-Set Generative Models | In this paper, we propose a sequence-to-set method that can transform any sequence generative model based on maximum likelihood to a set generative model where we can evaluate the utility/probability of any set. An efficient importance sampling algorithm is devised to tackle the computational challenge of learning our sequence-to-set model. We present GRU2Set, which is an instance of our sequence-to-set method and employs the famous GRU model as the sequence generative model. To further obtain permutation invariant representation of sets, we devise the SetNN model which is also an instance of the sequence-to-set model. A direct application of our models is to learn an order/set distribution from a collection of e-commerce orders, which is an essential step in many important operational decisions such as inventory arrangement for fast delivery. Based on the intuition that small-sized sets are usually easier to learn than large sets, we propose a size-bias trick that can help learn better set distributions with respect to the $\ell_1$-distance evaluation metric. Two e-commerce order datasets, TMALL and HKTVMALL, are used to conduct extensive experiments to show the effectiveness of our models. The experimental results demonstrate that our models can learn better set/order distributions from order data than the baselines. Moreover, no matter what model we use, applying the size-bias trick can always improve the quality of the set distribution learned from data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 318,275 |
2206.14672 | Not Cheating on the Turing Test: Towards Grounded Language Learning in
Artificial Intelligence | Recent hype surrounding the increasing sophistication of language processing models has renewed optimism regarding machines achieving a human-like command of natural language. Research in the area of natural language understanding (NLU) in artificial intelligence claims to have been making great strides in this area, however, the lack of conceptual clarity/consistency in how 'understanding' is used in this and other disciplines makes it difficult to discern how close we actually are. In this interdisciplinary research thesis, I integrate insights from cognitive science/psychology, philosophy of mind, and cognitive linguistics, and evaluate it against a critical review of current approaches in NLU to explore the basic requirements--and remaining challenges--for developing artificially intelligent systems with human-like capacities for language use and comprehension. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 305,364 |
2012.12105 | Warped Gaussian Processes in Remote Sensing Parameter Estimation and Causal Inference | This paper introduces warped Gaussian processes (WGP) regression in remote sensing applications. WGP models output observations as a parametric nonlinear transformation of a GP. The parameters of such prior model are then learned via standard maximum likelihood. We show the good performance of the proposed model for the estimation of oceanic chlorophyll content from multispectral data, vegetation parameters (chlorophyll, leaf area index, and fractional vegetation cover) from hyperspectral data, and in the detection of the causal direction in a collection of 28 bivariate geoscience and remote sensing causal problems. The model consistently performs better than the standard GP and the more advanced heteroscedastic GP model, both in terms of accuracy and more sensible confidence intervals. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 212,826 |
2106.10944 | Hard hat wearing detection based on head keypoint localization | In recent years, a lot of attention is paid to deep learning methods in the context of vision-based construction site safety systems, especially regarding personal protective equipment. However, despite all this attention, there is still no reliable way to establish the relationship between workers and their hard hats. To answer this problem a combination of deep learning, object detection and head keypoint localization, with simple rule-based reasoning is proposed in this article. In tests, this solution surpassed the previous methods based on the relative bounding box position of different instances, as well as direct detection of hard hat wearers and non-wearers. The results show that the conjunction of novel deep learning methods with humanly-interpretable rule-based systems can result in a solution that is both reliable and can successfully mimic manual, on-site supervision. This work is the next step in the development of fully autonomous construction site safety systems and shows that there is still room for improvement in this area. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | true | false | false | 242,223 |
2305.06162 | Interpretable multimodal sentiment analysis based on textual modality descriptions by using large-scale language models | Multimodal sentiment analysis is an important area for understanding the user's internal states. Deep learning methods were effective, but the problem of poor interpretability has gradually gained attention. Previous works have attempted to use attention weights or vector distributions to provide interpretability. However, their explanations were not intuitive and can be influenced by different trained models. This study proposed a novel approach to provide interpretability by converting nonverbal modalities into text descriptions and by using large-scale language models for sentiment predictions. This provides an intuitive approach to directly interpret what models depend on with respect to making decisions from input texts, thus significantly improving interpretability. Specifically, we convert descriptions based on two feature patterns for the audio modality and discrete action units for the facial modality. Experimental results on two sentiment analysis tasks demonstrated that the proposed approach maintained, or even improved effectiveness for sentiment analysis compared to baselines using conventional features, with the highest improvement of 2.49% on the F1 score. The results also showed that multimodal descriptions have similar characteristics on fusing modalities as those of conventional fusion methods. The results demonstrated that the proposed approach is interpretable and effective for multimodal sentiment analysis. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | 363,435 |
2303.02131 | Spacetime-Efficient Low-Depth Quantum State Preparation with Applications | We propose a novel deterministic method for preparing arbitrary quantum states. When our protocol is compiled into CNOT and arbitrary single-qubit gates, it prepares an $N$-dimensional state in depth $O(\log(N))$ and spacetime allocation (a metric that accounts for the fact that oftentimes some ancilla qubits need not be active for the entire circuit) $O(N)$, which are both optimal. When compiled into the $\{\mathrm{H,S,T,CNOT}\}$ gate set, we show that it requires asymptotically fewer quantum resources than previous methods. Specifically, it prepares an arbitrary state up to error $\epsilon$ with optimal depth of $O(\log(N) + \log (1/\epsilon))$ and spacetime allocation $O(N\log(\log(N)/\epsilon))$, improving over $O(\log(N)\log(\log (N)/\epsilon))$ and $O(N\log(N/\epsilon))$, respectively. We illustrate how the reduced spacetime allocation of our protocol enables rapid preparation of many disjoint states with only constant-factor ancilla overhead -- $O(N)$ ancilla qubits are reused efficiently to prepare a product state of $w$ $N$-dimensional states in depth $O(w + \log(N))$ rather than $O(w\log(N))$, achieving effectively constant depth per state. We highlight several applications where this ability would be useful, including quantum machine learning, Hamiltonian simulation, and solving linear systems of equations. We provide quantum circuit descriptions of our protocol, detailed pseudocode, and gate-level implementation examples using Braket. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 349,230 |
2409.06961 | Control Pneumatic Soft Bending Actuator with Feedforward Hysteresis Compensation by Pneumatic Physical Reservoir Computing | The nonlinearities of soft robots bring control challenges like hysteresis but also provide them with computational capacities. This paper introduces a fuzzy pneumatic physical reservoir computing (FPRC) model for feedforward hysteresis compensation in motion tracking control of soft actuators. Our method utilizes a pneumatic bending actuator as a physical reservoir with nonlinear computing capacities to control another pneumatic bending actuator. The FPRC model employs a Takagi-Sugeno (T-S) fuzzy logic to process outputs from the physical reservoir. The proposed FPRC model shows equivalent training performance to an Echo State Network (ESN) model, whereas it exhibits better test accuracies with significantly reduced execution time. Experiments validate the FPRC model's effectiveness in controlling the bending motion of a pneumatic soft actuator with open-loop and closed-loop control system setups. The proposed FPRC model's robustness against environmental disturbances has also been experimentally verified. To the authors' knowledge, this is the first implementation of a physical system in the feedforward hysteresis compensation model for controlling soft actuators. This study is expected to advance physical reservoir computing in nonlinear control applications and extend the feedforward hysteresis compensation methods for controlling soft actuators. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 487,335 |
2411.17937 | Spatio-temporal Causal Learning for Streamflow Forecasting | Streamflow plays an essential role in the sustainable planning and management of national water resources. Traditional hydrologic modeling approaches simulate streamflow by establishing connections across multiple physical processes, such as rainfall and runoff. These data, inherently connected both spatially and temporally, possess intrinsic causal relations that can be leveraged for robust and accurate forecasting. Recently, spatio-temporal graph neural networks (STGNNs) have been adopted, excelling in various domains, such as urban traffic management, weather forecasting, and pandemic control, and they also promise advances in streamflow management. However, learning causal relationships directly from vast observational data is theoretically and computationally challenging. In this study, we employ a river flow graph as prior knowledge to facilitate the learning of the causal structure and then use the learned causal graph to predict streamflow at targeted sites. The proposed model, Causal Streamflow Forecasting (CSF) is tested in a real-world study in the Brazos River basin in Texas. Our results demonstrate that our method outperforms regular spatio-temporal graph neural networks and achieves higher computational efficiency compared to traditional simulation methods. By effectively integrating river flow graphs with STGNNs, this research offers a novel approach to streamflow prediction, showcasing the potential of combining advanced neural network techniques with domain-specific knowledge for enhanced performance in hydrologic modeling. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 511,662 |
1706.01443 | Types of Cognition and its Implications for future High-Level Cognitive Machines | This work summarizes part of current knowledge on High-level Cognitive process and its relation with biological hardware. Thus, it is possible to identify some paradoxes which could impact the development of future technologies and artificial intelligence: we may make a High-level Cognitive Machine, sacrificing the principal attribute of a machine, its accuracy. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 74,804 |
1812.08655 | Surrogate-assisted Bayesian inversion for landscape and basin evolution models | The complex and computationally expensive nature of landscape evolution models pose significant challenges in the inference and optimisation of unknown parameters. Bayesian inference provides a methodology for estimation and uncertainty quantification of unknown model parameters. In our previous work, we developed parallel tempering Bayeslands as a framework for parameter estimation and uncertainty quantification for the Badlands landscape evolution model. Parallel tempering Bayeslands features high-performance computing with dozens of processing cores running in parallel to enhance computational efficiency. Although we use parallel computing, the procedure remains computationally challenging since thousands of samples need to be drawn and evaluated. \textcolor{black}{In large-scale landscape and basin evolution problems, a single model evaluation can take from several minutes to hours, and in some instances, even days. Surrogate-assisted optimisation has been used for several computationally expensive engineering problems which motivate its use in optimisation and inference of complex geoscientific models.} The use of surrogate models can speed up parallel tempering Bayeslands by developing computationally inexpensive models to mimic expensive ones. In this paper, we apply surrogate-assisted parallel tempering where that surrogate mimics a landscape evolution model by estimating the likelihood function from the model. \textcolor{black}{We employ a neural network-based surrogate model that learns from the history of samples generated. } The entire framework is developed in a parallel computing infrastructure to take advantage of parallelism. The results show that the proposed methodology is effective in lowering the overall computational cost significantly while retaining the quality of solutions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | true | 117,026 |
1103.1124 | Fluid flow analysis in a rough fracture (type II) using complex networks and lattice Boltzmann method | Complexity of fluid flow in a rough fracture is induced by the complex configurations of opening areas between the fracture planes. In this study, we model fluid flow in an evolvable real rock joint structure, which under certain normal load is sheared. In an experimental study, information about apertures of the rock joint during consecutive 20 mm displacements and fluid flow (permeability) at different pressure heads has been recorded by a laser scanner. Our aim in this study is to simulate the fluid flow in the mentioned complex geometries using the lattice Boltzmann method (LBM), while the characteristics of the aperture field will be compared with the modeled fluid flow permeability. To characterize the aperture, we use a new concept in graph theory, namely complex networks and motif analysis of the corresponding networks. In this approach, the similar aperture profile along the fluid flow direction is mapped into a network space. The modeled permeability using the LBM shows good correlation with the experimentally measured values. Furthermore, the two main characters of the obtained networks, i.e., characteristic length and number of edges, show the same evolutionary trend as the modeled permeability values. Analysis of motifs through the obtained networks showed the most transient sub-graphs are much more frequent in residual stages. This coincides with nearly stable fluid flow and high permeability values. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 9,491 |