Schema (one record per paper):

  id                 string, 9 to 16 chars (arXiv-style identifier)
  title              string, 4 to 278 chars
  abstract           string, 3 to 4.08k chars
  category flags     18 bool columns (2 classes each), in this order:
                     cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL,
                     cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
  __index_level_0__  int64, 0 to 541k
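Given the column order above, the 18 boolean flags of each record can be decoded back into a list of category names. The sketch below is a minimal illustration, assuming the flags arrive as a Python list in the same order as the columns; `decode_labels` is a hypothetical helper, not part of any released tooling.

```python
# Decode the 18 one-hot-style boolean label columns of a record into
# the names of the active arXiv categories, following the column order
# given in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR",
    "cs.LG", "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV",
    "cs.CR", "cs.CY", "cs.MA", "cs.NE", "cs.DB", "Other",
]

def decode_labels(flags):
    """Map a row of 18 booleans to the list of active category names."""
    if len(flags) != len(LABEL_COLUMNS):
        raise ValueError(f"expected {len(LABEL_COLUMNS)} flags, got {len(flags)}")
    return [name for name, flag in zip(LABEL_COLUMNS, flags) if flag]

# Example: the flag pattern of the first record below (2306.05779),
# where only cs.LG (position 7) and cs.CL (position 9) are true.
flags = [False] * 18
flags[6] = True   # cs.LG
flags[8] = True   # cs.CL
print(decode_labels(flags))  # ['cs.LG', 'cs.CL']
```

Because the flags are multi-label (several categories can be true at once), the decoder returns a list rather than a single class.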
id: 2306.05779
title: Transformer-based Time-to-Event Prediction for Chronic Kidney Disease Deterioration
abstract:
Deep-learning techniques, particularly the transformer model, have shown great potential in enhancing the prediction performance of longitudinal health records. While previous methods have mainly focused on fixed-time risk prediction, time-to-event prediction (also known as survival analysis) is often more appropriate for clinical scenarios. Here, we present a novel deep-learning architecture we named STRAFE, a generalizable survival analysis transformer-based architecture for electronic health records. The performance of STRAFE was evaluated using a real-world claim dataset of over 130,000 individuals with stage 3 chronic kidney disease (CKD) and was found to outperform other time-to-event prediction algorithms in predicting the exact time of deterioration to stage 5. Additionally, STRAFE was found to outperform binary outcome algorithms in predicting fixed-time risk, possibly due to its ability to train on censored data. We show that STRAFE predictions can improve the positive predictive value of high-risk patients by 3-fold, demonstrating possible usage to improve targeting for intervention programs. Finally, we suggest a novel visualization approach to predictions on a per-patient basis. In conclusion, STRAFE is a cutting-edge time-to-event prediction algorithm that has the potential to enhance risk predictions in large claims datasets.
categories: cs.LG, cs.CL
__index_level_0__: 372,332

id: 2109.08027
title: Robust Stability Analysis of an Uncertain Aircraft Model with Scalar Parametric Uncertainty
abstract:
A robust controller is specified, and the stability bounds of the uncertain closed-loop system are determined using the small gain, circle, positive real, and Popov criteria. A graphical approach is employed in order to demonstrate the ease with which the above robustness tests can be carried out on a problem of practical interest. A significant improvement in stability bounds is observed as the analysis moves from the small gain test to the circle, positive real, and Popov tests. In particular, small gain analysis results in the most conservative robust stability bounds, while Popov analysis yields significantly less conservative bounds. This is because traditional small gain type tests allow the uncertainty to be arbitrarily time-varying, whereas Popov analysis restricts the uncertainty to be constant, real parametric uncertainty. Therefore, the results reported here indicate the conservatism associated with small gain analysis, and the effectiveness of Popov analysis, in gauging robust stability in the presence of constant, real parametric uncertainty.
categories: cs.SY
__index_level_0__: 255,745

id: 1408.5634
title: An application of topological graph clustering to protein function prediction
abstract:
We use a semisupervised learning algorithm based on a topological data analysis approach to assign functional categories to yeast proteins using similarity graphs. This new approach to analyzing biological networks yields results that are as good as or better than state of the art existing approaches.
categories: cs.CE, cs.LG
__index_level_0__: 35,568

id: 1105.0673
title: Mark My Words! Linguistic Style Accommodation in Social Media
abstract:
The psycholinguistic theory of communication accommodation accounts for the general observation that participants in conversations tend to converge to one another's communicative behavior: they coordinate in a variety of dimensions including choice of words, syntax, utterance length, pitch and gestures. In its almost forty years of existence, this theory has been empirically supported exclusively through small-scale or controlled laboratory studies. Here we address this phenomenon in the context of Twitter conversations. Undoubtedly, this setting is unlike any other in which accommodation was observed and, thus, challenging to the theory. Its novelty comes not only from its size, but also from the non-real-time nature of conversations, from the 140 character length restriction, from the wide variety of social relation types, and from a design that was initially not geared towards conversation at all. Given such constraints, it is not clear a priori whether accommodation is robust enough to occur in this new environment. To investigate this, we develop a probabilistic framework that can model accommodation and measure its effects. We apply it to a large Twitter conversational dataset specifically developed for this task. This is the first time the hypothesis of linguistic style accommodation has been examined (and verified) in a large scale, real world setting. Furthermore, when investigating concepts such as stylistic influence and symmetry of accommodation, we discover a complexity of the phenomenon which was never observed before. We also explore the potential relation between stylistic influence and network features commonly associated with social status.
categories: cs.SI, cs.CL
__index_level_0__: 10,236

id: 2410.01821
title: NFDIcore 2.0: A BFO-Compliant Ontology for Multi-Domain Research Infrastructures
abstract:
This paper presents NFDIcore 2.0, an ontology compliant with the Basic Formal Ontology (BFO) designed to represent the diverse research communities of the National Research Data Infrastructure (NFDI) in Germany. NFDIcore ensures the interoperability across various research disciplines, thereby facilitating cross-domain research. Each domain's individual requirements are addressed through specific ontology modules. This paper discusses lessons learned during the ontology development and mapping process, supported by practical validation through use cases in diverse research domains. The originality of NFDIcore lies in its adherence to BFO, the use of SWRL rules for efficient knowledge discovery, and its modular, extensible design tailored to meet the needs of heterogeneous research domains.
categories: cs.AI, Other
__index_level_0__: 493,973

id: 2310.04081
title: A Deeply Supervised Semantic Segmentation Method Based on GAN
abstract:
In recent years, the field of intelligent transportation has witnessed rapid advancements, driven by the increasing demand for automation and efficiency in transportation systems. Traffic safety, one of the tasks integral to intelligent transport systems, requires accurately identifying and locating various road elements, such as road cracks, lanes, and traffic signs. Semantic segmentation plays a pivotal role in achieving this task, as it enables the partition of images into meaningful regions with accurate boundaries. In this study, we propose an improved semantic segmentation model that combines the strengths of adversarial learning with state-of-the-art semantic segmentation techniques. The proposed model integrates a generative adversarial network (GAN) framework into the traditional semantic segmentation model, enhancing the model's performance in capturing complex and subtle features in transportation images. The effectiveness of our approach is demonstrated by a significant boost in performance on the road crack dataset compared to the existing methods, i.e., SEGAN. This improvement can be attributed to the synergistic effect of adversarial learning and semantic segmentation, which leads to a more refined and accurate representation of road structures and conditions. The enhanced model not only contributes to better detection of road cracks but also to a wide range of applications in intelligent transportation, such as traffic sign recognition, vehicle detection, and lane segmentation.
categories: cs.CE, cs.AI, cs.CV
__index_level_0__: 397,530

id: 1909.09011
title: Optimal Policies of Advanced Sleep Modes for Energy-Efficient 5G networks
abstract:
We study in this paper the optimal control strategy for Advanced Sleep Modes (ASM) in 5G networks. ASM correspond to different levels of sleep modes ranging from deactivation of some components of the base station for several microseconds to switching off almost all of them for one second or more. ASMs are made possible in 5G networks thanks to the definition of the so-called lean carrier radio access, which allows for configurable signaling periodicities. We model such a system using Markov Decision Processes (MDP) and find the optimal sleep policy in terms of a trade-off between saved power consumption versus additional incurred delay for user traffic, which has to wait for the network components to wake up and serve it. Eventually, for the system not to oscillate between sleep levels, we add a switching component in the cost function and show its impact on the energy reduction versus delay trade-off.
categories: cs.IT
__index_level_0__: 146,123

id: 1907.08646
title: Fair quantile regression
abstract:
Quantile regression is a tool for learning conditional distributions. In this paper we study quantile regression in the setting where a protected attribute is unavailable when fitting the model. This can lead to "unfair" quantile estimators for which the effective quantiles are very different for the subpopulations defined by the protected attribute. We propose a procedure for adjusting the estimator on a heldout sample where the protected attribute is available. The main result of the paper is an empirical process analysis showing that the adjustment leads to a fair estimator for which the target quantiles are brought into balance, in a statistical sense that we call $\sqrt{n}$-fairness. We illustrate the ideas and adjustment procedure on a dataset of 200,000 live births, where the objective is to characterize the dependence of the birth weights of the babies on demographic attributes of the birth mother; the protected attribute is the mother's race.
categories: cs.LG
__index_level_0__: 139,148

id: 2412.09378
title: From Bench to Bedside: A Review of Clinical Trials in Drug Discovery and Development
abstract:
Clinical trials are an indispensable part of the drug development process, bridging the gap between basic research and clinical application. During the development of new drugs, clinical trials are used not only to evaluate the safety and efficacy of the drug but also to explore its dosage, treatment regimens, and potential side effects. This review discusses the various stages of clinical trials, including Phase I (safety assessment), Phase II (preliminary efficacy evaluation), Phase III (large-scale validation), and Phase IV (post-marketing surveillance), highlighting the characteristics of each phase and their interrelationships. Additionally, the paper addresses the major challenges encountered in clinical trials, such as ethical issues, subject recruitment difficulties, diversity and representativeness concerns, and proposes strategies for overcoming these challenges. With the advancement of technology, innovative technologies such as artificial intelligence, big data, and digitalization are gradually transforming clinical trial design and implementation, improving trial efficiency and data quality. The article also looks forward to the future of clinical trials, particularly the impact of emerging therapies such as gene therapy and immunotherapy on trial design, as well as the importance of regulatory reforms and global collaboration. In conclusion, the core role of clinical trials in drug development will continue to drive the progress of innovative drug development and clinical treatment.
categories: cs.CL, cs.CY
__index_level_0__: 516,456

id: 2011.09906
title: Towards Learning Controllable Representations of Physical Systems
abstract:
Learned representations of dynamical systems reduce dimensionality, potentially supporting downstream reinforcement learning (RL). However, no established methods predict a representation's suitability for control and evaluation is largely done via downstream RL performance, slowing representation design. Towards a principled evaluation of representations for control, we consider the relationship between the true state and the corresponding representations, proposing that ideally each representation corresponds to a unique true state. This motivates two metrics: temporal smoothness and high mutual information between true state/representation. These metrics are related to established representation objectives, and studied on Lagrangian systems where true state, information requirements, and statistical properties of the state can be formalized for a broad class of systems. These metrics are shown to predict reinforcement learning performance in a simulated peg-in-hole task when comparing variants of autoencoder-based representations.
categories: cs.LG, cs.RO, cs.SY
__index_level_0__: 207,358

id: 2408.08507
title: More basis reduction for linear codes: backward reduction, BKZ, slide reduction, and more
abstract:
We expand on recent exciting work of Debris-Alazard, Ducas, and van Woerden [Transactions on Information Theory, 2022], which introduced the notion of basis reduction for codes, in analogy with the extremely successful paradigm of basis reduction for lattices. We generalize DDvW's LLL algorithm and size-reduction algorithm from codes over $\mathbb{F}_2$ to codes over $\mathbb{F}_q$, and we further develop the theory of proper bases. We then show how to instantiate for codes the BKZ and slide-reduction algorithms, which are the two most important generalizations of the LLL algorithm for lattices. Perhaps most importantly, we show a new and very efficient basis-reduction algorithm for codes, called full backward reduction. This algorithm is quite specific to codes and seems to have no analogue in the lattice setting. We prove that this algorithm finds vectors as short as LLL does in the worst case (i.e., within the Griesmer bound) and does so in less time. We also provide both heuristic and empirical evidence that it outperforms LLL in practice, and we give a variant of the algorithm that provably outperforms LLL (in some sense) for random codes. Finally, we explore the promise and limitations of basis reduction for codes. In particular, we show upper and lower bounds on how "good" of a basis a code can have, and we show two additional illustrative algorithms that demonstrate some of the promise and the limitations of basis reduction for codes.
categories: cs.IT, Other
__index_level_0__: 481,018

id: 2402.05535
title: Batch-Schedule-Execute: On Optimizing Concurrent Deterministic Scheduling for Blockchains (Extended Version)
abstract:
Executing smart contracts is a compute and storage-intensive task, which currently dominates modern blockchain's performance. Given that computers are becoming increasingly multicore, concurrency is an attractive approach to improve programs' execution runtime. A unique challenge of blockchains is that all replicas (miners or validators) must execute all smart contracts in the same logical order to maintain the semantics of State Machine Replication (SMR). In this work, we study the maximal level of parallelism attainable when focusing on the conflict graph between transactions packaged in the same block. This exposes a performance vulnerability that block creators may exploit against existing blockchain concurrency solutions, which rely on a total ordering phase for maintaining consistency amongst all replicas. To facilitate the formal aspects of our study, we develop a novel generic framework for Active State Machine Replication (ASMR) that is strictly serializable. We introduce the concept of graph scheduling and the definition of the minimal latency scheduling problem, which we prove to be NP-hard. We show that the restricted version of this problem for homogeneous transactions is equivalent to the classic Graph Vertex Coloring Problem, yet show that the heterogeneous case is more complex. We discuss the practical implications of these results.
categories: cs.DB, Other
__index_level_0__: 427,894

id: 2010.05426
title: Throughput Analysis of Small Cell Networks under D-TDD and FFR
abstract:
Dynamic time-division duplex (D-TDD) has emerged as an effective solution to accommodate the unaligned downlink and uplink traffic in small cell networks. However, the flexibility of traffic configuration also introduces additional inter-cell interference. In this letter, we study the effectiveness of applying fractional frequency reuse (FFR) as an interference coordination technique for D-TDD small cell networks. We derive the analytical expressions of downlink and uplink mean packet throughput (MPT), then study a network parameter optimization problem to maximize MPT while guaranteeing each user's throughput. Numerical results corroborate the benefits of the proposed FFR-based D-TDD in terms of improving throughput.
categories: cs.IT
__index_level_0__: 200,123

id: 2204.04088
title: Stochastic Gradient-based Fast Distributed Multi-Energy Management for an Industrial Park with Temporally-Coupled Constraints
abstract:
Contemporary industrial parks are challenged by the growing concerns about high cost and low efficiency of energy supply. Moreover, in the case of uncertain supply/demand, how to mobilize delay-tolerant elastic loads and compensate real-time inelastic loads to match multi-energy generation/storage and minimize energy cost is a key issue. Since energy management can hardly be implemented offline without knowing the statistical information of random variables, this paper presents a systematic online energy cost minimization framework to fulfill the complementary utilization of multi-energy with time-varying generation, demand and price. Specifically, to satisfy charging/discharging constraints due to storage and achieve short-term energy balancing, a fast distributed algorithm based on stochastic gradient with two-timescale implementation is proposed to ensure online implementation. To reduce the peak loads, an incentive mechanism is implemented by estimating users' willingness to shift. Analytical results on parameter setting are also given to guarantee feasibility and optimality of the proposed design. Numerical results show that when the bid-ask spread of electricity is small enough, the proposed algorithm can achieve the close-to-optimal cost asymptotically.
categories: cs.SY
__index_level_0__: 290,534

id: 2005.08230
title: Graph Density-Aware Losses for Novel Compositions in Scene Graph Generation
abstract:
Scene graph generation (SGG) aims to predict graph-structured descriptions of input images, in the form of objects and relationships between them. This task is becoming increasingly useful for progress at the interface of vision and language. Here, it is important - yet challenging - to perform well on novel (zero-shot) or rare (few-shot) compositions of objects and relationships. In this paper, we identify two key issues that limit such generalization. Firstly, we show that the standard loss used in this task is unintentionally a function of scene graph density. This leads to the neglect of individual edges in large sparse graphs during training, even though these contain diverse few-shot examples that are important for generalization. Secondly, the frequency of relationships can create a strong bias in this task, such that a blind model predicting the most frequent relationship achieves good performance. Consequently, some state-of-the-art models exploit this bias to improve results. We show that such models can suffer the most in their ability to generalize to rare compositions, evaluating two different models on the Visual Genome dataset and its more recent, improved version, GQA. To address these issues, we introduce a density-normalized edge loss, which provides more than a two-fold improvement in certain generalization metrics. Compared to other works in this direction, our enhancements require only a few lines of code and no added computational cost. We also highlight the difficulty of accurately evaluating models using existing metrics, especially on zero/few shots, and introduce a novel weighted metric.
categories: cs.LG, cs.CV
__index_level_0__: 177,561

id: 2405.00518
title: Graph-Based Multivariate Multiscale Dispersion Entropy: Efficient Implementation and Applications to Real-World Network Data
abstract:
We introduce Multivariate Multiscale Graph-based Dispersion Entropy (mvDEG), a novel, computationally efficient method for analyzing multivariate time series data in graph and complex network frameworks, and demonstrate its application in real-world data. mvDEG effectively combines temporal dynamics with topological relationships, offering enhanced analysis compared to traditional nonlinear entropy methods. Its efficacy is established through testing on synthetic signals, such as uncorrelated and correlated noise, showcasing its adeptness in discerning various levels of dependency and complexity. The robustness of mvDEG is further validated with real-world datasets, effectively differentiating various two-phase flow regimes and capturing distinct dynamics in weather data analysis. An important advancement of mvDEG is its computational efficiency. Our optimized algorithm displays a computational time that grows linearly with the number of vertices or nodes, in contrast to the exponential growth observed in classical methods. This efficiency is achieved through refined matrix power calculations that exploit matrix and Kronecker product properties, making our method faster than the state of the art. The significant acceleration in computational time positions mvDEG as a transformative tool for extensive and real-time applications, setting a new benchmark in the analysis of time series recorded at distributed locations and opening avenues for innovative applications.
categories: cs.CE
__index_level_0__: 450,953

id: 2501.07849
title: Unveiling Provider Bias in Large Language Models for Code Generation
abstract:
Large Language Models (LLMs) have emerged as the new recommendation engines, outperforming traditional methods in both capability and scope, particularly in code generation applications. Our research reveals a novel provider bias in LLMs, namely without explicit input prompts, these models show systematic preferences for services from specific providers in their recommendations (e.g., favoring Google Cloud over Microsoft Azure). This bias holds significant implications for market dynamics and societal equilibrium, potentially promoting digital monopolies. It may also deceive users and violate their expectations, leading to various consequences. This paper presents the first comprehensive empirical study of provider bias in LLM code generation. We develop a systematic methodology encompassing an automated pipeline for dataset generation, incorporating 6 distinct coding task categories and 30 real-world application scenarios. Our analysis encompasses over 600,000 LLM-generated responses across seven state-of-the-art models, utilizing approximately 500 million tokens (equivalent to $5,000+ in computational costs). The study evaluates both the generated code snippets and their embedded service provider selections to quantify provider bias. Additionally, we conduct a comparative analysis of seven debiasing prompting techniques to assess their efficacy in mitigating these biases. Our findings demonstrate that LLMs exhibit significant provider preferences, predominantly favoring services from Google and Amazon, and can autonomously modify input code to incorporate their preferred providers without users' requests. Notably, we observe discrepancies between providers recommended in conversational contexts versus those implemented in generated code. The complete dataset and analysis results are available in our repository.
categories: cs.AI, cs.CR, Other
__index_level_0__: 524,532

id: 2501.02441
title: A Statistical Hypothesis Testing Framework for Data Misappropriation Detection in Large Language Models
abstract:
Large Language Models (LLMs) have rapidly gained enormous popularity in recent years. However, the training of LLMs has raised significant privacy and legal concerns, particularly regarding the inclusion of copyrighted materials in their training data without proper attribution or licensing, which falls under the broader issue of data misappropriation. In this article, we focus on a specific problem of data misappropriation detection, namely, to determine whether a given LLM has incorporated data generated by another LLM. To address this issue, we propose embedding watermarks into the copyrighted training data and formulating the detection of data misappropriation as a hypothesis testing problem. We develop a general statistical testing framework, construct a pivotal statistic, determine the optimal rejection threshold, and explicitly control the type I and type II errors. Furthermore, we establish the asymptotic optimality properties of the proposed tests, and demonstrate their empirical effectiveness through intensive numerical experiments.
categories: cs.AI, cs.LG, cs.CL, cs.CR
__index_level_0__: 522,485

id: 2306.04640
title: ModuleFormer: Modularity Emerges from Mixture-of-Experts
abstract:
Large Language Models (LLMs) have achieved remarkable results. However, existing models are expensive to train and deploy, and it is also difficult to expand their knowledge beyond pre-training data without forgetting previous knowledge. This paper proposes a new neural network architecture, ModuleFormer, that leverages modularity to improve the efficiency and flexibility of large language models. ModuleFormer is based on the Sparse Mixture of Experts (SMoE). Unlike the previous SMoE-based modular language model, which requires domain-labeled data to learn domain-specific experts, ModuleFormer can induce modularity from uncurated data with its new load balancing and concentration losses. ModuleFormer is a modular architecture that includes two different types of modules: new stick-breaking attention heads and feedforward experts. Different modules are sparsely activated conditioned on the input token during training and inference. In our experiment, we found that the modular architecture enables three important abilities for large pre-trained language models: 1) Efficiency, since ModuleFormer only activates a subset of its modules for each input token, thus it could achieve the same performance as dense LLMs with more than two times throughput; 2) Extendability, ModuleFormer is more immune to catastrophic forgetting than dense LLMs and can be easily extended with new modules to learn new knowledge that is not included in the training data; 3) Specialisation, finetuning ModuleFormer could specialize a subset of modules to the finetuning task and the task-unrelated modules could be easily pruned for a lightweight deployment.
categories: cs.AI, cs.LG, cs.CL
__index_level_0__: 371,831

id: 2305.15007
title: Quaternion-based non-singular terminal sliding mode control for a satellite-mounted space manipulator
abstract:
In this paper, a robust control solution for a satellite equipped with a robotic manipulator is presented. First, the dynamic model of the system is derived based on quaternions to describe the evolution of the attitude of the base satellite. Then, a non-singular terminal sliding mode controller that employs quaternions for attitude control, is proposed for concurrently handling all the degrees of freedom of the space manipulator. Moreover, an additional adaptive term is embedded in the controller to estimate the upper bounds of disturbances and uncertainties. The result is a resilient solution able to withstand unmodelled dynamics and interactions. Lyapunov theory is used to prove the stability of the controller and numerical simulations allow assessing performance and fuel efficiency.
categories: cs.SY
__index_level_0__: 367,390

id: 1911.03955
title: Distributed Recursive Filtering for Spatially Interconnected Systems with Randomly Occurred Missing Measurements
abstract:
This paper proposes a distributed filter for spatially interconnected systems (SISs) that considers missing measurements in the sensors of sub-systems. An SIS is established by many similar sub-systems that directly interact or communicate with connective neighbors. Although the interactions are simple and tractable, the overall SIS can perform rich and complex behaviors. In actual projects, sensors of sub-systems in a sensor network may break down sometimes, which causes parts of the measurements to become unavailable unexpectedly. In this work, the distributed characteristics of SISs are described by the Andrea model, and the losses of measurements are assumed to occur with known probabilities. Experimental results confirm that this filtering method can be effectively employed for the state estimation of SISs when missing measurements occur.
categories: cs.RO, cs.SY
__index_level_0__: 152,842

id: 2210.10781
title: Generalization Properties of Decision Trees on Real-valued and Categorical Features
abstract:
We revisit binary decision trees from the perspective of partitions of the data. We introduce the notion of partitioning function, and we relate it to the growth function and to the VC dimension. We consider three types of features: real-valued, categorical ordinal and categorical nominal, with different split rules for each. For each feature type, we upper bound the partitioning function of the class of decision stumps before extending the bounds to the class of general decision tree (of any fixed structure) using a recursive approach. Using these new results, we are able to find the exact VC dimension of decision stumps on examples of $\ell$ real-valued features, which is given by the largest integer $d$ such that $2\ell \ge \binom{d}{\lfloor\frac{d}{2}\rfloor}$. Furthermore, we show that the VC dimension of a binary tree structure with $L_T$ leaves on examples of $\ell$ real-valued features is in $O(L_T \log(L_T\ell))$. Finally, we elaborate a pruning algorithm based on these results that performs better than the cost-complexity and reduced-error pruning algorithms on a number of data sets, with the advantage that no cross-validation is required.
categories: cs.LG
__index_level_0__: 325,063

id: 1507.07984
title: A constrained optimization perspective on actor critic algorithms and application to network routing
abstract:
We propose a novel actor-critic algorithm with guaranteed convergence to an optimal policy for a discounted reward Markov decision process. The actor incorporates a descent direction that is motivated by the solution of a certain non-linear optimization problem. We also discuss an extension to incorporate function approximation and demonstrate the practicality of our algorithms on a network routing application.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
45,520
1904.08626
Ontology-based Design of Experiments on Big Data Solutions
Big data solutions are designed to cope with data of huge Volume and wide Variety, that need to be ingested at high Velocity and have potential Veracity issues, challenging characteristics that are usually referred to as the "4Vs of Big Data". In order to evaluate possibly complex big data solutions, stress tests require to assess a large number of combinations of sub-components jointly with the possible big data variations. A formalization of the Design of Experiments (DoE) on big data solutions is aimed at ensuring the reproducibility of the experiments, facilitating their partitioning in sub-experiments and guaranteeing the consistency of their outcomes in a global assessment. In this paper, an ontology-based approach is proposed to support the evaluation of a big data system in two ways. Firstly, the approach formalizes a decomposition and recombination of the big data solution, allowing for the aggregation of component evaluation results at inter-component level. Secondly, existing work on DoE is translated into an ontology for supporting the selection of experiments. The proposed ontology-based approach offers the possibility to combine knowledge from the evaluation domain and the application domain. It exploits domain and inter-domain specific restrictions on the factor combinations in order to reduce the number of experiments. Contrary to existing approaches, the proposed use of ontologies is not limited to the assertional description and exploitation of past experiments but offers richer terminological descriptions for the development of a DoE from scratch. As an application example, a maritime big data solution to the problem of detecting and predicting vessel suspicious behaviour through mobility analysis is selected. The article is concluded with a sketch of future works.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
128,131
2306.16551
Analysis of LiDAR Configurations on Off-road Semantic Segmentation Performance
This paper investigates the impact of LiDAR configuration shifts on the performance of 3D LiDAR point cloud semantic segmentation models, a topic not extensively studied before. We explore the effect of using different LiDAR channels when training and testing a 3D LiDAR point cloud semantic segmentation model, utilizing Cylinder3D for the experiments. A Cylinder3D model is trained and tested on simulated 3D LiDAR point cloud datasets created using the Mississippi State University Autonomous Vehicle Simulator (MAVS) and 32, 64 channel 3D LiDAR point clouds of the RELLIS-3D dataset collected in a real-world off-road environment. Our experimental results demonstrate that sensor and spatial domain shifts significantly impact the performance of LiDAR-based semantic segmentation models. In the absence of spatial domain changes between training and testing, models trained and tested on the same sensor type generally exhibited better performance. Moreover, higher-resolution sensors showed improved performance compared to lower-resolution ones. However, results varied when spatial domain changes were present. In some cases, the advantage of a sensor's higher resolution led to better performance both with and without sensor domain shifts. In other instances, the higher resolution resulted in overfitting within a specific domain, causing a lack of generalization capability and decreased performance when tested on data with different sensor configurations.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
376,394
1904.10158
Decision Making for Autonomous Vehicles at Unsignalized Intersection in Presence of Malicious Vehicles
In this paper, we investigate the decision making of autonomous vehicles at an unsignalized intersection in the presence of malicious vehicles, which are vehicles that do not respect the law by not following the proper rules of the right of way. Each vehicle computes its control input as a Nash equilibrium of a game determined by the priority order based on its own belief: each non-malicious vehicle bases its order on the law, while a malicious one considers itself as having priority. To illustrate our method, we provide numerical simulations with different scenarios given by different cases of malicious vehicles.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
128,568
1610.09345
Fault Detection in IEEE 14-Bus Power System with DG Penetration Using Wavelet Transform
Wavelet transform is proposed in this paper for the detection of islanding and fault disturbances in a distributed generation (DG) based power system. An IEEE 14-bus system with DG penetration is considered for the detection of disturbances under different operating conditions. The power system is a hybrid combination of photovoltaic and wind energy systems connected to different buses with different levels of penetration. The voltage signal is retrieved at the point of common coupling (PCC) and processed through wavelet transform to detect the disturbances. Further, energy and standard deviation (STD) as performance indices are evaluated and compared with a suitable threshold in order to analyze a disturbance condition. Again, a comparative analysis between the existing and proposed detection methods is studied to prove the better performance of wavelet transform.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
63,043
1502.06434
ANN Model to Predict Stock Prices at Stock Exchange Markets
Stock exchanges are considered major players in the financial sectors of many countries. Most stockbrokers, who execute stock trades, use technical, fundamental or time series analysis in trying to predict stock prices, so as to advise clients. However, these strategies do not usually guarantee good returns because they guide on trends and not the most likely price. It is therefore necessary to explore improved methods of prediction. The research proposes the use of an Artificial Neural Network, a feedforward multi-layer perceptron with error backpropagation, and develops a model of configuration 5:21:21:1 with 80% training data in 130,000 cycles. The research develops a prototype and tests it on 2008-2012 data from stock markets, e.g. the Nairobi Securities Exchange and the New York Stock Exchange, where prediction results show a MAPE of between 0.71% and 2.77%. Validation done with Encog and Neuroph realized comparable results. The model is thus capable of prediction on typical stock markets.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
40,491
2407.03277
Evaluating Automatic Metrics with Incremental Machine Translation Systems
We introduce a dataset comprising commercial machine translations, gathered weekly over six years across 12 translation directions. Since human A/B testing is commonly used, we assume commercial systems improve over time, which enables us to evaluate machine translation (MT) metrics based on their preference for more recent translations. Our study not only confirms several prior findings, such as the advantage of neural metrics over non-neural ones, but also explores the debated issue of how MT quality affects metric reliability--an investigation that smaller datasets in previous research could not sufficiently explore. Overall, our research demonstrates the dataset's value as a testbed for metric evaluation. We release our code at https://github.com/gjwubyron/Evo
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
470,098
2205.07999
An Exponentially Increasing Step-size for Parameter Estimation in Statistical Models
Using gradient descent (GD) with fixed or decaying step-size is a standard practice in unconstrained optimization problems. However, when the loss function is only locally convex, such a step-size schedule artificially slows GD down as it cannot explore the flat curvature of the loss function. To overcome that issue, we propose to exponentially increase the step-size of the GD algorithm. Under homogeneous assumptions on the loss function, we demonstrate that the iterates of the proposed \emph{exponential step size gradient descent} (EGD) algorithm converge linearly to the optimal solution. Leveraging that optimization insight, we then consider using the EGD algorithm for solving parameter estimation under both regular and non-regular statistical models whose loss function becomes locally convex when the sample size goes to infinity. We demonstrate that the EGD iterates reach the final statistical radius within the true parameter after a logarithmic number of iterations, which is in stark contrast to a \emph{polynomial} number of iterations of the GD algorithm in non-regular statistical models. Therefore, the total computational complexity of the EGD algorithm is \emph{optimal} and exponentially cheaper than that of the GD for solving parameter estimation in non-regular statistical models while being comparable to that of the GD in regular statistical settings. To the best of our knowledge, it resolves a long-standing gap between statistical and algorithmic computational complexities of parameter estimation in non-regular statistical models. Finally, we provide targeted applications of the general theory to several classes of statistical models, including generalized linear models with polynomial link functions and location Gaussian mixture models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
296,784
1810.10065
Statistical mechanics of low-rank tensor decomposition
Often, large, high dimensional datasets collected across multiple modalities can be organized as a higher order tensor. Low-rank tensor decomposition then arises as a powerful and widely used tool to discover simple low dimensional structures underlying such data. However, we currently lack a theoretical understanding of the algorithmic behavior of low-rank tensor decompositions. We derive Bayesian approximate message passing (AMP) algorithms for recovering arbitrarily shaped low-rank tensors buried within noise, and we employ dynamic mean field theory to precisely characterize their performance. Our theory reveals the existence of phase transitions between easy, hard and impossible inference regimes, and displays an excellent match with simulations. Moreover, it reveals several qualitative surprises compared to the behavior of symmetric, cubic tensor decomposition. Finally, we compare our AMP algorithm to the most commonly used algorithm, alternating least squares (ALS), and demonstrate that AMP significantly outperforms ALS in the presence of noise.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
111,186
2111.02298
STC speaker recognition systems for the NIST SRE 2021
This paper presents a description of STC Ltd. systems submitted to the NIST 2021 Speaker Recognition Evaluation for both fixed and open training conditions. These systems consist of a number of diverse subsystems based on using deep neural networks as feature extractors. During the NIST 2021 SRE challenge we focused on the training of state-of-the-art deep speaker embedding extractors like ResNets and ECAPA networks by using additive angular margin based loss functions. Additionally, inspired by the recent success of the wav2vec 2.0 features in automatic speech recognition, we explored the effectiveness of this approach for the speaker verification field. According to our observations, the fine-tuning of the pretrained large wav2vec 2.0 model provides our best performing systems for the open track condition. Our experiments with wav2vec 2.0 based extractors for the fixed condition showed that unsupervised autoregressive pretraining with Contrastive Predictive Coding loss opens the door to training powerful transformer-based extractors from raw speech signals. For the video modality we developed our best solution with the RetinaFace face detector and a deep ResNet face embedding extractor trained on large face image datasets. The final results for primary systems were obtained by different configurations of subsystem fusion on the score level followed by score calibration.
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
264,826
2307.14539
Jailbreak in pieces: Compositional Adversarial Attacks on Multi-Modal Language Models
We introduce new jailbreak attacks on vision language models (VLMs), which use aligned LLMs and are resilient to text-only jailbreak attacks. Specifically, we develop cross-modality attacks on alignment where we pair adversarial images going through the vision encoder with textual prompts to break the alignment of the language model. Our attacks employ a novel compositional strategy that combines an image, adversarially targeted towards toxic embeddings, with generic prompts to accomplish the jailbreak. Thus, the LLM draws the context to answer the generic prompt from the adversarial image. The generation of benign-appearing adversarial images leverages a novel embedding-space-based methodology, operating with no access to the LLM model. Instead, the attacks require access only to the vision encoder and utilize one of our four embedding space targeting strategies. By not requiring access to the LLM, the attacks lower the entry barrier for attackers, particularly when vision encoders such as CLIP are embedded in closed-source LLMs. The attacks achieve a high success rate across different VLMs, highlighting the risk of cross-modality alignment vulnerabilities, and the need for new alignment approaches for multi-modal models.
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
381,965
2405.07673
An Empirical Study on the Robustness of Massively Multilingual Neural Machine Translation
Massively multilingual neural machine translation (MMNMT) has been proven to enhance the translation quality of low-resource languages. In this paper, we empirically investigate the translation robustness of Indonesian-Chinese translation in the face of various naturally occurring noise. To assess this, we create a robustness evaluation benchmark dataset for Indonesian-Chinese translation. This dataset is automatically translated into Chinese using four NLLB-200 models of different sizes. We conduct both automatic and human evaluations. Our in-depth analysis reveals the correlations between translation error types and the types of noise present, how these correlations change across different model sizes, and the relationships between automatic evaluation indicators and human evaluation indicators. The dataset is publicly available at https://github.com/tjunlp-lab/ID-ZH-MTRobustEval.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
453,814
2402.03110
Non-Stationary Latent Auto-Regressive Bandits
We consider the stochastic multi-armed bandit problem with non-stationary rewards. We present a novel formulation of non-stationarity in the environment where changes in the mean reward of the arms over time are due to some unknown, latent, auto-regressive (AR) state of order $k$. We call this new environment the latent AR bandit. Different forms of the latent AR bandit appear in many real-world settings, especially in emerging scientific fields such as behavioral health or education where there are few mechanistic models of the environment. If the AR order $k$ is known, we propose an algorithm that achieves $\tilde{O}(k\sqrt{T})$ regret in this setting. Empirically, our algorithm outperforms standard UCB across multiple non-stationary environments, even if $k$ is mis-specified.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
426,855
1102.5597
Fast and Faster: A Comparison of Two Streamed Matrix Decomposition Algorithms
With the explosion of the size of digital dataset, the limiting factor for decomposition algorithms is the \emph{number of passes} over the input, as the input is often stored out-of-core or even off-site. Moreover, we're only interested in algorithms that operate in \emph{constant memory} w.r.t. to the input size, so that arbitrarily large input can be processed. In this paper, we present a practical comparison of two such algorithms: a distributed method that operates in a single pass over the input vs. a streamed two-pass stochastic algorithm. The experiments track the effect of distributed computing, oversampling and memory trade-offs on the accuracy and performance of the two algorithms. To ensure meaningful results, we choose the input to be a real dataset, namely the whole of the English Wikipedia, in the application settings of Latent Semantic Analysis.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
9,404
1812.04353
Proximal Mean-field for Neural Network Quantization
Compressing large Neural Networks (NN) by quantizing the parameters, while maintaining the performance is highly desirable due to reduced memory and time complexity. In this work, we cast NN quantization as a discrete labelling problem, and by examining relaxations, we design an efficient iterative optimization procedure that involves stochastic gradient descent followed by a projection. We prove that our simple projected gradient descent approach is, in fact, equivalent to a proximal version of the well-known mean-field method. These findings would allow the decades-old and theoretically grounded research on MRF optimization to be used to design better network quantization schemes. Our experiments on standard classification datasets (MNIST, CIFAR10/100, TinyImageNet) with convolutional and residual architectures show that our algorithm obtains fully-quantized networks with accuracies very close to the floating-point reference networks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
116,196
2011.04853
Social-STAGE: Spatio-Temporal Multi-Modal Future Trajectory Forecast
This paper considers the problem of multi-modal future trajectory forecast with ranking. Here, multi-modality and ranking refer to the multiple plausible path predictions and the confidence in those predictions, respectively. We propose Social-STAGE, Social interaction-aware Spatio-Temporal multi-Attention Graph convolution network with novel Evaluation for multi-modality. Our main contributions include analysis and formulation of multi-modality with ranking using interaction and multi-attention, and introduction of new metrics to evaluate the diversity and associated confidence of multi-modal predictions. We evaluate our approach on existing public datasets ETH and UCY and show that the proposed algorithm outperforms the state of the arts on these datasets.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
205,719
2112.00260
Ranking Distance Calibration for Cross-Domain Few-Shot Learning
Recent progress in few-shot learning promotes a more realistic cross-domain setting, where the source and target datasets are from different domains. Due to the domain gap and disjoint label spaces between source and target datasets, their shared knowledge is extremely limited. This encourages us to explore more information in the target domain rather than to overly elaborate training strategies on the source domain as in many existing methods. Hence, we start from a generic representation pre-trained by a cross-entropy loss and a conventional distance-based classifier, along with an image retrieval view, to employ a re-ranking process for calibrating a target distance matrix by discovering the reciprocal k-nearest neighbours within the task. Assuming the pre-trained representation is biased towards the source, we construct a non-linear subspace to minimise task-irrelevant features therewithin while keeping more transferable discriminative information by a hyperbolic tangent transformation. The calibrated distance in this target-aware non-linear subspace is complementary to that in the pre-trained representation. To impose such distance calibration information onto the pre-trained representation, a Kullback-Leibler divergence loss is employed to gradually guide the model towards the calibrated distance-based distribution. Extensive evaluations on eight target domains show that this target ranking calibration process can improve conventional distance-based classifiers in few-shot learning.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
269,072
2403.14664
ClickTree: A Tree-based Method for Predicting Math Students' Performance Based on Clickstream Data
The prediction of student performance and the analysis of students' learning behavior play an important role in enhancing online courses. By analysing a massive amount of clickstream data that captures student behavior, educators can gain valuable insights into the factors that influence academic outcomes and identify areas of improvement in courses. In this study, we developed ClickTree, a tree-based methodology, to predict student performance in mathematical assignments based on students' clickstream data. We extracted a set of features, including problem-level, assignment-level and student-level features, from the extensive clickstream data and trained a CatBoost tree to predict whether a student successfully answers a problem in an assignment. The developed method achieved an AUC of 0.78844 in the Educational Data Mining Cup 2023 and ranked second in the competition. Furthermore, our results indicate that students encounter more difficulties in problem types where they must select a subset of answers from a given set, as well as in problems on the subject of Algebra II. Additionally, students who performed well in answering end-unit assignment problems engaged more with in-unit assignments and answered more problems correctly, while those who struggled had a higher tutoring request rate. The proposed method can be utilized to improve students' learning experiences, and the above insights can be integrated into mathematical courses to enhance students' learning outcomes.
true
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
440,187
2404.01243
A Unified and Interpretable Emotion Representation and Expression Generation
Canonical emotions, such as happy, sad, and fearful, are easy to understand and annotate. However, emotions are often compound, e.g. happily surprised, and can be mapped to the action units (AUs) used for expressing emotions, and trivially to the canonical ones. Intuitively, emotions are continuous as represented by the arousal-valence (AV) model. An interpretable unification of these four modalities - namely, Canonical, Compound, AUs, and AV - is highly desirable, for a better representation and understanding of emotions. However, such a unification remains unknown in the current literature. In this work, we propose an interpretable and unified emotion model, referred to as C2A2. We also develop a method that leverages labels of the non-unified models to annotate the novel unified one. Finally, we modify the text-conditional diffusion models to understand continuous numbers, which are then used to generate continuous expressions using our unified emotion model. Through quantitative and qualitative experiments, we show that our generated images are rich and capture subtle expressions. Our work allows a fine-grained generation of expressions in conjunction with other textual inputs and offers a new label space for emotions at the same time.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
443,333
2410.13056
Channel-Wise Mixed-Precision Quantization for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable success across a wide range of language tasks, but their deployment on edge devices remains challenging due to the substantial memory requirements imposed by their large parameter sizes. Weight-only quantization presents a promising solution to reduce the memory footprint of LLMs. However, existing approaches primarily focus on integer-bit quantization, limiting their adaptability to fractional-bit quantization tasks and preventing the full utilization of available storage space on devices. In this paper, we introduce Channel-Wise Mixed-Precision Quantization (CMPQ), a novel mixed-precision quantization method that allocates quantization precision in a channel-wise pattern based on activation distributions. By assigning different precision levels to different weight channels, CMPQ can adapt to any bit-width constraint. CMPQ employs a non-uniform quantization strategy and incorporates two outlier extraction techniques that collaboratively preserve the critical information, thereby minimizing the quantization loss. Experiments on different sizes of LLMs demonstrate that CMPQ not only enhances performance in integer-bit quantization tasks but also achieves significant performance gains with a modest increase in memory usage. CMPQ thus represents an adaptive and effective approach to LLM quantization, offering substantial benefits across diverse device capabilities.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
499,344
2103.00508
Citizen Participation and Machine Learning for a Better Democracy
The development of democratic systems is a crucial task as confirmed by its selection as one of the Millennium Sustainable Development Goals by the United Nations. In this article, we report on the progress of a project that aims to address barriers, one of which is information overload, to achieving effective direct citizen participation in democratic decision-making processes. The main objectives are to explore if the application of Natural Language Processing (NLP) and machine learning can improve citizens' experience of digital citizen participation platforms. Taking as a case study the "Decide Madrid" Consul platform, which enables citizens to post proposals for policies they would like to see adopted by the city council, we used NLP and machine learning to provide new ways to (a) suggest to citizens proposals they might wish to support; (b) group citizens by interests so that they can more easily interact with each other; (c) summarise comments posted in response to proposals; (d) assist citizens in aggregating and developing proposals. Evaluation of the results confirms that NLP and machine learning have a role to play in addressing some of the barriers users of platforms such as Consul currently experience.
false
false
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
222,310
2408.02796
Gaussian Mixture based Evidential Learning for Stereo Matching
In this paper, we introduce a novel Gaussian mixture based evidential learning solution for robust stereo matching. Diverging from previous evidential deep learning approaches that rely on a single Gaussian distribution, our framework posits that individual image data adheres to a mixture-of-Gaussian distribution in stereo matching. This assumption yields more precise pixel-level predictions and more accurately mirrors the real-world image distribution. By further employing the inverse-Gamma distribution as an intermediary prior for each mixture component, our probabilistic model achieves improved depth estimation compared to its counterpart with the single Gaussian and effectively captures the model uncertainty, which enables a strong cross-domain generalization ability. We evaluated our method for stereo matching by training the model using the Scene Flow dataset and testing it on KITTI 2015 and Middlebury 2014. The experiment results consistently show that our method brings improvements over the baseline methods in a trustworthy manner. Notably, our approach achieved new state-of-the-art results on both the in-domain validated data and the cross-domain datasets, demonstrating its effectiveness and robustness in stereo matching tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
478,766
2309.14552
Tactile Estimation of Extrinsic Contact Patch for Stable Placement
Precise perception of contact interactions is essential for fine-grained manipulation skills for robots. In this paper, we present the design of feedback skills for robots that must learn to stack complex-shaped objects on top of each other (see Fig.1). To design such a system, a robot should be able to reason about the stability of placement from very gentle contact interactions. Our results demonstrate that it is possible to infer the stability of object placement based on tactile readings during contact formation between the object and its environment. In particular, we estimate the contact patch between a grasped object and its environment using force and tactile observations to estimate the stability of the object during a contact formation. The contact patch could be used to estimate the stability of the object upon release of the grasp. The proposed method is demonstrated in various pairs of objects that are used in a very popular board game.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
394,655
1903.02706
Twitter Speaks: A Case of National Disaster Situational Awareness
In recent years, we have been faced with a series of natural disasters causing a tremendous amount of financial, environmental, and human losses. The unpredictable nature of natural disasters' behavior makes it hard to have a comprehensive situational awareness (SA) to support disaster management. Using opinion surveys is a traditional approach to analyze public concerns during natural disasters; however, this approach is limited, expensive, and time-consuming. Luckily, the advent of social media has provided scholars with an alternative means of analyzing public concerns. Social media enable users (people) to freely communicate their opinions and disperse information regarding current events including natural disasters. This research emphasizes the value of social media analysis and proposes an analytical framework: Twitter Situational Awareness (TwiSA). This framework uses text mining methods including sentiment analysis and topic modeling to create a better SA for disaster preparedness, response, and recovery. TwiSA has also been effectively deployed on a large number of tweets to track the negative concerns of people during the 2015 South Carolina flood.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
123,554
2109.00965
Domain Adaptive Cascade R-CNN for MItosis DOmain Generalization (MIDOG) Challenge
We present a summary of the domain adaptive cascade R-CNN method for mitosis detection of digital histopathology images. By comprehensive data augmentation and adapting existing popular detection architecture, our proposed method has achieved an F1 score of 0.7500 on the preliminary test set in MItosis DOmain Generalization (MIDOG) Challenge at MICCAI 2021.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
253,301
2003.06311
Predictive Analysis for Detection of Human Neck Postures using a robust integration of kinetics and kinematics
Human neck postures and movements need to be monitored, measured, quantified and analyzed, as a preventive measure in healthcare applications. Improper neck postures are an increasing source of neck musculoskeletal disorders, requiring therapy and rehabilitation. The motivation for the research presented in this paper was the need to develop a notification mechanism for improper neck usage. Kinematic data captured by sensors have limitations in accurately classifying the neck postures. Hence, we propose an integrated use of kinematic and kinetic data to efficiently classify neck postures. Using machine learning algorithms we obtained 100% accuracy in the predictive analysis of this data. The research analysis and discussions show that the kinetic data of the Hyoid muscles can accurately detect the neck posture given the corresponding kinematic data captured by the neck-band. The proposed robust platform for the integration of kinematic and kinetic data has enabled the design of a smart neck-band for the prevention of neck musculoskeletal disorders.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
168,082
2006.05612
Deep Learning for Change Detection in Remote Sensing Images: Comprehensive Review and Meta-Analysis
Deep learning (DL) algorithms have been considered a methodology of choice for remote-sensing image analysis over the past few years. Owing to its effective applications, deep learning has also been introduced for automatic change detection and has achieved great success. The present study attempts to provide a comprehensive review and a meta-analysis of the recent progress in this subfield. Specifically, we first introduce the fundamentals of deep learning methods that are frequently adopted for change detection. Secondly, we present the details of the meta-analysis conducted to examine the status of change detection DL studies. Then, we focus on deep learning-based change detection methodologies for remote sensing images by giving a general overview of the existing methods. Specifically, these deep learning-based methods are classified into three groups: fully supervised learning-based methods, fully unsupervised learning-based methods, and transfer learning-based techniques. As a result of these investigations, promising new directions were identified for future research. This study will contribute in several ways to our understanding of deep learning for change detection and will provide a basis for further research.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
181,136
2310.16742
Interferometric Neural Networks
On the one hand, artificial neural networks have many successful applications in the field of machine learning and optimization. On the other hand, interferometers are integral parts of any field that deals with waves such as optics, astronomy, and quantum physics. Here, we introduce neural networks composed of interferometers and then build generative adversarial networks from them. Our networks do not have any classical layer and can be realized on quantum computers or photonic chips. We demonstrate their applicability for combinatorial optimization, image classification, and image generation. For combinatorial optimization, our network consistently converges to the global optimum or remains within a narrow range of it. In multi-class image classification tasks, our networks achieve accuracies of 93% and 83%. Lastly, we show their capability to generate images of digits from 0 to 9 as well as human faces.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
402,854
2104.12627
Analyzing Green View Index and Green View Index best path using Google Street View and deep learning
As an important part of urban landscape research, analyzing and studying street-level greenery can increase the understanding of a city's greenery, contributing to better urban living environment planning and design. Planning the best path through urban greenery is a means to effectively maximize the use of urban greenery, which plays a positive role in the physical and mental health of urban residents and the path planning of visitors. In this paper, we used Google Street View (GSV) to obtain street view images of Osaka City. A semantic segmentation model is adopted to segment the street view images and analyze the Green View Index (GVI) of Osaka City. Based on the GVI, we take advantage of the adjacency matrix and the Floyd-Warshall algorithm to calculate the GVI best path, overcoming the limitations of ArcGIS software. Our analysis not only allows the calculation of specific routes for the GVI best paths but also realizes the visualization and integration of neighborhood urban greenery. By summarizing all the data, we can form an intuitive impression and conduct an objective analysis of the street-level greenery in the research area. Based on this, urban residents and visitors can maximize the available natural resources for a better life. The dataset and code are available at https://github.com/Jackieam/GVI-Best-Path.
false
false
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
232,275
2207.07072
A Query-Optimal Algorithm for Finding Counterfactuals
We design an algorithm for finding counterfactuals with strong theoretical guarantees on its performance. For any monotone model $f : X^d \to \{0,1\}$ and instance $x^\star$, our algorithm makes \[ {S(f)^{O(\Delta_f(x^\star))}\cdot \log d}\] queries to $f$ and returns {an {\sl optimal}} counterfactual for $x^\star$: a nearest instance $x'$ to $x^\star$ for which $f(x')\ne f(x^\star)$. Here $S(f)$ is the sensitivity of $f$, a discrete analogue of the Lipschitz constant, and $\Delta_f(x^\star)$ is the distance from $x^\star$ to its nearest counterfactuals. The previous best known query complexity was $d^{\,O(\Delta_f(x^\star))}$, achievable by brute-force local search. We further prove a lower bound of $S(f)^{\Omega(\Delta_f(x^\star))} + \Omega(\log d)$ on the query complexity of any algorithm, thereby showing that the guarantees of our algorithm are essentially optimal.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
308,094
2402.00772
Neural Risk Limiting Dispatch in Power Networks: Formulation and Generalization Guarantees
Risk limiting dispatch (RLD) has been proposed as an approach that effectively trades off economic costs with operational risks for power dispatch under uncertainty. However, how to solve the RLD problem with provably near-optimal performance still remains an open problem. This paper presents a learning-based solution to this challenge. We first design a data-driven formulation for the RLD problem, which aims to construct a decision rule that directly maps day-ahead observable information to cost-effective dispatch decisions for the future delivery interval. Unlike most existing works that follow a predict-then-optimize paradigm, this end-to-end rule bypasses the additional suboptimality introduced by separately handling prediction and optimization. We then propose neural RLD, a novel solution method to the data-driven formulation. This method leverages an L2-regularized neural network to learn the decision rule, thereby transforming the data-driven formulation into a neural network training task that can be efficiently completed by stochastic gradient descent. A theoretical performance guarantee is further established to bound the suboptimality of our method, which implies that its suboptimality approaches to zero with high probability as more samples are utilized. Simulation tests across various systems demonstrate our method's superior performance in convergence, suboptimality, and computational efficiency compared with benchmarks.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
425,716
2310.02564
Performance Analysis and Optimization of Reconfigurable Multi-Functional Surface Assisted Wireless Communications
Although reconfigurable intelligent surfaces (RISs) can improve the performance of wireless networks by smartly reconfiguring the radio environment, existing passive RISs face two key challenges, i.e., double-fading attenuation and dependence on grid/battery. To address these challenges, this paper proposes a new RIS architecture, called multi-functional RIS (MF-RIS). Different from conventional reflecting-only RIS, the proposed MF-RIS is capable of supporting multiple functions with one surface, including signal reflection, amplification, and energy harvesting. As such, our MF-RIS is able to overcome the double-fading attenuation by harvesting energy from incident signals. Through theoretical analysis, we derive the achievable capacity of an MF-RIS-aided communication network. Compared to the capacity achieved by the existing self-sustainable RIS, we derive the number of reflective elements required for MF-RIS to outperform self-sustainable RIS. To realize a self-sustainable communication system, we investigate the use of MF-RIS in improving the sum-rate of multi-user wireless networks. Specifically, we solve a non-convex optimization problem by jointly designing the transmit beamforming and MF-RIS coefficients. As an extension, we investigate a resource allocation problem in a practical scenario with imperfect channel state information. By approximating the semi-infinite constraints with the S-procedure and the general sign-definiteness, we propose a robust beamforming scheme to combat the inevitable channel estimation errors. Finally, numerical results show that: 1) compared to the self-sustainable RIS, MF-RIS can strike a better balance between energy self-sustainability and throughput improvement; and 2) unlike reflecting-only RIS which can be deployed near the transmitter or receiver, MF-RIS should be deployed closer to the transmitter for higher spectrum efficiency.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
396,904
2411.13904
Towards Full Delegation: Designing Ideal Agentic Behaviors for Travel Planning
How will LLM-based agents be used in the future? While much of the existing work on agents has focused on improving performance on a specific family of objective and challenging tasks, in this work we take a different perspective by thinking about full delegation: agents take over humans' routine decision-making processes and are trusted by humans to find solutions that fit people's personalized needs and are adaptive to ever-changing context. In order to achieve such a goal, the behavior of the agents, i.e., agentic behaviors, should be evaluated not only on their achievements (i.e., outcome evaluation) but also on how they achieved them (i.e., procedure evaluation). For this, we propose the APEC Agent Constitution, a list of criteria that an agent should follow for good agentic behaviors, including Accuracy, Proactivity, Efficiency, and Credibility. To verify whether APEC aligns with human preferences, we develop APEC-Travel, a travel planning agent that proactively extracts hidden personalized needs via multi-round dialog with travelers. APEC-Travel is constructed purely from synthetic data generated by Llama3.1-405B-Instruct with a diverse set of traveler personas to simulate a rich distribution of dialogs. Iteratively fine-tuned to follow the APEC Agent Constitution, APEC-Travel surpasses baselines by 20.7% on rule-based metrics and 9.1% on LLM-as-a-Judge scores across the constitution axes.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
509,966
2105.14322
RPG: Learning Recursive Point Cloud Generation
In this paper we propose a novel point cloud generator that is able to reconstruct and generate 3D point clouds composed of semantic parts. Given a latent representation of the target 3D model, the generation starts from a single point and is expanded recursively to produce the high-resolution point cloud via a sequence of point expansion stages. During the recursive procedure of generation, we not only obtain coarse-to-fine point clouds for the target 3D model from every expansion stage, but also discover, in an unsupervised manner, the semantic segmentation of the target model according to the hierarchical parent-child relation between the points across expansion stages. Moreover, the expansion modules and other elements used in our recursive generator mostly share weights, thus making the overall framework light and efficient. Extensive experiments are conducted to demonstrate that our proposed point cloud generator has comparable or even superior performance on both generation and reconstruction tasks in comparison to various baselines, and provides consistent co-segmentation among 3D instances of the same object class.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
237,626
1304.1113
On Heuristics for Finding Loop Cutsets in Multiply-Connected Belief Networks
We introduce a new heuristic algorithm for the problem of finding minimum size loop cutsets in multiply connected belief networks. We compare this algorithm to that proposed in [Suermondt and Cooper, 1988]. We provide lower bounds on the performance of these algorithms with respect to one another and with respect to optimal. We demonstrate that no heuristic algorithm for this problem can be guaranteed to produce loop cutsets within a constant difference from optimal. We discuss experimental results based on randomly generated networks, and discuss future work and open questions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
23,466
2501.14053
The Redundancy of Non-Singular Channel Simulation
Channel simulation is an alternative to quantization and entropy coding for performing lossy source coding. Recently, channel simulation has gained significant traction in both the machine learning and information theory communities, as it integrates better with machine learning-based data compression algorithms and has better rate-distortion-perception properties than quantization. As the practical importance of channel simulation increases, it is vital to understand its fundamental limitations. Recently, Sriramu and Wagner provided an almost complete characterisation of the redundancy of channel simulation algorithms. In this paper, we complete this characterisation. First, we significantly extend a result of Li and El Gamal, and show that the redundancy of any instance of a channel simulation problem is lower bounded by the channel simulation divergence. Second, we give two proofs that the asymptotic redundancy of simulating iid non-singular channels is lower-bounded by $1/2$: one using a direct approach based on the asymptotic expansion of the channel simulation divergence and one using large deviations theory.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
526,961
1804.05814
Multidimensional Constellations for Uplink SCMA Systems --- A Comparative Study
Sparse code multiple access (SCMA) is a class of non-orthogonal multiple access (NOMA) that is proposed to support uplink machine-type communication services. In an SCMA system, designing multidimensional constellation plays an important role in the performance of the system. Since the behaviour of multidimensional constellations highly depends on the type of the channel, it is crucial to employ a constellation that is suitable for a certain application. In this paper, we first highlight and review the key performance indicators (KPIs) of multidimensional constellations that should be considered in their design process for various channel scenarios. We then provide a survey on the known multidimensional constellations in the context of SCMA systems with their design criteria. The performance of some of those constellations are evaluated for uncoded, high-rate, and low-rate LTE turbo-coded SCMA systems under different channel conditions through extensive simulations. All turbo-coded comparisons are performed for bit-interleaved coded modulation, with a concatenated detection and decoding scheme. Simulation results confirm that multidimensional constellations that satisfy KPIs of a certain channel scenario outperform others. Moreover, the bit error rate performance of uncoded systems, and the performance of the coded systems are coupled to their bit-labeling. The performance of the systems also remarkably depends on the behavior of the multi-user detector at different signal-to-noise ratio regions.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
95,149
2409.12960
LVCD: Reference-based Lineart Video Colorization with Diffusion Models
We propose the first video diffusion framework for reference-based lineart video colorization. Unlike previous works that rely solely on image generative models to colorize lineart frame by frame, our approach leverages a large-scale pretrained video diffusion model to generate colorized animation videos. This approach leads to more temporally consistent results and is better equipped to handle large motions. Firstly, we introduce Sketch-guided ControlNet which provides additional control to finetune an image-to-video diffusion model for controllable video synthesis, enabling the generation of animation videos conditioned on lineart. We then propose Reference Attention to facilitate the transfer of colors from the reference frame to other frames containing fast and expansive motions. Finally, we present a novel scheme for sequential sampling, incorporating the Overlapped Blending Module and Prev-Reference Attention, to extend the video diffusion model beyond its original fixed-length limitation for long video colorization. Both qualitative and quantitative results demonstrate that our method significantly outperforms state-of-the-art techniques in terms of frame and video quality, as well as temporal consistency. Moreover, our method is capable of generating high-quality, long temporal-consistent animation videos with large motions, which is not achievable in previous works. Our code and model are available at https://luckyhzt.github.io/lvcd.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
489,788
2303.07205
The Science of Detecting LLM-Generated Texts
The emergence of large language models (LLMs) has resulted in the production of LLM-generated texts that are highly sophisticated and almost indistinguishable from texts written by humans. However, this has also sparked concerns about the potential misuse of such texts, such as spreading misinformation and causing disruptions in the education system. Although many detection approaches have been proposed, a comprehensive understanding of the achievements and challenges is still lacking. This survey aims to provide an overview of existing LLM-generated text detection techniques and to enhance the control and regulation of language generation models. Furthermore, we emphasize crucial considerations for future research, including the development of comprehensive evaluation metrics and the threat posed by open-source LLMs, to drive progress in the area of LLM-generated text detection.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
351,169
1412.8079
Persian Sentiment Analyzer: A Framework based on a Novel Feature Selection Method
In the past decade, with the enormous growth of digital content on the internet and in databases, sentiment analysis has received more and more attention among information retrieval and natural language processing researchers. Sentiment analysis aims to use automated tools to detect subjective information from reviews. One of the main challenges in sentiment analysis is feature selection. Feature selection is widely used as the first stage of analysis and classification tasks to reduce the dimension of the problem and improve speed through the elimination of irrelevant and redundant features. To date, little research has been conducted on feature selection in sentiment analysis, and works on Persian sentiment analysis are especially rare. This paper considers the problem of sentiment classification using different feature selection methods for online customer reviews in the Persian language. Three of the challenges of Persian text are the use of a wide variety of declensional suffixes, different word spacing, and many informal or colloquial words. In this paper we study these challenges by proposing a model for sentiment classification of Persian review documents. The proposed model is based on lemmatization and feature selection and employs the Naive Bayes algorithm for classification. We evaluate the performance of the model on a manually gathered collection of cellphone reviews, where the results show the effectiveness of the proposed approaches.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
38,886
2407.05262
FastSpiker: Enabling Fast Training for Spiking Neural Networks on Event-based Data through Learning Rate Enhancements for Autonomous Embedded Systems
Autonomous embedded systems (e.g., robots) typically necessitate intelligent computation with low power/energy processing for completing their tasks. Such requirements can be fulfilled by embodied neuromorphic intelligence with spiking neural networks (SNNs) because of their high learning quality (e.g., accuracy) and sparse computation. Here, the employment of event-based data is preferred to ensure seamless connectivity between input and processing parts. However, state-of-the-art SNNs still face a long training time to achieve high accuracy, thereby incurring high energy consumption and producing a high rate of carbon emission. Toward this, we propose FastSpiker, a novel methodology that enables fast SNN training on event-based data through learning rate enhancements targeting autonomous embedded systems. In FastSpiker, we first investigate the impact of different learning rate policies and their values, then select the ones that quickly offer high accuracy. Afterward, we explore different settings for the selected learning rate policies to find the appropriate policies through a statistical-based decision. Experimental results show that our FastSpiker offers up to 10.5x faster training time and up to 88.39% lower carbon emission to achieve higher or comparable accuracy to the state-of-the-art on the event-based automotive dataset (i.e., NCARS). In this manner, our FastSpiker methodology paves the way for green and sustainable computing in realizing embodied neuromorphic intelligence for autonomous embedded systems.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
true
false
false
470,897
2105.00071
Evaluating Attribution in Dialogue Systems: The BEGIN Benchmark
Knowledge-grounded dialogue systems powered by large language models often generate responses that, while fluent, are not attributable to a relevant source of information. Progress towards models that do not exhibit this issue requires evaluation metrics that can quantify its prevalence. To this end, we introduce the Benchmark for Evaluation of Grounded INteraction (BEGIN), comprised of 12k dialogue turns generated by neural dialogue systems trained on three knowledge-grounded dialogue corpora. We collect human annotations assessing the extent to which the models' responses can be attributed to the given background information. We then use BEGIN to analyze eight evaluation metrics. We find that these metrics rely on spurious correlations, do not reliably distinguish attributable abstractive responses from unattributable ones, and perform substantially worse when the knowledge source is longer. Our findings underscore the need for more sophisticated and robust evaluation metrics for knowledge-grounded dialogue. We make BEGIN publicly available at https://github.com/google/BEGIN-dataset.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
233,076
1811.10789
Flexible Attributed Network Embedding
Network embedding aims to encode a network by learning an embedding vector for each node in the network. The network often carries property information that is highly informative with respect to a node's position and role in the network. Most network embedding methods fail to utilize this information during network representation learning. In this paper, we propose a novel framework, FANE, to integrate structure and property information in the network embedding process. In FANE, we design a network to unify the heterogeneity of the two information sources, and define a new random walking strategy to leverage property information and make the two information sources complement each other. FANE is conceptually simple and empirically powerful. It improves over the state-of-the-art methods by over 5% on the Cora dataset classification task and by more than 10% on the WebKB dataset classification task. Experiments also show that the results improve over the state-of-the-art methods by a growing margin as the training size increases. Moreover, qualitative visualization shows that our framework is helpful in exploring network property information. In all, we present a new way of efficiently learning state-of-the-art task-independent representations in complex attributed networks. The source code and datasets of this paper can be obtained from https://github.com/GraphWorld/FANE.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
114,587
2309.09450
Are You Worthy of My Trust?: A Socioethical Perspective on the Impacts of Trustworthy AI Systems on the Environment and Human Society
With the ubiquitous exposure of AI systems today, we believe AI development requires crucial considerations to be deemed trustworthy. While the potential of AI systems is bountiful, it is still largely unknown, as are their risks. In this work, we offer a brief, high-level overview of the societal impacts of AI systems. To do so, we highlight the requirement of multi-disciplinary governance and convergence throughout their lifecycle via critical systemic examinations (e.g., energy consumption), and later discuss induced effects on the environment (i.e., carbon footprint) and on users (i.e., social development). In particular, we consider these impacts from a multi-disciplinary perspective (computer science, sociology, environmental science, and so on) to discuss their inter-connected societal risks and the inability to simultaneously satisfy all aspects of well-being. Therefore, we accentuate the necessity of holistically addressing the pressing concerns of AI systems from a socioethical impact assessment perspective, to explicate their harmful societal effects and truly enable humanity-centered Trustworthy AI.
true
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
392,613
2112.09624
Reciprocity, community detection, and link prediction in dynamic networks
Many complex systems change their structure over time, and in these cases dynamic networks can provide a richer representation of such phenomena. As a consequence, many inference methods have been generalized to the dynamic case with the aim of modeling dynamic interactions. Particular interest has been devoted to extending the stochastic block model and its variants to capture community structure as the network changes in time. While these models assume that edge formation depends only on the community memberships, recent work for static networks shows the importance of including additional parameters capturing structural properties, such as reciprocity. Remarkably, such models are capable of generating more realistic network representations than those that only consider community membership. To this aim, we present a probabilistic generative model with hidden variables that integrates reciprocity and communities as structural information of networks that evolve in time. The model assumes a fundamental order in observing reciprocal data; that is, an edge is observed conditional on its reciprocated edge in the past. We deploy a Markovian approach to construct the network's transition matrix between time steps, and parameter inference is performed with an Expectation-Maximization algorithm that achieves high computational efficiency because it exploits the sparsity of the dataset. We test the performance of the model on synthetic dynamical networks, as well as on real networks of citations and email datasets. We show that our model captures the reciprocity of real networks better than standard models with only community structure, while performing well at link prediction tasks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
272,201
2009.10638
Sense-Deliberate-Act Cognitive Agents for Sense-Compute-Control Applications in the Internet of Things & Services
In this paper, we advocate Agent-Oriented Software Engineering (AOSE) through employing Belief-Desire-Intention (BDI) intelligent agents for developing Sense-Compute-Control (SCC) applications in the Internet of Things and Services (IoTS). We argue that not only the agent paradigm in general, but also cognitive BDI agents with a sense-deliberate-act cycle in particular, fit very well the nature of SCC applications in the IoTS. However, considering the highly constrained heterogeneous devices that are prevalent in the IoTS, existing BDI agent frameworks, even those especially created for Wireless Sensor Networks (WSNs), do not work. We elaborate on the challenges and propose possible approaches to address them.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
196,945
2309.15940
Context-Aware Entity Grounding with Open-Vocabulary 3D Scene Graphs
We present an Open-Vocabulary 3D Scene Graph (OVSG), a formal framework for grounding a variety of entities, such as object instances, agents, and regions, with free-form text-based queries. Unlike conventional semantic-based object localization approaches, our system facilitates context-aware entity localization, allowing for queries such as ``pick up a cup on a kitchen table" or ``navigate to a sofa on which someone is sitting". In contrast to existing research on 3D scene graphs, OVSG supports free-form text input and open-vocabulary querying. Through a series of comparative experiments using the ScanNet dataset and a self-collected dataset, we demonstrate that our proposed approach significantly surpasses the performance of previous semantic-based localization techniques. Moreover, we highlight the practical application of OVSG in real-world robot navigation and manipulation experiments.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
395,158
2409.02730
Complete and Efficient Covariants for 3D Point Configurations with Application to Learning Molecular Quantum Properties
When modeling physical properties of molecules with machine learning, it is desirable to incorporate $SO(3)$-covariance. While such models based on low body order features are not complete, we formulate and prove general completeness properties for higher order methods, and show that $6k-5$ of these features are enough for up to $k$ atoms. We also find that the Clebsch--Gordan operations commonly used in these methods can be replaced by matrix multiplications without sacrificing completeness, lowering the scaling from $O(l^6)$ to $O(l^3)$ in the degree of the features. We apply this to quantum chemistry, but the proposed methods are generally applicable for problems involving 3D point configurations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
485,820
2305.09703
Dynamic Causal Explanation Based Diffusion-Variational Graph Neural Network for Spatio-temporal Forecasting
Graph neural networks (GNNs), especially dynamic GNNs, have become a research hotspot in spatio-temporal forecasting problems. While many dynamic graph construction methods have been developed, relatively few of them explore the causal relationship between neighbour nodes. Thus, the resulting models lack strong explainability for the causal relationships between the neighbour nodes of the dynamically generated graphs, which can easily lead to risk in subsequent decisions. Moreover, few of them consider the uncertainty and noise of dynamic graphs based on time series datasets, which are ubiquitous in real-world graph structure networks. In this paper, we propose a novel Dynamic Diffusion-Variational Graph Neural Network (DVGNN) for spatio-temporal forecasting. For dynamic graph construction, an unsupervised generative model is devised. Two layers of graph convolutional network (GCN) are applied to calculate the posterior distribution of the latent node embeddings in the encoder stage. Then, a diffusion model is used to infer the dynamic link probability and reconstruct causal graphs adaptively in the decoder stage. The new loss function is derived theoretically, and the reparameterization trick is adopted to estimate the probability distribution of the dynamic graphs via the Evidence Lower Bound during the backpropagation period. After obtaining the generated graphs, dynamic GCN and temporal attention are applied to predict future states. Experiments are conducted on four real-world datasets of different graph structures in different domains. The results demonstrate that the proposed DVGNN model outperforms state-of-the-art approaches and achieves outstanding Root Mean Squared Error results while exhibiting higher robustness. Also, by F1-score and probability distribution analysis, we demonstrate that DVGNN better reflects the causal relationships and uncertainty of dynamic graphs.
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
false
false
364,745
2406.02204
The Deep Latent Space Particle Filter for Real-Time Data Assimilation with Uncertainty Quantification
In Data Assimilation, observations are fused with simulations to obtain an accurate estimate of the state and parameters for a given physical system. Combining data with a model, however, while accurately estimating uncertainty, is computationally expensive and infeasible to run in real-time for complex systems. Here, we present a novel particle filter methodology, the Deep Latent Space Particle filter or D-LSPF, that uses neural network-based surrogate models to overcome this computational challenge. The D-LSPF enables filtering in the low-dimensional latent space obtained using Wasserstein AEs with modified vision transformer layers for dimensionality reduction and transformers for parameterized latent space time stepping. As we demonstrate on three test cases, including leak localization in multi-phase pipe flow and seabed identification for fully nonlinear water waves, the D-LSPF runs orders of magnitude faster than a high-fidelity particle filter and 3-5 times faster than alternative methods while being up to an order of magnitude more accurate. The D-LSPF thus enables real-time data assimilation with uncertainty quantification for physical systems.
false
true
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
460,665
2407.09571
ImPORTance -- Machine Learning-Driven Analysis of Global Port Significance and Network Dynamics for Improved Operational Efficiency
Seaports play a crucial role in the global economy, and researchers have sought to understand their significance through various studies. In this paper, we aim to explore the common characteristics shared by important ports by analyzing the network of connections formed by vessel movement among them. To accomplish this task, we adopt a bottom-up network construction approach that combines three years' worth of AIS (Automatic Identification System) data from around the world, constructing a Ports Network that represents the connections between different ports. Through this representation, we use machine learning to measure the relative significance of different port features. Our model examined such features and revealed that geographical characteristics and the depth of the port are indicators of a port's significance to the Ports Network. Accordingly, this study employs a data-driven approach and utilizes machine learning to provide a comprehensive understanding of the factors contributing to ports' importance. The outcomes of our work aim to inform decision-making processes related to port development, resource allocation, and infrastructure planning in the industry.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
472,650
2406.17659
DKPROMPT: Domain Knowledge Prompting Vision-Language Models for Open-World Planning
Vision-language models (VLMs) have been applied to robot task planning problems, where the robot receives a task in natural language and generates plans based on visual inputs. While current VLMs have demonstrated strong vision-language understanding capabilities, their performance is still far from being satisfactory in planning tasks. At the same time, although classical task planners, such as PDDL-based, are strong in planning for long-horizon tasks, they do not work well in open worlds where unforeseen situations are common. In this paper, we propose a novel task planning and execution framework, called DKPROMPT, which automates VLM prompting using domain knowledge in PDDL for classical planning in open worlds. Results from quantitative experiments show that DKPROMPT outperforms classical planning, pure VLM-based and a few other competitive baselines in task completion rate.
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
467,667
1911.09304
Automatic Text-based Personality Recognition on Monologues and Multiparty Dialogues Using Attentive Networks and Contextual Embeddings
Previous works related to automatic personality recognition focus on using traditional classification models with linguistic features. However, attentive neural networks with contextual embeddings, which have achieved huge success in text classification, are rarely explored for this task. In this project, we have two major contributions. First, we create the first dialogue-based personality dataset, FriendsPersona, by annotating 5 personality traits of speakers from the Friends TV Show through crowdsourcing. Second, we present a novel approach to automatic personality recognition using pre-trained contextual embeddings (BERT and RoBERTa) and attentive neural networks. Our models largely improve the state-of-the-art results on the monologue Essays dataset by 2.49%, and establish a solid benchmark on our FriendsPersona. By comparing results on the two datasets, we demonstrate the challenges of modeling personality in multi-party dialogue.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
154,473
2306.08889
Dissecting Multimodality in VideoQA Transformer Models by Impairing Modality Fusion
While VideoQA Transformer models demonstrate competitive performance on standard benchmarks, the reasons behind their success are not fully understood. Do these models capture the rich multimodal structures and dynamics from video and text jointly? Or are they achieving high scores by exploiting biases and spurious features? Hence, to provide insights, we design $\textit{QUAG}$ (QUadrant AveraGe), a lightweight and non-parametric probe, to conduct dataset-model combined representation analysis by impairing modality fusion. We find that the models achieve high performance on many datasets without leveraging multimodal representations. To validate QUAG further, we design $\textit{QUAG-attention}$, a less-expressive replacement of self-attention with restricted token interactions. Models with QUAG-attention achieve similar performance with significantly fewer multiplication operations without any finetuning. Our findings raise doubts about the current models' abilities to learn highly-coupled multimodal representations. Hence, we design the $\textit{CLAVI}$ (Complements in LAnguage and VIdeo) dataset, a stress-test dataset curated by augmenting real-world videos to have high modality coupling. Consistent with the findings of QUAG, we find that most of the models achieve near-trivial performance on CLAVI. This reasserts the limitations of current models for learning highly-coupled multimodal representations, that is not evaluated by the current datasets (project page: https://dissect-videoqa.github.io ).
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
373,587
2210.08990
Improving Object-centric Learning with Query Optimization
The ability to decompose complex natural scenes into meaningful object-centric abstractions lies at the core of human perception and reasoning. In the recent culmination of unsupervised object-centric learning, the Slot-Attention module has played an important role with its simple yet effective design and fostered many powerful variants. These methods, however, have been exceedingly difficult to train without supervision and are ambiguous in the notion of object, especially for complex natural scenes. In this paper, we propose to address these issues by investigating the potential of learnable queries as initializations for Slot-Attention learning, uniting it with efforts from existing attempts on improving Slot-Attention learning with bi-level optimization. With simple code adjustments on Slot-Attention, our model, Bi-level Optimized Query Slot Attention, achieves state-of-the-art results on 3 challenging synthetic and 7 complex real-world datasets in unsupervised image segmentation and reconstruction, outperforming previous baselines by a large margin. We provide thorough ablative studies to validate the necessity and effectiveness of our design. Additionally, our model exhibits great potential for concept binding and zero-shot learning. Our work is made publicly available at https://bo-qsa.github.io
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
324,377
2405.13803
"I Like Sunnie More Than I Expected!": Exploring User Expectation and Perception of an Anthropomorphic LLM-based Conversational Agent for Well-Being Support
The human-computer interaction (HCI) research community has a longstanding interest in exploring the mismatch between users' actual experiences and expectations toward new technologies, for instance, large language models (LLMs). In this study, we compared users' (N = 38) initial expectations against their post-interaction perceptions of two LLM-powered mental well-being intervention activity recommendation systems. Both systems have a built-in LLM to recommend a personalized well-being intervention activity, but one system (Sunnie) has an anthropomorphic conversational interaction design via elements such as appearance, persona, and natural conversation. Results showed that user engagement was high with both systems, and both systems exceeded users' expectations along the utility dimension, highlighting AI's potential to offer useful intervention activity recommendations. In addition, Sunnie further outperformed the non-anthropomorphic baseline system in relational warmth. These findings suggest that anthropomorphic conversational interaction design may be particularly effective in fostering warmth in mental health support contexts.
true
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
456,089
2109.03880
Integrated and Adaptive Guidance and Control for Endoatmospheric Missiles via Reinforcement Learning
We apply a reinforcement meta-learning framework to optimize an integrated and adaptive guidance and flight control system for an air-to-air missile. The system is implemented as a policy that maps navigation system outputs directly to commanded rates of change for the missile's control surface deflections. The system induces intercept trajectories against a maneuvering target that satisfy control constraints on fin deflection angles, and path constraints on look angle and load. We test the optimized system in a six degrees-of-freedom simulator that includes a non-linear radome model and a strapdown seeker model, and demonstrate that the system adapts to both a large flight envelope and off-nominal flight conditions including perturbation of aerodynamic coefficient parameters and center of pressure locations, and flexible body dynamics. Moreover, we find that the system is robust to the parasitic attitude loop induced by radome refraction and imperfect seeker stabilization. We compare our system's performance to a longitudinal model of proportional navigation coupled with a three loop autopilot, and find that our system outperforms this benchmark by a large margin. Additional experiments investigate the impact of removing the recurrent layer from the policy and value function networks, performance with an infrared seeker, and flexible body dynamics.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
254,200
2205.07877
A Comprehensive Survey on Model Quantization for Deep Neural Networks in Image Classification
Recent advancements in machine learning achieved by Deep Neural Networks (DNNs) have been significant. While demonstrating high accuracy, DNNs are associated with a huge number of parameters and computations, which leads to high memory usage and energy consumption. As a result, deploying DNNs on devices with constrained hardware resources poses significant challenges. To overcome this, various compression techniques have been widely employed to optimize DNN accelerators. A promising approach is quantization, in which the full-precision values are stored in low bit-width precision. Quantization not only reduces memory requirements but also replaces high-cost operations with low-cost ones. DNN quantization offers flexibility and efficiency in hardware design, making it a widely adopted technique in various methods. Since quantization has been extensively utilized in previous works, there is a need for an integrated report that provides an understanding, analysis, and comparison of different quantization approaches. Consequently, we present a comprehensive survey of quantization concepts and methods, with a focus on image classification. We describe clustering-based quantization methods and explore the use of a scale factor parameter for approximating full-precision values. Moreover, we thoroughly review the training of a quantized DNN, including the use of a straight-through estimator and quantization regularization. We explain the replacement of floating-point operations with low-cost bitwise operations in a quantized DNN and the sensitivity of different layers in quantization. Furthermore, we highlight the evaluation metrics for quantization methods and important benchmarks in the image classification task. We also present the accuracy of the state-of-the-art methods on CIFAR-10 and ImageNet.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
296,753
2311.13134
Lightweight High-Speed Photography Built on Coded Exposure and Implicit Neural Representation of Videos
The demand for compact cameras capable of recording high-speed scenes with high resolution is steadily increasing. However, achieving such capabilities often entails high bandwidth requirements, resulting in bulky, heavy systems unsuitable for low-capacity platforms. To address this challenge, leveraging a coded exposure setup to encode a frame sequence into a blurry snapshot and subsequently retrieve the latent sharp video presents a lightweight solution. Nevertheless, restoring motion from blur remains a formidable challenge due to the inherent ill-posedness of motion blur decomposition, the intrinsic ambiguity in motion direction, and the diverse motions present in natural videos. In this study, we propose a novel approach to address these challenges by combining the classical coded exposure imaging technique with the emerging implicit neural representation for videos. We strategically embed motion direction cues into the blurry image during the imaging process. Additionally, we develop a novel implicit neural representation based blur decomposition network to sequentially extract the latent video frames from the blurry image, leveraging the embedded motion direction cues. To validate the effectiveness and efficiency of our proposed framework, we conduct extensive experiments using benchmark datasets and real-captured blurry images. The results demonstrate that our approach significantly outperforms existing methods in terms of both quality and flexibility. The code for our work is available at https://github.com/zhihongz/BDINR.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
409,633
1801.07292
Convergence of Value Aggregation for Imitation Learning
Value aggregation is a general framework for solving imitation learning problems. Based on the idea of data aggregation, it generates a policy sequence by iteratively interleaving policy optimization and evaluation in an online learning setting. While the existence of a good policy in the policy sequence can be guaranteed non-asymptotically, little is known about the convergence of the sequence or the performance of the last policy. In this paper, we debunk the common belief that value aggregation always produces a convergent policy sequence with improving performance. Moreover, we identify a critical stability condition for convergence and provide a tight non-asymptotic bound on the performance of the last policy. These new theoretical insights let us stabilize problems with regularization, which removes the inconvenient process of identifying the best policy in the policy sequence in stochastic problems.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
88,757
1907.05267
Perturbation theory approach to study the latent space degeneracy of Variational Autoencoders
The use of Variational Autoencoders in different Machine Learning tasks has drastically increased in recent years. They have been developed as denoising, clustering and generative tools, highlighting a large potential in a wide range of fields. Their embeddings are able to extract relevant information from highly dimensional inputs, but the converged models can differ significantly and lead to degeneracy in the latent space. We leverage the relation between theoretical physics and machine learning to explain this behaviour, and introduce a new approach to correct for degeneration by using perturbation theory. The re-formulation of the embedding as a multi-dimensional generative distribution allows mapping to a new set of functions and their corresponding energy spectrum. We optimise for a perturbed Hamiltonian, with an additional energy potential that is related to the unobserved topology of the data. Our results show the potential of a new theoretical approach that can be used to interpret the latent space and generative nature of unsupervised learning, while the energy landscapes defined by the perturbations can be further used for modelling and dynamical purposes.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
138,307
2211.02930
1-D Convolutional Graph Convolutional Networks for Fault Detection in Distributed Energy Systems
This paper presents a 1-D convolutional graph neural network for fault detection in microgrids. The combination of 1-D convolutional neural networks (1D-CNN) and graph convolutional networks (GCN) helps extract both spatial and temporal correlations from the voltage measurements in microgrids. The fault detection scheme includes fault event detection, fault type and phase classification, and fault location. Five neural network models are trained to handle these tasks. Transfer learning and fine-tuning are applied to reduce training efforts. The combined recurrent graph convolutional neural network (1D-CGCN) is compared with the traditional ANN structure on the Potsdam 13-bus microgrid dataset. The achievable accuracies are 99.27%, 98.1%, 98.75%, and 95.6% for fault detection, fault type classification, fault phase identification, and fault location, respectively.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
328,756
2304.03147
Improving Visual Question Answering Models through Robustness Analysis and In-Context Learning with a Chain of Basic Questions
Deep neural networks have been critical in the task of Visual Question Answering (VQA), with research traditionally focused on improving model accuracy. Recently, however, there has been a trend towards evaluating the robustness of these models against adversarial attacks. This involves assessing the accuracy of VQA models under increasing levels of noise in the input, which can target either the image or the proposed query question, dubbed the main question. However, there is currently a lack of proper analysis of this aspect of VQA. This work proposes a new method that utilizes semantically related questions, referred to as basic questions, acting as noise to evaluate the robustness of VQA models. It is hypothesized that as the similarity of a basic question to the main question decreases, the level of noise increases. To generate a reasonable noise level for a given main question, a pool of basic questions is ranked based on their similarity to the main question, and this ranking problem is cast as a LASSO optimization problem. Additionally, this work proposes a novel robustness measure, R_score, and two basic question datasets to standardize the analysis of VQA model robustness. The experimental results demonstrate that the proposed evaluation method effectively analyzes the robustness of VQA models. Moreover, the experiments show that in-context learning with a chain of basic questions can enhance model accuracy.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
356,686
2009.02035
What the Future Brings: Investigating the Impact of Lookahead for Incremental Neural TTS
In incremental text to speech synthesis (iTTS), the synthesizer produces an audio output before it has access to the entire input sentence. In this paper, we study the behavior of a neural sequence-to-sequence TTS system when used in an incremental mode, i.e. when generating speech output for token n, the system has access to n + k tokens from the text sequence. We first analyze the impact of this incremental policy on the evolution of the encoder representations of token n for different values of k (the lookahead parameter). The results show that, on average, tokens travel 88% of the way to their full context representation with a one-word lookahead and 94% after 2 words. We then investigate which text features are the most influential on the evolution towards the final representation using a random forest analysis. The results show that the most salient factors are related to token length. We finally evaluate the effects of lookahead k at the decoder level, using a MUSHRA listening test. This test shows results that contrast with the above high figures: speech synthesis quality obtained with 2 word-lookahead is significantly lower than the one obtained with the full sentence.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
194,451
2009.09496
Learning Soft Labels via Meta Learning
One-hot labels do not represent soft decision boundaries among concepts, and hence, models trained on them are prone to overfitting. Using soft labels as targets provides regularization, but different soft labels might be optimal at different stages of optimization. Also, training with fixed labels in the presence of noisy annotations leads to worse generalization. To address these limitations, we propose a framework where we treat the labels as learnable parameters, and optimize them along with model parameters. The learned labels continuously adapt themselves to the model's state, thereby providing dynamic regularization. When applied to the task of supervised image classification, our method leads to consistent gains across different datasets and architectures. For instance, dynamically learned labels improve ResNet18 by 2.1% on CIFAR100. When applied to a dataset containing noisy labels, the learned labels correct the annotation mistakes and improve over the state-of-the-art by a significant margin. Finally, we show that learned labels capture semantic relationships between classes, and thereby improve teacher models for the downstream task of distillation.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
196,599
2008.02863
A Transfer Learning Method for Speech Emotion Recognition from Automatic Speech Recognition
This paper presents a transfer learning method for speech emotion recognition based on a Time-Delay Neural Network (TDNN) architecture. A major challenge in current speech-based emotion detection research is data scarcity. The proposed method resolves this problem by applying transfer learning techniques in order to leverage data from the automatic speech recognition (ASR) task, for which ample data is available. Our experiments also show the advantage of speaker-class adaptation modeling techniques by adopting identity-vector (i-vector) based features in addition to standard Mel-Frequency Cepstral Coefficient (MFCC) features.[1] We show that the transfer learning models significantly outperform the other methods without pretraining on ASR. The experiments were performed on the publicly available IEMOCAP dataset, which provides 12 hours of emotional speech data. The transfer learning was initialized using the Ted-Lium v.2 speech dataset, which provides 207 hours of audio with the corresponding transcripts. We achieve significantly higher accuracy compared to the state-of-the-art, using five-fold cross-validation. Using only speech, we obtain an accuracy of 71.7% for anger, excitement, sadness, and neutrality emotion content.
true
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
190,732
2406.09936
ALGM: Adaptive Local-then-Global Token Merging for Efficient Semantic Segmentation with Plain Vision Transformers
This work presents Adaptive Local-then-Global Merging (ALGM), a token reduction method for semantic segmentation networks that use plain Vision Transformers. ALGM merges tokens in two stages: (1) In the first network layer, it merges similar tokens within a small local window and (2) halfway through the network, it merges similar tokens across the entire image. This is motivated by an analysis in which we found that, in those situations, tokens with a high cosine similarity can likely be merged without a drop in segmentation quality. With extensive experiments across multiple datasets and network configurations, we show that ALGM not only significantly improves the throughput by up to 100%, but can also enhance the mean IoU by up to +1.1, thereby achieving a better trade-off between segmentation quality and efficiency than existing methods. Moreover, our approach is adaptive during inference, meaning that the same model can be used for optimal efficiency or accuracy, depending on the application. Code is available at https://tue-mps.github.io/ALGM.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
464,151
2406.19107
FDLite: A Single Stage Lightweight Face Detector Network
Face detection is frequently attempted by using heavy pre-trained backbone networks like ResNet-50/101/152 and VGG16/19. Few recent works have also proposed lightweight detectors with customized backbones, novel loss functions and efficient training strategies. The novelty of this work lies in the design of a lightweight detector while training with only the commonly used loss functions and learning strategies. The proposed face detector grossly follows the established RetinaFace architecture. The first contribution of this work is the design of a customized lightweight backbone network (BLite) having 0.167M parameters with 0.52 GFLOPs. The second contribution is the use of two independent multi-task losses. The proposed lightweight face detector (FDLite) has 0.26M parameters with 0.94 GFLOPs. The network is trained on the WIDER FACE dataset. FDLite is observed to achieve 92.3\%, 89.8\%, and 82.2\% Average Precision (AP) on the easy, medium, and hard subsets of the WIDER FACE validation dataset, respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
468,304
2402.02457
A Risk-aware Planning Framework of UGVs in Off-Road Environment
The planning module is an essential component of intelligent vehicle research. In this paper, we address the risk-aware planning problem of UGVs through a global-local planning framework which seamlessly integrates risk assessment methods. In particular, a global planning algorithm named Coarse2fine A* is proposed, which incorporates a potential field approach to enhance the safety of the planning results while ensuring the efficiency of the algorithm. A deterministic sampling method for local planning is leveraged and modified to suit the off-road environment. It also integrates a risk assessment model to emphasize the avoidance of local risks. The performance of the algorithm is demonstrated through simulation experiments by comparing it with baseline algorithms, where the results of Coarse2fine A* are shown to be approximately 30% safer than those of the baseline algorithms. The practicality and effectiveness of the proposed planning framework are validated by deploying it on a real-world system consisting of a control center and a practical UGV platform.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
426,573
1305.6126
Problems on q-Analogs in Coding Theory
The interest in $q$-analogs of codes and designs has increased in the last few years as a consequence of their new application in error-correction for random network coding. There are many interesting theoretical, algebraic, and combinatorial coding problems concerning these $q$-analogs which remain unsolved. The first goal of this paper is to make a short summary of the large amount of research which was done in the area mainly in the last few years and to provide most of the relevant references. The second goal of this paper is to present one hundred open questions and problems for future research, whose solution will advance the knowledge in this area. The third goal of this paper is to present and start some directions in solving some of these problems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
24,815
1809.02129
Structural Consistency and Controllability for Diverse Colorization
Colorizing a given gray-level image is an important task in the media and advertising industry. Due to the ambiguity inherent to colorization (many shades are often plausible), recent approaches started to explicitly model diversity. However, one of the most obvious artifacts, structural inconsistency, is rarely considered by existing methods which predict chrominance independently for every pixel. To address this issue, we develop a conditional random field based variational auto-encoder formulation which is able to achieve diversity while taking into account structural consistency. Moreover, we introduce a controllability mechanism that can incorporate external constraints from diverse sources including a user interface. Compared to existing baselines, we demonstrate that our method obtains more diverse and globally consistent colorizations on the LFW, LSUN-Church and ILSVRC-2015 datasets.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
106,968
2112.02999
Dynamic Mirror Descent based Model Predictive Control for Accelerating Robot Learning
Recent works in Reinforcement Learning (RL) combine model-free (Mf)-RL algorithms with model-based (Mb)-RL approaches to get the best from both: asymptotic performance of Mf-RL and high sample-efficiency of Mb-RL. Inspired by these works, we propose a hierarchical framework that integrates online learning for the Mb-trajectory optimization with off-policy methods for the Mf-RL. In particular, two loops are proposed, where the Dynamic Mirror Descent based Model Predictive Control (DMD-MPC) is used as the inner loop Mb-RL to obtain an optimal sequence of actions. These actions are in turn used to significantly accelerate the outer loop Mf-RL. We show that our formulation is generic for a broad class of MPC-based policies and objectives, and includes some of the well-known Mb-Mf approaches. We finally introduce a new algorithm: Mirror-Descent Model Predictive RL (M-DeMoRL), which uses Cross-Entropy Method (CEM) with elite fractions for the inner loop. Our experiments show faster convergence of the proposed hierarchical approach on benchmark MuJoCo tasks. We also demonstrate hardware training for trajectory tracking in a 2R leg and hardware transfer for robust walking in a quadruped. We show that the inner-loop Mb-RL significantly decreases the number of training iterations required in the real system, thereby validating the proposed approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
270,041
2007.03615
Detecting Signatures of Early-stage Dementia with Behavioural Models Derived from Sensor Data
There is a pressing need to automatically understand the state and progression of chronic neurological diseases such as dementia. The emergence of state-of-the-art sensing platforms offers unprecedented opportunities for indirect and automatic evaluation of disease state through the lens of behavioural monitoring. This paper specifically seeks to characterise behavioural signatures of mild cognitive impairment (MCI) and Alzheimer's disease (AD) in the \textit{early} stages of the disease. We introduce bespoke behavioural models and analyses of key symptoms and deploy these on a novel dataset of longitudinal sensor data from persons with MCI and AD. We present preliminary findings that show the relationship between levels of sleep quality and wandering can be subtly different between patients in the early stages of dementia and healthy cohabiting controls.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
186,110
1903.01707
The Complexity of Morality: Checking Markov Blanket Consistency with DAGs via Morality
A family of Markov blankets in a faithful Bayesian network satisfies the symmetry and consistency properties. In this paper, we draw a bijection between families of consistent Markov blankets and moral graphs. We define the new concepts of weak recursive simpliciality and perfect elimination kits. We prove that they are equivalent to graph morality. In addition, we prove that morality can be decided in polynomial time for graphs with maximum degree less than $5$, but the problem is NP-complete for graphs with higher maximum degrees.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
123,323
1609.08445
AP16-OL7: A Multilingual Database for Oriental Languages and A Language Recognition Baseline
We present the AP16-OL7 database which was released as the training and test data for the oriental language recognition (OLR) challenge on APSIPA 2016. Based on the database, a baseline system was constructed on the basis of the i-vector model. We report the baseline results evaluated in various metrics defined by the AP16-OLR evaluation plan and demonstrate that AP16-OL7 is a reasonable data resource for multilingual research.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
61,600
2109.12871
Strong entanglement distribution of quantum networks
Large-scale quantum networks have been employed to overcome practical constraints of transmissions and storage for single entangled systems. Our goal in this article is to explore the strong entanglement distribution of quantum networks. We first show that any connected network consisting of generalized EPR states and GHZ states satisfies the strong CKW monogamy inequality in terms of a bipartite entanglement measure. This reveals an interesting feature of high-dimensional entanglement with local tensor decomposition going beyond qubit entanglement. We then apply the new entanglement distribution relation in entangled networks to obtain a quantum max-flow min-cut theorem in terms of von Neumann entropy and R\'{e}nyi-$\alpha$ entropy. We finally classify entangled quantum networks by distinguishing network configurations under local unitary operations. These results provide new insights into characterizing quantum networks in quantum information processing.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
257,443
1911.11632
Minimal Linear Codes Constructed from Functions
In this paper, we consider minimal linear codes in a general construction of linear codes from q-ary functions. First, we give the sufficient and necessary condition for codewords to be minimal. Second, as an application, we present four constructions of minimal linear codes which contain some recent results as special cases.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
155,183
1807.10584
Uncertainty and Interpretability in Convolutional Neural Networks for Semantic Segmentation of Colorectal Polyps
Convolutional Neural Networks (CNNs) are propelling advances in a range of different computer vision tasks such as object detection and object segmentation. Their success has motivated research in applications of such models for medical image analysis. If CNN-based models are to be helpful in a medical context, they need to be precise, interpretable, and uncertainty in predictions must be well understood. In this paper, we develop and evaluate recent advances in uncertainty estimation and model interpretability in the context of semantic segmentation of polyps from colonoscopy images. We evaluate and enhance several architectures of Fully Convolutional Networks (FCNs) for semantic segmentation of colorectal polyps and provide a comparison between these models. Our highest performing model achieves a 76.06\% mean IOU accuracy on the EndoScene dataset, a considerable improvement over the previous state-of-the-art.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
103,976