id: string (length 9–16)
title: string (length 4–278)
abstract: string (length 3–4.08k)
cs.HC: bool (2 classes)
cs.CE: bool (2 classes)
cs.SD: bool (2 classes)
cs.SI: bool (2 classes)
cs.AI: bool (2 classes)
cs.IR: bool (2 classes)
cs.LG: bool (2 classes)
cs.RO: bool (2 classes)
cs.CL: bool (2 classes)
cs.IT: bool (2 classes)
cs.SY: bool (2 classes)
cs.CV: bool (2 classes)
cs.CR: bool (2 classes)
cs.CY: bool (2 classes)
cs.MA: bool (2 classes)
cs.NE: bool (2 classes)
cs.DB: bool (2 classes)
Other: bool (2 classes)
__index_level_0__: int64 (0–541k)
1606.02467
Point-wise mutual information-based video segmentation with high temporal consistency
In this paper, we tackle the problem of temporally consistent boundary detection and hierarchical segmentation in videos. While finding the best high-level reasoning of region assignments in videos is the focus of much recent research, temporal consistency in boundary detection has so far only rarely been tackled. We argue that temporally consistent boundaries are a key component to temporally consistent region assignment. The proposed method is based on the point-wise mutual information (PMI) of spatio-temporal voxels. Temporal consistency is established by an evaluation of PMI-based point affinities in the spectral domain over space and time. Thus, the proposed method is independent of any optical flow computation or previously learned motion models. The proposed low-level video segmentation method outperforms the learning-based state of the art in terms of standard region metrics.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
56,966
2407.09548
Towards Temporal Change Explanations from Bi-Temporal Satellite Images
Explaining temporal changes between satellite images taken at different times is important for urban planning and environmental monitoring. However, manual dataset construction for the task is costly, so human-AI collaboration is promising. In this direction, we investigate the ability of Large-scale Vision-Language Models (LVLMs) to explain temporal changes between satellite images. While LVLMs are known to generate good image captions, they receive only a single image as input. To deal with a pair of satellite images as input, we propose three prompting methods. Through human evaluation, we find that our step-by-step reasoning-based prompting is effective.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
472,636
2201.05972
Sparse Cross-scale Attention Network for Efficient LiDAR Panoptic Segmentation
Two major challenges of 3D LiDAR Panoptic Segmentation (PS) are that the point clouds of an object are surface-aggregated, making it hard to model long-range dependencies, especially for large instances, and that objects are often too close to each other to separate. Recent literature addresses these problems with time-consuming grouping processes such as dual-clustering, mean-shift offsets, etc., or with a bird's-eye-view (BEV) dense centroid representation that downplays geometry. However, the long-range geometry relationship is not sufficiently modeled by the local feature learning of the above methods. To this end, we present SCAN, a novel sparse cross-scale attention network that first aligns multi-scale sparse features with global voxel-encoded attention to capture the long-range relationship of instance context, which boosts the regression accuracy for over-segmented large objects. For the surface-aggregated points, SCAN adopts a novel sparse class-agnostic representation of instance centroids, which not only maintains the sparsity of aligned features to solve under-segmentation on small objects, but also reduces the computation of the network through sparse convolution. Our method outperforms previous methods by a large margin on the SemanticKITTI dataset for the challenging 3D PS task, achieving 1st place with real-time inference speed.
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
275,571
2101.10721
Data sharing games
Data sharing issues pervade online social and economic environments. To foster social progress, it is important to develop models of the interaction between data producers and consumers that can promote the rise of cooperation between the involved parties. We formalize this interaction as a game, the data sharing game, based on the Iterated Prisoner's Dilemma and deal with it through multi-agent reinforcement learning techniques. We consider several strategies for how the citizens may behave, depending on the degree of centralization sought. Simulations suggest mechanisms for cooperation to take place and, thus, achieve maximum social utility: data consumers should perform some kind of opponent modeling, or a regulator should transfer utility between both players and incentivise them.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
217,026
1806.09076
Distributed Edge Caching in Ultra-dense Fog Radio Access Networks: A Mean Field Approach
In this paper, the edge caching problem in ultra-dense fog radio access networks (F-RAN) is investigated. Taking into account time-variant user requests and ultra-dense deployment of fog access points (F-APs), we propose a dynamic distributed edge caching scheme to jointly minimize the request service delay and fronthaul traffic load. Considering the interactive relationship among F-APs, we model the caching optimization problem as a stochastic differential game (SDG) which captures the temporal dynamics of F-AP states and incorporates user requests status. The SDG is further approximated as a mean field game (MFG) by exploiting the ultra-dense property of F-RAN. In the MFG, each F-AP can optimize its caching policy independently through iteratively solving the corresponding partial differential equations without any information exchange with other F-APs. The simulation results show that the proposed edge caching scheme outperforms the baseline schemes under both static and time-variant user requests.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
101,287
2404.08095
Energy-Intensive Industries Providing Ancillary Services: A Real Case of Zinc Galvanizing Process
Energy-intensive industries can adapt to help balance the power grid. Using a real-world case study of a zinc galvanizing process in Denmark, we show how a modest investment in power control of the furnace enables the provision of various ancillary services. We consider two types of services, namely frequency containment reserve (FCR) and manual frequency restoration reserve (mFRR), and numerically conclude that the monetary value of both services is significant, such that the payback time of the investment is potentially within a year. The FCR service provision is preferable, as its impact on the temperature of the zinc is negligible.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
446,111
2307.07909
Is Imitation All You Need? Generalized Decision-Making with Dual-Phase Training
We introduce DualMind, a generalist agent designed to tackle various decision-making tasks that addresses challenges posed by current methods, such as overfitting behaviors and dependence on task-specific fine-tuning. DualMind uses a novel "Dual-phase" training strategy that emulates how humans learn to act in the world. The model first learns fundamental common knowledge through a self-supervised objective tailored for control tasks and then learns how to make decisions based on different contexts through imitating behaviors conditioned on given prompts. DualMind can handle tasks across domains, scenes, and embodiments using just a single set of model weights and can execute zero-shot prompting without requiring task-specific fine-tuning. We evaluate DualMind on MetaWorld and Habitat through extensive experiments and demonstrate its superior generalizability compared to previous techniques, outperforming other generalist agents by over 50$\%$ and 70$\%$ on Habitat and MetaWorld, respectively. On the 45 MetaWorld tasks, DualMind completes over 30 with a success rate above 90$\%$.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
379,584
2204.03873
Spatial Transformer Network on Skeleton-based Gait Recognition
Skeleton-based gait recognition models usually suffer from robustness problems, as Rank-1 accuracy varies from 90\% in normal walking cases to 70\% in walking-with-coats cases. In this work, we propose a state-of-the-art robust skeleton-based gait recognition model called Gait-TR, which combines spatial transformer frameworks and temporal convolutional networks. Gait-TR achieves substantial improvements over other skeleton-based gait models, with higher accuracy and better robustness on the well-known gait dataset CASIA-B. Particularly in walking-with-coats cases, Gait-TR achieves a 90\% Rank-1 gait recognition accuracy rate, higher than the best result of silhouette-based models, which usually have higher accuracy than skeleton-based gait recognition models. Moreover, our experiments on CASIA-B show that the spatial transformer extracts gait features from the human skeleton better than the widely used graph convolutional network.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
290,462
2007.05901
Shortened Linear Codes over Finite Fields
Puncturing and shortening are two important techniques for constructing new linear codes from old ones. In the past 70 years, much progress on the puncturing technique has been made, and many works on punctured linear codes have been done. Many families of linear codes with interesting parameters have been obtained with the puncturing technique. However, little research on the shortening technique has been done, and there are only a handful of references on shortened linear codes. The first objective of this paper is to prove some general theory for shortened linear codes. The second objective is to study some shortened codes of the Hamming codes, Simplex codes, some Reed-Muller codes, and ovoid codes. Eleven families of optimal shortened codes with interesting parameters are presented in this paper. As a byproduct, five infinite families of $2$-designs are also constructed from some of the shortened codes presented in this paper.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
186,822
2305.10216
CHMMOTv1 -- Cardiac and Hepatic Multi-Echo (T2*) MRI Images and Clinical Dataset for Iron Overload on Thalassemia Patients
Owing to the invasiveness and low accuracy of other tests, including biopsy and ferritin levels, magnetic resonance imaging (T2 and T2*-MRI) has been considered the standard test for patients with thalassemia (THM). Given the role of deep learning networks in medical sciences for improving diagnosis and treatment, and the scarcity of resources for them, we decided to provide a set of magnetic resonance images of the cardiac and hepatic organs. The dataset includes 124 THM patients (67 women and 57 men) aged 5-52 years. Patients were divided into two groups: with follow-up (1-5 times, at intervals of about 5-6 months) and without follow-up. In addition, T2* and R2* values, the results of the cardiac and hepatic reports (normal, mild, moderate, severe, and very severe), and laboratory tests including Ferritin, Bilirubin (D and T), AST, ALT, and ALP levels are provided as an Excel file. This dataset (CHMMOTv1) has been published in Mendeley Dataverse and is accessible on the web at: http://databiox.com.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
364,958
1409.5040
Communities and Hierarchical Structures in Dynamic Social Networks: Analysis and Visualization
Detection of community structures in social networks has attracted much attention in the domains of sociology and behavioral sciences. Social networks also exhibit a dynamic nature, as these networks change continuously with the passage of time. Social networks may also present a hierarchical structure led by individuals who play important roles in a society, such as managers and decision makers. Detection and visualization of such networks changing over time is a challenging problem, where communities change as a function of events taking place in the society and the roles people play in it. In this paper we address these issues by presenting a system to analyze dynamic social networks. The proposed system is based on dynamic graph discretization and graph clustering. The system allows detection of major structural changes taking place in social communities over time and reveals hierarchies by identifying influential people in social networks. We use two different data sets for the empirical evaluation and observe that our system helps discover interesting facts about the social and hierarchical structures present in these networks.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
36,130
0905.0192
Fuzzy Mnesors
A fuzzy mnesor space is a semimodule over the positive real numbers. It can be used as a theoretical framework for fuzzy sets. Hence we can prove a great number of properties of fuzzy sets without referring to the membership functions.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
3,627
2406.00830
Collaborative Novel Object Discovery and Box-Guided Cross-Modal Alignment for Open-Vocabulary 3D Object Detection
Open-vocabulary 3D Object Detection (OV-3DDet) addresses the detection of objects from an arbitrary list of novel categories in 3D scenes, which remains a very challenging problem. In this work, we propose CoDAv2, a unified framework designed to innovatively tackle both the localization and classification of novel 3D objects, under the condition of limited base categories. For localization, the proposed 3D Novel Object Discovery (3D-NOD) strategy utilizes 3D geometries and 2D open-vocabulary semantic priors to discover pseudo labels for novel objects during training. 3D-NOD is further extended with an Enrichment strategy that significantly enriches the novel object distribution in the training scenes, and then enhances the model's ability to localize more novel objects. The 3D-NOD with Enrichment is termed 3D-NODE. For classification, the Discovery-driven Cross-modal Alignment (DCMA) module aligns features from 3D point clouds and 2D/textual modalities, employing both class-agnostic and class-specific alignments that are iteratively refined to handle the expanding vocabulary of objects. Besides, 2D box guidance boosts the classification accuracy against complex background noises, which is coined as Box-DCMA. Extensive evaluation demonstrates the superiority of CoDAv2. CoDAv2 outperforms the best-performing method by a large margin (AP_Novel of 9.17 vs. 3.61 on SUN-RGBD and 9.12 vs. 3.74 on ScanNetv2). Source code and pre-trained models are available at the GitHub project page.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
460,043
1411.7855
V-variable image compression
V-variable fractals, where $V$ is a positive integer, are intuitively fractals with at most $V$ different "forms" or "shapes" at all levels of magnification. In this paper we describe how V-variable fractals can be used for the purpose of image compression.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
37,968
2306.11128
CAMMARL: Conformal Action Modeling in Multi Agent Reinforcement Learning
Before taking actions in an environment with more than one intelligent agent, an autonomous agent may benefit from reasoning about the other agents and utilizing a notion of a guarantee or confidence about the behavior of the system. In this article, we propose a novel multi-agent reinforcement learning (MARL) algorithm CAMMARL, which involves modeling the actions of other agents in different situations in the form of confident sets, i.e., sets containing their true actions with a high probability. We then use these estimates to inform an agent's decision-making. For estimating such sets, we use the concept of conformal predictions, by means of which, we not only obtain an estimate of the most probable outcome but get to quantify the operable uncertainty as well. For instance, we can predict a set that provably covers the true predictions with high probabilities (e.g., 95%). Through several experiments in two fully cooperative multi-agent tasks, we show that CAMMARL elevates the capabilities of an autonomous agent in MARL by modeling conformal prediction sets over the behavior of other agents in the environment and utilizing such estimates to enhance its policy learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
374,483
2204.13874
OA-Mine: Open-World Attribute Mining for E-Commerce Products with Weak Supervision
Automatic extraction of product attributes from their textual descriptions is essential for online shopper experience. One inherent challenge of this task is the emerging nature of e-commerce products -- we see new types of products with their unique set of new attributes constantly. Most prior works on this matter mine new values for a set of known attributes but cannot handle new attributes that arose from constantly changing data. In this work, we study the attribute mining problem in an open-world setting to extract novel attributes and their values. Instead of providing comprehensive training data, the user only needs to provide a few examples for a few known attribute types as weak supervision. We propose a principled framework that first generates attribute value candidates and then groups them into clusters of attributes. The candidate generation step probes a pre-trained language model to extract phrases from product titles. Then, an attribute-aware fine-tuning method optimizes a multitask objective and shapes the language model representation to be attribute-discriminative. Finally, we discover new attributes and values through the self-ensemble of our framework, which handles the open-world challenge. We run extensive experiments on a large distantly annotated development set and a gold standard human-annotated test set that we collected. Our model significantly outperforms strong baselines and can generalize to unseen attributes and product types.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
293,974
2407.13303
Mean Teacher based SSL Framework for Indoor Localization Using Wi-Fi RSSI Fingerprinting
Wi-Fi fingerprinting is widely applied for indoor localization due to the widespread availability of Wi-Fi devices. However, traditional methods are not ideal for multi-building and multi-floor environments due to scalability issues. Therefore, more and more researchers have employed deep learning techniques to enable scalable indoor localization. This paper introduces a novel semi-supervised learning framework for neural networks based on wireless access point selection, noise injection, and the Mean Teacher model, which leverages unlabeled fingerprints to enhance localization performance. The proposed framework can manage hybrid in/outsourced and voluntarily contributed databases and continually expand the fingerprint database with newly submitted unlabeled fingerprints during service. The viability of the proposed framework was examined using two established deep-learning models with the UJIIndoorLoc database. The experimental results suggest that the proposed framework significantly improves localization performance compared to the supervised learning-based approach in terms of floor-level coordinate estimation using the EvAAL metric, with enhancements of up to 10.99% and 8.98% in the former scenario and 4.25% and 9.35% in the latter, respectively; additional studies highlight the importance of the essential components of the proposed framework.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
474,326
1804.08280
PlusEmo2Vec at SemEval-2018 Task 1: Exploiting emotion knowledge from emoji and #hashtags
This paper describes our system submitted to SemEval-2018 Task 1: Affect in Tweets (AIT) to solve five subtasks. We focus on modeling both sentence- and word-level representations of emotion inside texts through large distantly labeled corpora with emojis and hashtags. We transfer the emotional knowledge by exploiting neural network models as feature extractors and use these representations for traditional machine learning models such as support vector regression (SVR) and logistic regression to solve the competition tasks. Our system ranked among the top 3 for all subtasks in which we participated.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
95,734
2304.06469
Analysing Fairness of Privacy-Utility Mobility Models
Preserving individuals' privacy in sharing spatial-temporal datasets is critical to prevent re-identification attacks based on unique trajectories. Existing privacy techniques tend to propose ideal privacy-utility tradeoffs; however, they largely ignore the fairness implications of mobility models and whether such techniques perform equally for different groups of users. The relationship between fairness and privacy-aware models remains unclear, and there barely exist any defined sets of metrics for measuring fairness in the spatial-temporal context. In this work, we define a set of fairness metrics designed explicitly for human mobility, based on structural similarity and entropy of the trajectories. Under these definitions, we examine the fairness of two state-of-the-art privacy-preserving models that rely on GAN and representation learning to reduce the re-identification rate of users for data sharing. Our results show that while both models guarantee group fairness in terms of demographic parity, they violate individual fairness criteria, indicating that users with highly similar trajectories receive disparate privacy gain. We conclude that the tension between the re-identification task and individual fairness needs to be considered in future spatial-temporal data analysis and modelling to achieve a privacy-preserving fairness-aware setting.
false
false
false
false
true
false
true
false
false
false
false
false
true
true
false
false
false
false
357,983
2004.04220
Determination of spatial configuration of an underwater swarm with minimum data
The subject is the localization problem of a swarm of autonomous underwater vehicles (AUVs), in the frame of the HARNESS project; by localization, we mean the relative swarm configuration, i.e., the geometrical shape of the group. The result is achieved by using the signals that the robots exchange. The swarm is organized by rules and conceived to perform tasks ranging from environmental monitoring to terrorism-attack surveillance. Two methods of determining the shape of the swarm, both based on trilateration, are proposed. The first method focuses on the robots' speeds: we use our knowledge of the speeds and the distances between the machines. The second method considers only the distances and the orientation angles of the robots. Unlike a standard trilateration problem, we do not know the positions of the beacons, which renders the problem a difficult one. Moreover, we have very few data: more than one step of motion is needed to resolve the multiple solutions arising from the symmetries of the system, with an optimization process over one or more objective functions leading to the final configuration. We subsequently checked our algorithm using a simulator that takes into account random errors affecting the measurements.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
171,812
2101.02047
Unified Learning Approach for Egocentric Hand Gesture Recognition and Fingertip Detection
Head-mounted device-based human-computer interaction often requires egocentric recognition of hand gestures and detection of fingertips. In this paper, a unified approach to egocentric hand gesture recognition and fingertip detection is introduced. The proposed algorithm uses a single convolutional neural network to predict the probabilities of finger classes and the positions of fingertips in one forward propagation. Instead of directly regressing the positions of fingertips from the fully connected layer, the ensemble of fingertip positions is regressed from the fully convolutional network. Subsequently, the ensemble average is taken to regress the final positions of fingertips. Since the whole pipeline uses a single network, it is significantly fast in computation. Experimental results show that the proposed method outperforms existing fingertip detection approaches, including the Direct Regression and the Heatmap-based frameworks. The effectiveness of the proposed method is also shown in an in-the-wild scenario as well as in a virtual reality use case.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
214,515
1810.02497
Compositional planning in Markov decision processes: Temporal abstraction meets generalized logic composition
In hierarchical planning for Markov decision processes (MDPs), temporal abstraction allows planning with macro-actions that take place at different time scales in the form of sequential composition. In this paper, we propose a novel approach to compositional reasoning and hierarchical planning for MDPs under temporal logic constraints. In addition to sequential composition, we introduce a composition of policies based on generalized logic composition: given sub-policies for sub-tasks and a new task expressed as a logic composition of sub-tasks, a semi-optimal policy, which is optimal in planning with only sub-policies, can be obtained by simply composing sub-policies. Thus, a synthesis algorithm is developed to compute optimal policies efficiently by planning with primitive actions, policies for sub-tasks, and the compositions of sub-policies, maximizing the probability of satisfying temporal logic specifications. We demonstrate the correctness and efficiency of the proposed method in stochastic planning examples with a single agent and multiple task specifications.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
109,601
1810.08307
Reduction of Parameter Redundancy in Biaffine Classifiers with Symmetric and Circulant Weight Matrices
Currently, the biaffine classifier has been attracting attention as a method to introduce an attention mechanism into the modeling of binary relations. For instance, in the field of dependency parsing, the Deep Biaffine Parser by Dozat and Manning has achieved state-of-the-art performance as a graph-based dependency parser on the English Penn Treebank and CoNLL 2017 shared task. On the other hand, it is reported that parameter redundancy in the weight matrix in biaffine classifiers, which has O(n^2) parameters, results in overfitting (n is the number of dimensions). In this paper, we attempted to reduce the parameter redundancy by assuming either symmetry or circularity of weight matrices. In our experiments on the CoNLL 2017 shared task dataset, our model achieved better or comparable accuracy on most of the treebanks with more than 16% parameter reduction.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
110,791
2411.04833
Finding Control Invariant Sets via Lipschitz Constants of Linear Programs
Control invariant sets play an important role in safety-critical control and find broad application in numerous fields such as obstacle avoidance for mobile robots. However, finding valid control invariant sets of dynamical systems under input limitations is notoriously difficult. We present an approach to safely expand an initial set while always guaranteeing that the set is control invariant. Specifically, we define an expansion law for the boundary of a set and check for control invariance using Linear Programs (LPs). To verify control invariance on a continuous domain, we leverage recently proposed Lipschitz constants of LPs to transform the problem of continuous verification into a finite number of LPs. Using concepts from differentiable optimization, we derive the safe expansion law of the control invariant set and show how it can be interpreted as a second invariance problem in the space of possible boundaries. Finally, we show how the obtained set can be used to obtain a minimally invasive safety filter in a Control Barrier Function (CBF) framework. Our work is supported by theoretical results as well as numerical examples.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
506,436
2203.00146
VaultDB: A Real-World Pilot of Secure Multi-Party Computation within a Clinical Research Network
Electronic health records represent a rich and growing source of clinical data for research. Privacy, regulatory, and institutional concerns limit the speed and ease of sharing this data. VaultDB is a framework for securely computing SQL queries over private data from two or more sources. It evaluates queries using secure multiparty computation: cryptographic protocols that evaluate a function such that the only information revealed from running it is the query answer. We describe the development of a HIPAA-compliant version of VaultDB on the Chicago Area Patient Centered Outcomes Research Network (CAPriCORN). This multi-institutional clinical research network spans the electronic health records of nearly 13M patients over hundreds of clinics and hospitals in the Chicago metropolitan area. Our results from deploying at three health systems within this network show its efficiency and scalability for distributed clinical research analyses without moving patient records from their site of origin.
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
282,891
2010.07004
Binarization Methods for Motor-Imagery Brain-Computer Interface Classification
Successful motor-imagery brain-computer interface (MI-BCI) algorithms either extract a large number of handcrafted features and train a classifier, or combine feature extraction and classification within deep convolutional neural networks (CNNs). Both approaches typically result in a set of real-valued weights, that pose challenges when targeting real-time execution on tightly resource-constrained devices. We propose methods for each of these approaches that allow transforming real-valued weights to binary numbers for efficient inference. Our first method, based on sparse bipolar random projection, projects a large number of real-valued Riemannian covariance features to a binary space, where a linear SVM classifier can be learned with binary weights too. By tuning the dimension of the binary embedding, we achieve almost the same accuracy in 4-class MI ($\leq$1.27% lower) compared to models with float16 weights, yet delivering a more compact model with simpler operations to execute. Second, we propose to use memory-augmented neural networks (MANNs) for MI-BCI such that the augmented memory is binarized. Our method replaces the fully connected layer of CNNs with a binary augmented memory using bipolar random projection, or learned projection. Our experimental results on EEGNet, an already compact CNN for MI-BCI, show that it can be compressed by 1.28x at iso-accuracy using the random projection. On the other hand, using the learned projection provides 3.89% higher accuracy but increases the memory size by 28.10x.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
200,673
2011.00164
Differentially Private ADMM Algorithms for Machine Learning
In this paper, we study efficient differentially private alternating direction methods of multipliers (ADMM) via gradient perturbation for many machine learning problems. For smooth convex loss functions with (non)-smooth regularization, we propose the first differentially private ADMM (DP-ADMM) algorithm with performance guarantee of $(\epsilon,\delta)$-differential privacy ($(\epsilon,\delta)$-DP). From the viewpoint of theoretical analysis, we use the Gaussian mechanism and the conversion relationship between R\'enyi Differential Privacy (RDP) and DP to perform a comprehensive privacy analysis for our algorithm. Then we establish a new criterion to prove the convergence of the proposed algorithms including DP-ADMM. We also give the utility analysis of our DP-ADMM. Moreover, we propose an accelerated DP-ADMM (DP-AccADMM) with the Nesterov's acceleration technique. Finally, we conduct numerical experiments on many real-world datasets to show the privacy-utility tradeoff of the two proposed algorithms, and all the comparative analysis shows that DP-AccADMM converges faster and has a better utility than DP-ADMM, when the privacy budget $\epsilon$ is larger than a threshold.
false
false
false
false
true
false
true
false
false
false
false
false
true
false
false
false
false
true
204,115
1803.01626
Variance-Aware Regret Bounds for Undiscounted Reinforcement Learning in MDPs
The problem of reinforcement learning in an unknown and discrete Markov Decision Process (MDP) under the average-reward criterion is considered, when the learner interacts with the system in a single stream of observations, starting from an initial state without any reset. We revisit the minimax lower bound for that problem, making the local variance of the bias function appear in place of the diameter of the MDP. Furthermore, we provide a novel analysis of the KL-UCRL algorithm establishing a high-probability regret bound scaling as $\widetilde {\mathcal O}\Bigl({\textstyle \sqrt{S\sum_{s,a}{\bf V}^\star_{s,a}T}}\Bigr)$ for this algorithm for ergodic MDPs, where $S$ denotes the number of states and where ${\bf V}^\star_{s,a}$ is the variance of the bias function with respect to the next-state distribution following action $a$ in state $s$. The resulting bound improves upon the best previously known regret bound $\widetilde {\mathcal O}(DS\sqrt{AT})$ for that algorithm, where $A$ and $D$ respectively denote the maximum number of actions (per state) and the diameter of the MDP. We finally compare the leading terms of the two bounds in some benchmark MDPs indicating that the derived bound can provide an order of magnitude improvement in some cases. Our analysis leverages novel variations of the transportation lemma combined with Kullback-Leibler concentration inequalities, that we believe to be of independent interest.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
91,920
2401.16702
Multi-granularity Correspondence Learning from Long-term Noisy Videos
Existing video-language studies mainly focus on learning short video clips, leaving long-term temporal dependencies rarely explored due to the prohibitively high computational cost of modeling long videos. To address this issue, one feasible solution is learning the correspondence between video clips and captions, which however inevitably encounters the multi-granularity noisy correspondence (MNC) problem. To be specific, MNC refers to the clip-caption misalignment (coarse-grained) and frame-word misalignment (fine-grained), hindering temporal learning and video understanding. In this paper, we propose NOise Robust Temporal Optimal traNsport (Norton) that addresses MNC in a unified optimal transport (OT) framework. In brief, Norton employs video-paragraph and clip-caption contrastive losses to capture long-term dependencies based on OT. To address coarse-grained misalignment in video-paragraph contrast, Norton filters out the irrelevant clips and captions through an alignable prompt bucket and realigns asynchronous clip-caption pairs based on transport distance. To address the fine-grained misalignment, Norton incorporates a soft-maximum operator to identify crucial words and key frames. Additionally, Norton exploits the potential faulty negative samples in clip-caption contrast by rectifying the alignment target with OT assignment to ensure precise temporal modeling. Extensive experiments on video retrieval, videoQA, and action segmentation verify the effectiveness of our method. Code is available at https://lin-yijie.github.io/projects/Norton.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
424,943
1804.02156
OpenSeqSLAM2.0: An Open Source Toolbox for Visual Place Recognition Under Changing Conditions
Visually recognising a traversed route - regardless of whether seen during the day or night, in clear or inclement conditions, or in summer or winter - is an important capability for navigating robots. Since SeqSLAM was introduced in 2012, a large body of work has followed exploring how robotic systems can use the algorithm to meet the challenges posed by navigation in changing environmental conditions. The following paper describes OpenSeqSLAM2.0, a fully open source toolbox for visual place recognition under changing conditions. Beyond the benefits of open access to the source code, OpenSeqSLAM2.0 provides a number of tools to facilitate exploration of the visual place recognition problem and interactive parameter tuning. Using the new open source platform, it is shown for the first time how comprehensive parameter characterisations provide new insights into many of the system components previously presented in ad hoc ways and provide users with a guide to what system component options should be used under what circumstances and why.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
94,352
2405.11399
An exact coverage path planning algorithm for UAV-based search and rescue operations
Unmanned aerial vehicles (UAVs) are increasingly utilized in global search and rescue efforts, enhancing operational efficiency. In these missions, a coordinated swarm of UAVs is deployed to efficiently cover expansive areas by capturing and analyzing aerial imagery and footage. Rapid coverage is paramount in these scenarios, as swift discovery can mean the difference between life and death for those in peril. This paper focuses on optimizing flight path planning for multiple UAVs in windy conditions to efficiently cover rectangular search areas in minimal time. We address this challenge by dividing the search area into a grid network and formulating it as a mixed-integer program (MIP). Our research introduces a precise lower bound for the objective function and an exact algorithm capable of finding either the optimal solution or a near-optimal solution with a constant absolute gap to optimality. Notably, as the problem complexity increases, our solution exhibits a diminishing relative optimality gap while maintaining negligible computational costs compared to the MIP approach.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
455,117
2108.09682
Uncertainty-aware Clustering for Unsupervised Domain Adaptive Object Re-identification
Unsupervised Domain Adaptive (UDA) object re-identification (Re-ID) aims at adapting a model trained on a labeled source domain to an unlabeled target domain. State-of-the-art object Re-ID approaches adopt clustering algorithms to generate pseudo-labels for the unlabeled target domain. However, the inevitable label noise caused by the clustering procedure significantly degrades the discriminative power of Re-ID model. To address this problem, we propose an uncertainty-aware clustering framework (UCF) for UDA tasks. First, a novel hierarchical clustering scheme is proposed to promote clustering quality. Second, an uncertainty-aware collaborative instance selection method is introduced to select images with reliable labels for model training. Combining both techniques effectively reduces the impact of noisy labels. In addition, we introduce a strong baseline that features a compact contrastive loss. Our UCF method consistently achieves state-of-the-art performance in multiple UDA tasks for object Re-ID, and significantly reduces the gap between unsupervised and supervised Re-ID performance. In particular, the performance of our unsupervised UCF method in the MSMT17$\to$Market1501 task is better than that of the fully supervised setting on Market1501. The code of UCF is available at https://github.com/Wang-pengfei/UCF.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
251,687
2206.01570
On Calibration of Graph Neural Networks for Node Classification
Graphs can model real-world, complex systems by representing entities and their interactions in terms of nodes and edges. To better exploit the graph structure, graph neural networks have been developed, which learn entity and edge embeddings for tasks such as node classification and link prediction. These models achieve good performance with respect to accuracy, but the confidence scores associated with the predictions might not be calibrated. That means that the scores might not reflect the ground-truth probabilities of the predicted events, which would be especially important for safety-critical applications. Even though graph neural networks are used for a wide range of tasks, the calibration thereof has not been sufficiently explored yet. We investigate the calibration of graph neural networks for node classification, study the effect of existing post-processing calibration methods, and analyze the influence of model capacity, graph density, and a new loss function on calibration. Further, we propose a topology-aware calibration method that takes the neighboring nodes into account and yields improved calibration compared to baseline methods.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
300,522
1911.01167
HARQ-CC Enabled NOMA Designs With Outage Probability Constraints
In this paper, we aim to design an adaptive power allocation scheme to minimize the average transmit power of a hybrid automatic repeat request with chase combining (HARQ-CC) enabled non-orthogonal multiple access (NOMA) system under strict outage constraints of users. Specifically, we assume the base station only knows the statistical channel state information of the users. We first focus on the two-user cases. To evaluate the performance of the two-user HARQ-CC enabled NOMA systems, we first analyze the outage probability of each user. Then, an average power minimization problem is formulated. However, the attained expressions of the outage probabilities are nonconvex, and thus make the problem difficult to solve. Thus, we first conservatively approximate it by a tractable one and then use a successive convex approximation based algorithm to handle the relaxed problem iteratively. For more practical applications, we also investigate the HARQ-CC enabled transmissions in multi-user scenarios. The user-pairing and power allocation problem is considered. With the aid of matching theory, a low complexity algorithm is presented to first handle the user-pairing problem. Then the power allocation problem is solved by the proposed SCA-based algorithm. Simulation results show the efficiency of the proposed transmission strategy and the near-optimality of the proposed algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
152,033
1010.5742
Stochastic Verification Theorem of Forward-Backward Controlled Systems for Viscosity Solutions
In this paper, we investigate the controlled system described by forward-backward stochastic differential equations with the control contained in drift, diffusion and generator of BSDE. A new verification theorem is derived within the framework of viscosity solutions without involving any derivatives of the value functions. It is worth pointing out that this theorem has wider applicability than the restrictive classical verification theorems. As a relevant problem, the optimal stochastic feedback controls for forward-backward system are discussed as well.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
8,050
2008.11239
HoloLens 2 Research Mode as a Tool for Computer Vision Research
Mixed reality headsets, such as the Microsoft HoloLens 2, are powerful sensing devices with integrated compute capabilities, which makes them an ideal platform for computer vision research. In this technical report, we present HoloLens 2 Research Mode, an API and a set of tools enabling access to the raw sensor streams. We provide an overview of the API and explain how it can be used to build mixed reality applications based on processing sensor data. We also show how to combine the Research Mode sensor data with the built-in eye and hand tracking capabilities provided by HoloLens 2. By releasing the Research Mode API and a set of open-source tools, we aim to foster further research in the fields of computer vision as well as robotics and encourage contributions from the research community.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
193,211
2312.00700
Generative Parameter-Efficient Fine-Tuning
We present Generative Parameter-Efficient Fine-Tuning (GIFT) for adapting pretrained Transformer backbones on downstream tasks. GIFT learns to generate the fine-tuned weights for a layer directly from its pretrained weights. The GIFT network is parameterized in a minimally-simple way by two linear layers (without bias terms), and is shared by different pretrained layers selected for fine-tuning (e.g., the Query layers), which result in significantly fewer trainable parameters compared to the layer-specific methods like Low-Rank Adapter (LoRA). We also show this formulation bridges parameter-efficient fine-tuning and representation fine-tuning. We perform comprehensive experiments on natural language tasks (commonsense and arithmetic reasoning, instruction tuning, and sequence classification) and computer vision tasks (fine-grained classification). We obtain the best performance and parameter efficiency among baselines on commonsense and arithmetic reasoning, and instruction following using the Llama family of models and on visual recognition benchmarks using Vision Transformers. Notably, compared to LoRA, we obtain 5.7% absolute increase in average accuracy with 14 times reduction of parameters on Commonsense170k using Llama-3 (8B), and 5.4% absolute increase in the win rate with 4 times reduction of parameters using Llama-2 (7B) during instruction tuning. Our GIFT also obtains a slightly higher win rate on instruction tuning than GPT 3.5 (Turbo 1106).
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
412,143
2308.06594
CoverNav: Cover Following Navigation Planning in Unstructured Outdoor Environment with Deep Reinforcement Learning
Autonomous navigation in offroad environments has been extensively studied in the robotics field. However, navigation in covert situations where an autonomous vehicle needs to remain hidden from outside observers remains an underexplored area. In this paper, we propose a novel Deep Reinforcement Learning (DRL) based algorithm, called CoverNav, for identifying covert and navigable trajectories with minimal cost in offroad terrains and jungle environments in the presence of observers. CoverNav focuses on unmanned ground vehicles seeking shelters and taking covers while safely navigating to a predefined destination. Our proposed DRL method computes a local cost map that helps distinguish which path will grant the maximal covertness while maintaining a low cost trajectory using an elevation map generated from 3D point cloud data, the robot's pose, and directed goal information. CoverNav helps robot agents to learn the low elevation terrain using a reward function while penalizing it proportionately when it experiences high elevation. If an observer is spotted, CoverNav enables the robot to select natural obstacles (e.g., rocks, houses, disabled vehicles, trees, etc.) and use them as shelters to hide behind. We evaluate CoverNav using the Unity simulation environment and show that it guarantees dynamically feasible velocities in the terrain when fed with an elevation map generated by another DRL based navigation algorithm. Additionally, we evaluate CoverNav's effectiveness in achieving a maximum goal distance of 12 meters and its success rate in different elevation scenarios with and without cover objects. We observe competitive performance comparable to state of the art (SOTA) methods without compromising accuracy.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
385,196
1510.01315
Stochastic model for phonemes uncovers an author-dependency of their usage
We study rank-frequency relations for phonemes, the minimal units that still relate to linguistic meaning. We show that these relations can be described by the Dirichlet distribution, a direct analogue of the ideal-gas model in statistical mechanics. This description allows us to demonstrate that the rank-frequency relations for phonemes of a text do depend on its author. The author-dependency effect is not caused by the author's vocabulary (common words used in different texts), and is confirmed by several alternative means. This suggests that it can be directly related to phonemes. These features contrast to rank-frequency relations for words, which are both author and text independent and are governed by Zipf's law.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
47,603
2105.04284
Boomerang uniformity of a class of power maps
We consider the boomerang uniformity of an infinite class of (locally-APN) power maps and show that its boomerang uniformity over the finite field $\F_{2^n}$ is $2$ and $4$, when $n \equiv 0 \pmod 4$ and $n \equiv 2 \pmod 4$, respectively. As a consequence, we show that for this class of power maps, the differential uniformity is strictly greater than its boomerang uniformity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
234,458
cs/0109010
Anaphora and Discourse Structure
We argue in this paper that many common adverbial phrases generally taken to signal a discourse relation between syntactically connected units within discourse structure, instead work anaphorically to contribute relational meaning, with only indirect dependence on discourse structure. This allows a simpler discourse structure to provide scaffolding for compositional semantics, and reveals multiple ways in which the relational meaning conveyed by adverbial connectives can interact with that associated with discourse structure. We conclude by sketching out a lexicalised grammar for discourse that facilitates discourse interpretation as a product of compositional rules, anaphor resolution and inference.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
537,411
2005.02549
Birth-Burst in Evolving Networks
The evolution of complex networks is governed by both growing rules and internal properties. Most evolving network models (e.g. preferential attachment) emphasize the growing strategy, while neglecting the characteristics of individual nodes. In this study, we analyzed a widely studied network: the evolving protein-protein interaction (PPI) network. We discovered the critical contribution of individual nodes, occurring particularly at their birth. Specifically, a node is born with a fitness value - a measurement of its intrinsic significance. Upon the introduction of a node with a large fitness into the network, a corresponding high birth-degree is determined accordingly, leading to an abrupt increase of connectivity in the network. The degree fraction of these large (hub) nodes does not decay away with the network evolution, while keeping a constant influence over the lifetime. Here we developed the birth-burst model, an adaptation of the fitness model, to simulate degree-burst and phase-transition in the network evolution.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
175,903
2408.06872
Generative AI Tools in Academic Research: Applications and Implications for Qualitative and Quantitative Research Methodologies
This study examines the impact of Generative Artificial Intelligence (GenAI) on academic research, focusing on its application to qualitative and quantitative data analysis. As GenAI tools evolve rapidly, they offer new possibilities for enhancing research productivity and democratising complex analytical processes. However, their integration into academic practice raises significant questions regarding research integrity and security, authorship, and the changing nature of scholarly work. Through an examination of current capabilities and potential future applications, this study provides insights into how researchers may utilise GenAI tools responsibly and ethically. We present case studies that demonstrate the application of GenAI in various research methodologies, discuss the challenges of replicability and consistency in AI-assisted research, and consider the ethical implications of increased AI integration in academia. This study explores both qualitative and quantitative applications of GenAI, highlighting tools for transcription, coding, thematic analysis, visual analytics, and statistical analysis. By addressing these issues, we aim to contribute to the ongoing discourse on the role of AI in shaping the future of academic research and provide guidance for researchers exploring the rapidly evolving landscape of AI-assisted research tools and research.
true
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
480,375
2305.04039
Refining the Responses of LLMs by Themselves
In this paper, we propose a simple yet efficient approach based on prompt engineering that leverages the large language model itself to optimize its answers without relying on auxiliary models. We introduce an iterative self-evaluating optimization mechanism, with the potential for improved output quality as iterations progress, removing the need for manual intervention. The experiment's findings indicate that utilizing our response refinement framework on the GPT-3.5 model yields results that are on par with, or even surpass, those generated by the cutting-edge GPT-4 model. Detailed implementation strategies and illustrative examples are provided to demonstrate the superiority of our proposed solution.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
362,608
2502.13420
Probabilistically Robust Uncertainty Analysis and Optimal Control of Continuous Lyophilization via Polynomial Chaos Theory
Lyophilization, aka freeze drying, is a process commonly used to increase the stability of various drug products in biotherapeutics manufacturing, e.g., mRNA vaccines, allowing for higher storage temperature. While the current trends in the industry are moving towards continuous manufacturing, the majority of industrial lyophilization processes are still being operated in a batch mode. This article presents a framework that accounts for the probabilistic uncertainty during the primary and secondary drying steps in continuous lyophilization. The probabilistic uncertainty is incorporated into the mechanistic model via polynomial chaos theory (PCT). The resulting PCT-based model is able to accurately and efficiently quantify the effects of uncertainty on several critical process variables, including the temperature, sublimation front, and concentration of bound water. The integration of the PCT-based model into stochastic optimization and control is demonstrated. The proposed framework and case studies can be used to guide the design and control of continuous lyophilization while accounting for probabilistic uncertainty.
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
535,358
2201.11611
Asymmetric Coded Caching for Multi-Antenna Location-Dependent Content Delivery
Efficient usage of in-device storage and computation capabilities are key solutions to support data-intensive applications such as immersive digital experiences. This paper proposes a location-dependent multi-antenna coded caching -based content delivery scheme tailored specifically for wireless immersive viewing applications. First, a novel memory allocation process incentivizes the content relevant to the identified wireless bottleneck areas. This enables a trade-off between local and global caching gains and results in unequal fractions of location-dependent multimedia content cached by each user. Then, a novel packet generation process is carried out during the subsequent delivery phase, given the asymmetric cache placement. During this phase, the number of packets transmitted to each user is the same, while the sizes of the packets are proportional to the corresponding location-dependent cache ratios. In this regard, each user is served with location-specific content using joint multicast beamforming and a multi-rate modulation scheme that simultaneously benefits from global caching and spatial multiplexing gains. Numerical experiments and mathematical analysis demonstrate significant performance gains compared to the state-of-the-art.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
277,340
2410.16295
Identification of Mean-Field Dynamics using Transformers
This paper investigates the use of transformer architectures to approximate the mean-field dynamics of interacting particle systems exhibiting collective behavior. Such systems are fundamental in modeling phenomena across physics, biology, and engineering, including gas dynamics, opinion formation, biological networks, and swarm robotics. The key characteristic of these systems is that the particles are indistinguishable, leading to permutation-equivariant dynamics. We demonstrate that transformers, which inherently possess permutation equivariance, are well-suited for approximating these dynamics. Specifically, we prove that if a finite-dimensional transformer can effectively approximate the finite-dimensional vector field governing the particle system, then the expected output of this transformer provides a good approximation for the infinite-dimensional mean-field vector field. Leveraging this result, we establish theoretical bounds on the distance between the true mean-field dynamics and those obtained using the transformer. We validate our theoretical findings through numerical simulations on the Cucker-Smale model for flocking, and the mean-field system for training two-layer neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
500,968
2105.07745
Low-Input Accurate Periodic Motion of an Underactuated Mechanism: Mass Distribution and Nonlinear Spring Shaping
This work presents a control-oriented structural design approach for a 2-DOF underactuated mechanical system, with the purpose of generating an optimal oscillatory behavior of the end-effector. To achieve the desired periodic motion, we propose to adjust the dynamic response of the mechanism by selecting its mass distribution and the characteristic of a nonlinear spring. In particular, we introduce a two-step optimization strategy to shape the system's zero dynamics, obtained via input-output linearization. The first part of the procedure aims to minimize the root-mean-square value of the input torque by optimizing the mechanism's mass distribution. In this context, we show that a perfect matching with the desired trajectory can be reached by assuming the ability to design an arbitrary shape of the system's elastic properties. Then, in order to favor a simpler physical implementation of the structure, we dedicate the second optimization step to the piecewise linear approximation of the previously defined stiffness characteristic. The proposed procedure is finally tested in detailed numerical simulations, confirming its effectiveness in generating a complex and efficient periodic motion.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
235,543
2406.08876
Heuristics for Influence Maximization with Tiered Influence and Activation thresholds
Information flows among people as they communicate through social media websites. Due to the dependency on digital media, a person shares important information or regular updates with friends and family. The set of persons on social media forms a social network. Influence Maximization (IM) is a known problem in social networks. In social networks, information flows from one person to another using an underlying diffusion model. There are two fundamental diffusion models: the Independent Cascade Model (ICM) and the Linear Threshold Model (LTM). In this paper, we study a variant of the IM problem called the Minimum Influential Seeds (MINFS) problem proposed by Qiang et al. [16]. It generalizes the classical IM problem with LTM as the diffusion model. Compared to IM, this variant has additional parameters: the influence threshold for each node and the propagation range. The propagation range is a positive integer that specifies how far the information can propagate from a node. A node on the network is not immediately influenced until it receives the same information from a sufficient number of neighbors (influence threshold). Similarly, any node does not forward information until it receives the same information from a sufficient number of neighbors (activation threshold). Once a node becomes activated, it tries to activate or influence its neighbors. The MINFS problem aims to select the minimum number of initial spreader nodes such that all nodes of the graph are influenced. In this paper, we extend the study of the MINFS problem. We propose heuristics that construct seed sets based on the average degree of non-activated nodes, closest first, and backbone-based heaviest path.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
463,670
1808.07456
Stacked Pooling: Improving Crowd Counting by Boosting Scale Invariance
In this work, we explore the cross-scale similarity in crowd counting scenario, in which the regions of different scales often exhibit high visual similarity. This feature is universal both within an image and across different images, indicating the importance of scale invariance of a crowd counting model. Motivated by this, in this paper we propose simple but effective variants of pooling module, i.e., multi-kernel pooling and stacked pooling, to boost the scale invariance of convolutional neural networks (CNNs), benefiting much the crowd density estimation and counting. Specifically, the multi-kernel pooling comprises pooling kernels with multiple receptive fields to capture the responses at multi-scale local ranges. The stacked pooling is an equivalent form of multi-kernel pooling that considerably reduces the computing cost. Our proposed pooling modules do not introduce extra parameters into the model and can easily take place of the vanilla pooling layer in implementation. In empirical study on two benchmark crowd counting datasets, the stacked pooling beats the vanilla pooling layer in most cases.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
105,745
1708.00977
Network Community Detection: A Review and Visual Survey
Community structure is an important area of research. It has received considerable attention from the scientific community. Despite its importance, one of the key problems in locating information about community detection is the diverse spread of related articles across various disciplines. To the best of our knowledge, there is no current comprehensive review of the recent literature that applies scientometric analysis with complex network techniques covering all relevant articles from the Web of Science (WoS). Here we present a visual survey of key literature using CiteSpace. The idea is to identify emerging trends besides using network techniques to examine the evolution of the domain. Toward that end, we identify the most influential, central, as well as active nodes using scientometric analyses. We examine authors, key articles, cited references, core subject categories, key journals, institutions, as well as countries. The exploration of the scientometric literature of the domain reveals that Yong Wang is a pivot node with the highest centrality. Additionally, we have observed that Mark Newman is the most highly cited author in the network. We have also identified that the journal "Reviews of Modern Physics" has the strongest citation burst. In terms of cited documents, an article by Andrea Lancichinetti has the highest centrality score. We have also discovered that the key publications in this domain originate from the United States, whereas Scotland has the strongest and longest citation burst. Additionally, we have found that the categories of "Computer Science" and "Engineering" lead other categories based on frequency and centrality, respectively.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
true
78,309
2106.02044
Heterogeneous Noisy Short Signal Camouflage in Multi-Domain Environment Decision-Making
Data transmission between two or more digital devices in industry and government demands secure and agile technology. Digital information distribution often requires deployment of Internet of Things (IoT) devices and data fusion techniques, which have gained popularity in both civilian and military environments, such as the emergence of Smart Cities and the Internet of Battlefield Things (IoBT). This usually requires capturing and consolidating data from multiple sources. Because datasets do not necessarily originate from identical sensors, fused data typically results in a complex Big Data problem. Due to the potentially sensitive nature of IoT datasets, blockchain technology is used to facilitate secure sharing of IoT datasets, which allows digital information to be distributed but not copied. However, blockchain has several limitations related to complexity, scalability, and excessive energy consumption. We propose an approach to hide information (a sensor signal) by transforming it into an image or an audio signal. In one of the latest attempts at military modernization, we investigate a sensor fusion approach by examining the challenges of enabling an intelligent identification and detection operation, and we demonstrate the feasibility of the proposed deep learning and anomaly detection models that can support a future application for a specific hand-gesture alert system based on wearable devices.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
238,709
1912.13436
30% Reach Increase via Low-complexity Hybrid HD/SD FEC and Nonlinearity-tolerant 4D Modulation
Current optical coherent transponder technology is driving data rates towards 1 Tb/s/λ and beyond. This trend requires both high-performance coded modulation schemes and efficient implementation of the forward-error-correction (FEC) decoder. A possible solution to this problem is combining advanced multidimensional modulation formats with low-complexity hybrid HD/SD FEC decoders. Following this rationale, in this paper we combine two recently introduced coded modulation techniques: the geometrically-shaped 4D-64 polarization ring-switched format and the soft-aided bit-marking-scaled reliability decoder. This joint scheme enabled us to experimentally demonstrate the transmission of 11x218 Gbit/s channels over transatlantic distances at 5.2 bit/4D-sym. Furthermore, a 30% reach increase is demonstrated over PM-8QAM with conventional HD-FEC decoding for product codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
159,079
2102.04883
Introduction to Machine Learning for the Sciences
This is an introductory machine-learning course specifically developed with STEM students in mind. Our goal is to provide the interested reader with the basics to employ machine learning in their own projects and to familiarize themselves with the terminology as a foundation for further reading of the relevant literature. In these lecture notes, we discuss supervised, unsupervised, and reinforcement learning. The notes start with an exposition of machine learning methods without neural networks, such as principal component analysis, t-SNE, and clustering, as well as linear regression and linear classifiers. We continue with an introduction to both basic and advanced neural-network structures such as dense feed-forward and convolutional neural networks, recurrent neural networks, restricted Boltzmann machines, (variational) autoencoders, and generative adversarial networks. Questions of interpretability are discussed for latent-space representations and using the examples of dreaming and adversarial attacks. The final section is dedicated to reinforcement learning, where we introduce basic notions of value functions and policy learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
219,253
2404.17550
CoCar NextGen: a Multi-Purpose Platform for Connected Autonomous Driving Research
Real-world testing is of vital importance to the success of automated driving. While many players in the business design purpose-built testing vehicles, we designed and built a modular platform that offers high flexibility for any kind of scenario. CoCar NextGen is equipped with next-generation hardware that addresses all future use cases. Its extensive, redundant sensor setup allows the development of cross-domain data-driven approaches that manage the transfer to other sensor setups. Together with the possibility of being deployed on public roads, this creates a unique research platform that supports the road to automated driving at SAE Level 5.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
449,896
2501.13071
Robust Body Composition Analysis by Generating 3D CT Volumes from Limited 2D Slices
Body composition analysis provides valuable insights into aging, disease progression, and overall health conditions. Due to concerns about radiation exposure, two-dimensional (2D) single-slice computed tomography (CT) imaging has been used repeatedly for body composition analysis. However, this approach introduces significant spatial variability that can impact the accuracy and robustness of the analysis. To mitigate this issue and facilitate body composition analysis, this paper presents a novel method to generate 3D CT volumes from a limited number of 2D slices using a latent diffusion model (LDM). Our approach first maps 2D slices into a latent representation space using a variational autoencoder. An LDM is then trained to capture the 3D context of a stack of these latent representations. To accurately interpolate intermediate slices and construct a full 3D volume, we utilize body part regression to determine the spatial location of and distance between the acquired slices. Experiments on both in-house and public 3D abdominal CT datasets demonstrate that the proposed method significantly enhances body composition analysis compared to traditional 2D-based analysis, reducing the error rate from 23.3% to 15.2%.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
526,544
1911.00760
GRAPHENE: A Precise Biomedical Literature Retrieval Engine with Graph Augmented Deep Learning and External Knowledge Empowerment
Effective biomedical literature retrieval (BLR) plays a central role in precision medicine informatics. In this paper, we propose GRAPHENE, a deep learning based framework for precise BLR. GRAPHENE consists of three main modules: 1) graph-augmented document representation learning; 2) query expansion and representation learning; and 3) learning to rank biomedical articles. The graph-augmented document representation learning module constructs a document-concept graph containing biomedical concept nodes and document nodes so that global biomedical concepts from external knowledge sources can be captured; this graph is further connected to a BiLSTM so that both local and global topics can be explored. The query expansion and representation learning module expands the query with abbreviations and alternative names, and then builds a CNN-based model to convolve the expanded query and obtain a vector representation for each query. The learning-to-rank module minimizes a ranking loss between biomedical articles and the query to learn the retrieval function. Experimental results from applying our system to TREC Precision Medicine track data demonstrate its effectiveness.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
true
151,907
1812.06023
Advanced Super-Resolution using Lossless Pooling Convolutional Networks
In this paper, we present a novel deep learning-based approach for still-image super-resolution that, unlike mainstream models, does not rely solely on the input low-resolution image for high-quality upsampling. Instead, it takes advantage of a set of artificially created auxiliary self-replicas of the input image that are incorporated into the neural network to create an enhanced and accurate upscaling scheme. The inclusion of the proposed lossless pooling layers and the fusion of the input self-replicas enable the model to exploit the high correlation between multiple instances of the same content, and eventually result in significant improvements in the quality of the super-resolution, which is confirmed by extensive evaluations.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
116,524
2112.11975
Page Segmentation using Visual Adjacency Analysis
Page segmentation is a web page analysis process that divides a page into cohesive segments, such as sidebars, headers, and footers. Current page segmentation approaches use either the DOM, textual content, or rendering style information of the page. However, these approaches have a number of drawbacks, such as a large number of parameters and rigid assumptions about the page, which negatively impact their segmentation accuracy. We propose a novel page segmentation approach based on visual analysis of localized adjacency regions. It combines DOM attributes and visual analysis to build features of a given page and guide an unsupervised clustering. We evaluate our approach on 35 real-world web pages, and examine the effectiveness and efficiency of segmentation. The results show that, compared with state-of-the-art, our approach achieves an average of 156% increase in precision and 249% improvement in F-measure.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
272,846
2002.00781
Towards an Operational Definition of Group Network Codes
Group network codes are a generalization of linear codes that have seen several studies over the last decade. When studying network codes, operations performed at internal network nodes called local encoding functions, are of significant interest. While local encoding functions of linear codes are well understood (and of operational significance), no similar operational definition exists for group network codes. To bridge this gap, we study the connections between group network codes and a family of codes called Coordinate-Wise-Linear (CWL) codes. CWL codes generalize linear codes and, in addition, can be defined locally (i.e., operationally). In this work, we study the connection between CWL codes and group codes from both a local and global encoding perspective. We show that Abelian group codes can be expressed as CWL codes and, as a result, they inherit an operational definition.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
162,466
1804.02419
Image Segmentation Using Subspace Representation and Sparse Decomposition
Image foreground extraction is a classical problem in image processing and vision, with a wide range of applications. In this dissertation, we focus on the extraction of text and graphics in mixed-content images, and design novel approaches for various aspects of this problem. We first propose a sparse decomposition framework, which models the background by a subspace containing smooth basis vectors, and the foreground as a sparse and connected component. We then formulate an optimization framework to solve this problem, by adding suitable regularizations to the cost function to promote the desired characteristics of each component. We present two techniques to solve the proposed optimization problem, one based on the alternating direction method of multipliers (ADMM), and the other based on robust regression. Promising results are obtained for screen content image segmentation using the proposed algorithm. We then propose a robust subspace learning algorithm for the representation of the background component using training images that could contain both background and foreground components, as well as noise. With the learnt subspace for the background, we can further improve the segmentation results, compared to using a fixed subspace. Lastly, we investigate a different class of signal/image decomposition problems, where only one signal component is active at each signal element. In this case, besides estimating each component, we need to find their supports, which can be specified by a binary mask. We propose a mixed-integer programming problem that jointly estimates the two components and their supports through an alternating optimization scheme. We show the application of this algorithm to various problems, including image segmentation, video motion segmentation, and separation of text from textured images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,397
2205.12215
DivEMT: Neural Machine Translation Post-Editing Effort Across Typologically Diverse Languages
We introduce DivEMT, the first publicly available post-editing study of Neural Machine Translation (NMT) over a typologically diverse set of target languages. Using a strictly controlled setup, 18 professional translators were instructed to translate or post-edit the same set of English documents into Arabic, Dutch, Italian, Turkish, Ukrainian, and Vietnamese. During the process, their edits, keystrokes, editing times and pauses were recorded, enabling an in-depth, cross-lingual evaluation of NMT quality and post-editing effectiveness. Using this new dataset, we assess the impact of two state-of-the-art NMT systems, Google Translate and the multilingual mBART-50 model, on translation productivity. We find that post-editing is consistently faster than translation from scratch. However, the magnitude of productivity gains varies widely across systems and languages, highlighting major disparities in post-editing effectiveness for languages at different degrees of typological relatedness to English, even when controlling for system architecture and training data size. We publicly release the complete dataset including all collected behavioral data, to foster new research on the translation capabilities of NMT systems for typologically diverse languages.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
298,444
2009.08322
Moving with the Times: Investigating the Alt-Right Network Gab with Temporal Interaction Graphs
Gab is an online social network often associated with the alt-right political movement and users barred from other networks. It presents an interesting opportunity for research because near-complete data is available from day one of the network's creation. In this paper, we investigate the evolution of the user interaction graph, that is the graph where a link represents a user interacting with another user at a given time. We view this graph both at different times and at different timescales. The latter is achieved by using sliding windows on the graph which gives a novel perspective on social network data. The Gab network is relatively slowly growing over the period of months but subject to large bursts of arrivals over hours and days. We identify plausible events that are of interest to the Gab community associated with the most obvious such bursts. The network is characterised by interactions between `strangers' rather than by reinforcing links between `friends'. Gab usage follows the diurnal cycle of the predominantly US and Europe based users. At off-peak hours the Gab interaction network fragments into sub-networks with absolutely no interaction between them. A small group of users are highly influential across larger timescales, but a substantial number of users gain influence for short periods of time. Temporal analysis at different timescales gives new insights above and beyond what could be found on static graphs.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
196,206
2006.04421
Cyber-Physical Control of Indoor Multi-vehicle Testbed for Cooperative Driving
The system of connected vehicle-to-vehicle and vehicle-to-infrastructure communication can be considered a wireless cyber-physical system of systems (Wireless CPSoS), which provides a high capability for adaptive control of systems of systems, cooperative scenarios for controlling a Wireless CPSoS, and an adaptive wireless networked control system (WNCS). In this paper we present our multi-vehicle testbed based on a cyber-physical system, designed for verification and validation of cooperative driving algorithms involving WNCS testing. The vehicles were developed as physical prototypes equipped with a Raspberry Pi microprocessor and other sensing elements. This testbed consists of a fleet of 4 robot vehicles. An indoor positioning system (IPS) based on a particle filter is proposed, using an inertial measurement unit (IMU) and iBeacon, which is built upon Bluetooth Low Energy. Some typical cooperative driving scenarios can be implemented on this testbed in an indoor laboratory. The control method used is Model Predictive Control (MPC) with a state observer based on a Kalman Filter (KF), because wireless control systems can be severely affected by the imperfections of the wireless communication link. Our experimental testbed paves the way for testing and evaluating more intelligent cooperative driving scenarios with the use of new wireless technologies and control systems in the future.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
180,687
2409.14755
BranchPoseNet: Characterizing tree branching with a deep learning-based pose estimation approach
This paper presents an automated pipeline for detecting tree whorls in proximal laser scanning data using a pose-estimation deep learning model. Accurate whorl detection provides valuable insights into tree growth patterns and wood quality, and offers potential for use as a biometric marker to track trees throughout the forestry value chain. The workflow processes point cloud data to create sectional images, which are subsequently used to identify keypoints representing tree whorls and branches along the stem. The method was tested on a dataset of destructively sampled individual trees, where the whorls were located along the stems of felled trees. The results demonstrated strong potential, with accurate identification of tree whorls and precise calculation of key structural metrics, unlocking new insights and deeper levels of information from individual tree point clouds.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
490,618
2104.02045
A robust extended Kalman filter for power system dynamic state estimation using PMU measurements
This paper develops a robust extended Kalman filter to estimate the rotor angles and rotor speeds of the synchronous generators of a multimachine power system. Using a batch-mode regression form, the filter processes the predicted state vector and PMU measurements together to track the system dynamics faster than the standard extended Kalman filter. Our proposed filter is based on a robust GM-estimator that bounds the influence of vertical outliers and bad leverage points, which are identified by means of projection statistics. Good statistical efficiency under the Gaussian distribution assumption on the process and observation noise is achieved thanks to the use of the Huber cost function, which is minimized via the iteratively reweighted least squares algorithm. The asymptotic covariance matrix of the state estimation error vector is derived via the covariance matrix of the total influence function of the GM-estimator. Simulations carried out on the IEEE 39-bus test system reveal that our robust extended Kalman filter exhibits good tracking capabilities under Gaussian process and observation noise while suppressing observation outliers, even in positions of leverage. These good performances are obtained only under the validity of the linear approximation of the power system model.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
228,571
2410.16512
TIPS: Text-Image Pretraining with Spatial Awareness
While image-text representation learning has become very popular in recent years, existing models tend to lack spatial awareness and have limited direct applicability for dense understanding tasks. For this reason, self-supervised image-only pretraining is still the go-to method for many dense vision applications (e.g. depth estimation, semantic segmentation), despite the lack of explicit supervisory signals. In this paper, we close this gap between image-text and self-supervised learning, by proposing a novel general-purpose image-text model, which can be effectively used off-the-shelf for dense and global vision tasks. Our method, which we refer to as Text-Image Pretraining with Spatial awareness (TIPS), leverages two simple and effective insights. First, on textual supervision: we reveal that replacing noisy web image captions by synthetically generated textual descriptions boosts dense understanding performance significantly, due to a much richer signal for learning spatially aware representations. We propose an adapted training method that combines noisy and synthetic captions, resulting in improvements across both dense and global understanding tasks. Second, on the learning technique: we propose to combine contrastive image-text learning with self-supervised masked image modeling, to encourage spatial coherence, unlocking substantial enhancements for downstream applications. Building on these two ideas, we scale our model using the transformer architecture, trained on a curated set of public images. Our experiments are conducted on 8 tasks involving 16 datasets in total, demonstrating strong off-the-shelf performance on both dense and global understanding, for several image-only and image-text tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
501,060
1903.10676
SciBERT: A Pretrained Language Model for Scientific Text
Obtaining large-scale annotated data for NLP tasks in the scientific domain is challenging and expensive. We release SciBERT, a pretrained language model based on BERT (Devlin et al., 2018) to address the lack of high-quality, large-scale labeled scientific data. SciBERT leverages unsupervised pretraining on a large multi-domain corpus of scientific publications to improve performance on downstream scientific NLP tasks. We evaluate on a suite of tasks including sequence tagging, sentence classification and dependency parsing, with datasets from a variety of scientific domains. We demonstrate statistically significant improvements over BERT and achieve new state-of-the-art results on several of these tasks. The code and pretrained models are available at https://github.com/allenai/scibert/.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
125,337
2311.10811
A novel post-hoc explanation comparison metric and applications
Explanatory systems make the behavior of machine learning models more transparent, but are often inconsistent. To quantify the differences between explanatory systems, this paper presents the Shreyan Distance, a novel metric based on the weighted difference between ranked feature importance lists produced by such systems. This paper uses the Shreyan Distance to compare two explanatory systems, SHAP and LIME, for both regression and classification learning tasks. Because we find that the average Shreyan Distance varies significantly between these two tasks, we conclude that consistency between explainers not only depends on inherent properties of the explainers themselves, but also the type of learning task. This paper further contributes the XAISuite library, which integrates the Shreyan distance algorithm into machine learning pipelines.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
408,681
cs/0603053
Automatic generation of simplified weakest preconditions for integrity constraint verification
Given a constraint $c$ assumed to hold on a database $B$ and an update $u$ to be performed on $B$, we address the following question: will $c$ still hold after $u$ is performed? When $B$ is a relational database, we define a confluent terminating rewriting system which, starting from $c$ and $u$, automatically derives a simplified weakest precondition $wp(c,u)$ such that, whenever $B$ satisfies $wp(c,u)$, then the updated database $u(B)$ will satisfy $c$, and moreover $wp(c,u)$ is simplified in the sense that its computation depends only upon the instances of $c$ that may be modified by the update. We then extend the definition of a simplified $wp(c,u)$ to the case of deductive databases; we prove it using fixpoint induction.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
true
539,328
1802.03943
Temporal and volumetric denoising via quantile sparse image prior
This paper introduces a universal and structure-preserving regularization term, called the quantile sparse image (QuaSI) prior. The prior is suitable for denoising images from various medical imaging modalities. We demonstrate its effectiveness on volumetric optical coherence tomography (OCT) and computed tomography (CT) data, which show different noise and image characteristics. OCT offers high-resolution scans of the human retina but is inherently impaired by speckle noise. CT, on the other hand, has a lower resolution and shows high-frequency noise. For the purpose of denoising, we propose a variational framework based on the QuaSI prior and a Huber data fidelity model that can handle 3-D and 3-D+t data. Efficient optimization is facilitated through the use of an alternating direction method of multipliers (ADMM) scheme and the linearization of the quantile filter. Experiments on multiple datasets emphasize the excellent performance of the proposed method.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
90,108
2006.09610
Canonicalizing Open Knowledge Bases with Multi-Layered Meta-Graph Neural Network
Noun phrases and relational phrases in Open Knowledge Bases are often not canonical, leading to redundant and ambiguous facts. In this work, we integrate structural information (from which tuple, which sentence) and semantic information (semantic similarity) to perform canonicalization. We represent the two types of information as a multi-layered graph: the structural information forms the links across the sentence, relational phrase, and noun phrase layers; the semantic information forms weighted intra-layer links for each layer. We propose a graph neural network model to aggregate the representations of noun phrases and relational phrases through the multi-layered meta-graph structure. Experiments show that our model outperforms existing approaches on public datasets in the general domain.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
182,596
1605.06431
Residual Networks Behave Like Ensembles of Relatively Shallow Networks
In this work we propose a novel interpretation of residual networks showing that they can be seen as a collection of many paths of differing length. Moreover, residual networks seem to enable very deep networks by leveraging only the short paths during training. To support this observation, we rewrite residual networks as an explicit collection of paths. Unlike traditional models, paths through residual networks vary in length. Further, a lesion study reveals that these paths show ensemble-like behavior in the sense that they do not strongly depend on each other. Finally, and most surprising, most paths are shorter than one might expect, and only the short paths are needed during training, as longer paths do not contribute any gradient. For example, most of the gradient in a residual network with 110 layers comes from paths that are only 10-34 layers deep. Our results reveal one of the key characteristics that seem to enable the training of very deep networks: Residual networks avoid the vanishing gradient problem by introducing short paths which can carry gradient throughout the extent of very deep networks.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
true
false
false
56,136
2311.11194
Testing with Non-identically Distributed Samples
We examine the extent to which sublinear-sample property testing and estimation applies to settings where samples are independently but not identically distributed. Specifically, we consider the following distributional property testing framework: Suppose there is a set of distributions over a discrete support of size $k$, $\textbf{p}_1, \textbf{p}_2,\ldots,\textbf{p}_T$, and we obtain $c$ independent draws from each distribution. Suppose the goal is to learn or test a property of the average distribution, $\textbf{p}_{\mathrm{avg}}$. This setup models a number of important practical settings where the individual distributions correspond to heterogeneous entities -- either individuals, chronologically distinct time periods, spatially separated data sources, etc. From a learning standpoint, even with $c=1$ samples from each distribution, $\Theta(k/\varepsilon^2)$ samples are necessary and sufficient to learn $\textbf{p}_{\mathrm{avg}}$ to within error $\varepsilon$ in TV distance. To test uniformity or identity -- distinguishing the case that $\textbf{p}_{\mathrm{avg}}$ is equal to some reference distribution, versus has $\ell_1$ distance at least $\varepsilon$ from the reference distribution, we show that a linear number of samples in $k$ is necessary given $c=1$ samples from each distribution. In contrast, for $c \ge 2$, we recover the usual sublinear sample testing of the i.i.d. setting: we show that $O(\sqrt{k}/\varepsilon^2 + 1/\varepsilon^4)$ samples are sufficient, matching the optimal sample complexity in the i.i.d. case in the regime where $\varepsilon \ge k^{-1/4}$. Additionally, we show that in the $c=2$ case, there is a constant $\rho > 0$ such that even in the linear regime with $\rho k$ samples, no tester that considers the multiset of samples (ignoring which samples were drawn from the same $\textbf{p}_i$) can perform uniformity testing.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
true
408,841
2211.09081
Secure SWIPT in the Multiuser STAR-RIS Aided MISO Rate Splitting Downlink
Recently, simultaneously transmitting and reflecting reconfigurable intelligent surfaces (STAR-RISs) have emerged as a novel technology that provides 360° coverage and new degrees-of-freedom (DoFs). They are also capable of manipulating signal propagation and simultaneous wireless information and power transfer (SWIPT). This paper introduces a novel STAR-RIS-aided secure SWIPT system for downlink multiple input single output rate-splitting multiple access (RSMA) networks. The transmitter concurrently communicates with the information receivers (IRs) and sends energy to untrusted energy receivers (UERs). The UERs are also capable of wiretapping the IR streams. We assume that the channel state information (CSI) of the IRs is known at the information transmitter, but only imperfect CSI for the UERs is available at the energy transmitter. By exploiting RSMA, the base station splits the messages of the IRs into common and private parts. The former is encoded into a common stream that can be decoded by all IRs, while the private messages are individually decoded by their respective IRs. We find the precoders and STAR-RIS configuration that maximize the achievable worst-case sum secrecy rate of the IRs under a total transmit power constraint, a sum energy constraint for the UERs, and subject to constraints on the transmission and reflection coefficients. The formulated problem is non-convex and has intricately coupled variables. To tackle this challenge, a suboptimal two-step iterative algorithm based on the sequential parametric convex approximation method is proposed. Simulations demonstrate that the RSMA-based algorithm implemented with a STAR-RIS enhances both the rate of confidential information transmission and the total spectral efficiency. Furthermore, our method surpasses the performance of both orthogonal multiple access (OMA) and non-OMA (NOMA).
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
330,867
2406.16740
Learning the boundary-to-domain mapping using Lifting Product Fourier Neural Operators for partial differential equations
Neural operators such as the Fourier Neural Operator (FNO) have been shown to provide resolution-independent deep learning models that can learn mappings between function spaces. For example, an initial condition can be mapped to the solution of a partial differential equation (PDE) at a future time-step using a neural operator. Despite the popularity of neural operators, their use to predict solution functions over a domain given only data over the boundary (such as a spatially varying Dirichlet boundary condition) remains unexplored. In this paper, we refer to such problems as boundary-to-domain problems; they have a wide range of applications in areas such as fluid mechanics, solid mechanics, heat transfer etc. We present a novel FNO-based architecture, named Lifting Product FNO (or LP-FNO) which can map arbitrary boundary functions defined on the lower-dimensional boundary to a solution in the entire domain. Specifically, two FNOs defined on the lower-dimensional boundary are lifted into the higher dimensional domain using our proposed lifting product layer. We demonstrate the efficacy and resolution independence of the proposed LP-FNO for the 2D Poisson equation.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
467,252
2004.09846
SIBRE: Self Improvement Based REwards for Adaptive Feedback in Reinforcement Learning
We propose a generic reward shaping approach for improving the rate of convergence in reinforcement learning (RL), called Self Improvement Based REwards, or SIBRE. The approach is designed for use in conjunction with any existing RL algorithm, and consists of rewarding improvement over the agent's own past performance. We prove that SIBRE converges in expectation under the same conditions as the original RL algorithm. The reshaped rewards help discriminate between policies when the original rewards are weakly discriminated or sparse. Experiments on several well-known benchmark environments with different RL algorithms show that SIBRE converges to the optimal policy faster and more stably. We also perform sensitivity analysis with respect to hyper-parameters, in comparison with baseline RL algorithms.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
173,475
2411.14365
Formal Simulation and Visualisation of Hybrid Programs
The design and analysis of systems that combine computational behaviour with physical processes' continuous dynamics - such as movement, velocity, and voltage - is a famously challenging task. Several theoretical results from programming theory emerged in the last decades to tackle the issue, some of which form the basis of a proof-of-concept tool, called Lince, that aids in the analysis of such systems by presenting simulations of their respective behaviours. However, being a proof-of-concept, the tool is quite limited with respect to usability, and when attempting to apply it to a set of common, concrete problems, involving autonomous driving and others, it either simply cannot simulate them or fails to provide a satisfactory user-experience. The current work complements the aforementioned theoretical approaches with a more practical perspective, by improving Lince along several dimensions: to name a few, richer syntactic constructs, more operations, more informative plotting systems and error messages, and better overall performance. We illustrate our improvements via a variety of examples that involve both autonomous driving and electrical systems.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
510,126
1505.04342
Sifting Robotic from Organic Text: A Natural Language Approach for Detecting Automation on Twitter
Twitter, a popular social media outlet, has evolved into a vast source of linguistic data, rich with opinion, sentiment, and discussion. Due to the increasing popularity of Twitter, its perceived potential for exerting social influence has led to the rise of a diverse community of automatons, commonly referred to as bots. These inorganic and semi-organic Twitter entities can range from the benevolent (e.g., weather-update bots, help-wanted-alert bots) to the malevolent (e.g., spamming messages, advertisements, or radical opinions). Existing detection algorithms typically leverage meta-data (time between tweets, number of followers, etc.) to identify robotic accounts. Here, we present a powerful classification scheme that exclusively uses the natural language text from organic users to provide a criterion for identifying accounts posting automated messages. Since the classifier operates on text alone, it is flexible and may be applied to any textual data beyond the Twitter-sphere.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
43,174
2409.04743
GRVFL-MV: Graph Random Vector Functional Link Based on Multi-View Learning
The classification performance of the random vector functional link (RVFL), a randomized neural network, has been widely acknowledged. However, due to its shallow learning nature, RVFL often fails to consider all the relevant information available in a dataset. Additionally, it overlooks the geometrical properties of the dataset. To address these limitations, a novel graph random vector functional link based on multi-view learning (GRVFL-MV) model is proposed. The proposed model is trained on multiple views, incorporating the concept of multi-view learning (MVL), and it also incorporates the geometrical properties of all the views using the graph embedding (GE) framework. The fusion of RVFL networks, MVL, and the GE framework enables our proposed model to achieve the following: i) efficient learning: by leveraging the topology of RVFL, our proposed model can efficiently capture nonlinear relationships within the multi-view data, facilitating efficient and accurate predictions; ii) comprehensive representation: fusing information from diverse perspectives enhances the proposed model's ability to capture complex patterns and relationships within the data, thereby improving the model's overall generalization performance; and iii) structural awareness: by employing the GE framework, our proposed model leverages the original data distribution of the dataset by naturally exploiting both intrinsic and penalty subspace learning criteria. The evaluation of the proposed GRVFL-MV model on various datasets, including 27 UCI and KEEL datasets, 50 datasets from Corel5k, and 45 datasets from AwA, demonstrates its superior performance compared to baseline models. These results highlight the enhanced generalization capabilities of the proposed GRVFL-MV model across a diverse range of datasets.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
486,489
2303.16386
Quantifying VIO Uncertainty
We compute the uncertainty of XIVO, a monocular visual-inertial odometry system based on the Extended Kalman Filter, in the presence of Gaussian noise, drift, and attribution errors in the feature tracks in addition to Gaussian noise and drift in the IMU. Uncertainty is computed using Monte-Carlo simulations of a sufficiently exciting trajectory in the midst of a point cloud that bypass the typical image processing and feature tracking steps. We find that attribution errors have the largest detrimental effect on performance. Even with just small amounts of Gaussian noise and/or drift, however, the probability that XIVO's performance resembles the mean performance when noise and/or drift is artificially high is greater than 1 in 100.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
354,837
2112.14278
Beta-VAE Reproducibility: Challenges and Extensions
$\beta$-VAE is a follow-up technique to variational autoencoders that proposes special weighting of the KL divergence term in the VAE loss to obtain disentangled representations. Unsupervised learning is known to be brittle even on toy datasets and a meaningful, mathematically precise definition of disentanglement remains difficult to find. Here we investigate the original $\beta$-VAE paper and add evidence to the results previously obtained indicating its lack of reproducibility. We also further expand the experimentation of the models and include further more complex datasets in the analysis. We also implement an FID scoring metric for the $\beta$-VAE model and conduct a qualitative analysis of the results obtained. We end with a brief discussion on possible future investigations that can be conducted to add more robustness to the claims.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
273,480
1005.0961
Performance Oriented Query Processing In GEO Based Location Search Engines
Geographic location search engines allow users to constrain and order search results in an intuitive manner by focusing a query on a particular geographic region. Geographic search technology, also called location search, has recently received significant interest from major search engine companies. Academic research in this area has focused primarily on techniques for extracting geographic knowledge from the web. In this paper, we study the problem of efficient query processing in scalable geographic search engines. Query processing is a major bottleneck in standard web search engines, and the main reason for the thousands of machines used by the major engines. Geographic search engine query processing is different in that it requires a combination of text and spatial data processing techniques. We propose several algorithms for efficient query processing in geographic search engines, integrate them into an existing web search query processor, and evaluate them on large sets of real data and query traces.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
6,423
2408.07244
Sign language recognition based on deep learning and low-cost handcrafted descriptors
In recent years, deep learning techniques have been used to develop sign language recognition systems, potentially serving as a communication tool for millions of hearing-impaired individuals worldwide. However, there are inherent challenges in creating such systems. Firstly, it is important to consider as many linguistic parameters as possible in gesture execution to avoid ambiguity between words. Moreover, to facilitate the real-world adoption of the created solution, it is essential to ensure that the chosen technology is realistic, avoiding expensive, intrusive, or low-mobility sensors, as well as very complex deep learning architectures that impose high computational requirements. Based on this, our work aims to propose an efficient sign language recognition system that utilizes low-cost sensors and techniques. To this end, an object detection model was trained specifically for detecting the interpreter's face and hands, ensuring focus on the most relevant regions of the image and generating inputs with higher semantic value for the classifier. Additionally, we introduced a novel approach to obtain features representing hand location and movement by leveraging spatial information derived from centroid positions of bounding boxes, thereby enhancing sign discrimination. The results demonstrate the efficiency of our handcrafted features, increasing accuracy by 7.96% on the AUTSL dataset, while adding fewer than 700 thousand parameters and incurring less than 10 milliseconds of additional inference time. These findings highlight the potential of our technique to strike a favorable balance between computational cost and accuracy, making it a promising approach for practical sign language recognition applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
480,507
1308.6487
A New Algorithm of Speckle Filtering using Stochastic Distances
This paper presents a new approach for filter design based on stochastic distances and tests between distributions. A window is defined around each pixel, overlapping samples are compared and only those which pass a goodness-of-fit test are used to compute the filtered value. The technique is applied to intensity SAR data with homogeneous regions using the Gamma model. The proposal is compared with Lee's filter using a protocol based on Monte Carlo. Among the criteria used to quantify the quality of filters, we employ the equivalent number of looks, line and edge preservation. Moreover, we also assessed the filters by the Universal Image Quality Index and the Pearson's correlation on edge regions.
false
false
false
false
false
false
false
false
false
true
false
true
false
false
false
false
false
true
26,718
2411.18905
FedRGL: Robust Federated Graph Learning for Label Noise
Federated Graph Learning (FGL) is a distributed machine learning paradigm based on graph neural networks, enabling secure and collaborative modeling of local graph data among clients. However, label noise can degrade the global model's generalization performance. Existing federated label noise learning methods, primarily focused on computer vision, often yield suboptimal results when applied to FGL. To address this, we propose a robust federated graph learning method with label noise, termed FedRGL. FedRGL introduces dual-perspective consistency noise node filtering, leveraging both the global model and subgraph structure under class-aware dynamic thresholds. To enhance client-side training, we incorporate graph contrastive learning, which improves encoder robustness and assigns high-confidence pseudo-labels to noisy nodes. Additionally, we measure model quality via predictive entropy of unlabeled nodes, enabling adaptive robust aggregation of the global model. Comparative experiments on multiple real-world graph datasets show that FedRGL outperforms 12 baseline methods across various noise rates, types, and numbers of clients.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
512,047
1906.07492
Chemotaxis Based Virtual Fence for Swarm Robots in Unbounded Environments
This paper presents a novel swarm robotics application of chemotaxis behaviour observed in microorganisms. This approach was used to cause exploration robots to return to a work area around the swarm's nest within a boundless environment. We investigate the performance of our algorithm through extensive simulation studies and hardware validation. Results show that the chemotaxis approach is effective for keeping the swarm close to both stationary and moving nests. Performance comparison of these results with the unrealistic case where a boundary wall was used to keep the swarm within a target search area showed that our chemotaxis approach produced competitive results.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
135,608
2005.09704
Contextual Residual Aggregation for Ultra High-Resolution Image Inpainting
Recently, data-driven image inpainting methods have made inspiring progress, impacting fundamental image editing tasks such as object removal and damaged image repairing. These methods are more effective than classic approaches, however, due to memory limitations they can only handle low-resolution inputs, typically smaller than 1K. Meanwhile, the resolution of photos captured with mobile devices increases up to 8K. Naive up-sampling of the low-resolution inpainted result can merely yield a large yet blurry result. In contrast, adding a high-frequency residual image onto the large blurry image can generate a sharp result, rich in details and textures. Motivated by this, we propose a Contextual Residual Aggregation (CRA) mechanism that can produce high-frequency residuals for missing contents by weighted aggregating residuals from contextual patches, thus only requiring a low-resolution prediction from the network. Since convolutional layers of the neural network only need to operate on low-resolution inputs and outputs, the cost of memory and computing power is thus well suppressed. Moreover, the need for high-resolution training datasets is alleviated. In our experiments, we train the proposed model on small images with resolutions 512x512 and perform inference on high-resolution images, achieving compelling inpainting quality. Our model can inpaint images as large as 8K with considerable hole sizes, which is intractable with previous learning-based approaches. We further elaborate on the light-weight design of the network architecture, achieving real-time performance on 2K images on a GTX 1080 Ti GPU. Codes are available at: Atlas200dk/sample-imageinpainting-HiFill.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
177,983
1706.06279
Short-Term Forecasting of Passenger Demand under On-Demand Ride Services: A Spatio-Temporal Deep Learning Approach
Short-term passenger demand forecasting is of great importance to the on-demand ride service platform, which can incentivize vacant cars moving from over-supply regions to over-demand regions. The spatial dependences, temporal dependences, and exogenous dependences need to be considered simultaneously, however, which makes short-term passenger demand forecasting challenging. We propose a novel deep learning (DL) approach, named the fusion convolutional long short-term memory network (FCL-Net), to address these three dependences within one end-to-end learning architecture. The model is stacked and fused by multiple convolutional long short-term memory (LSTM) layers, standard LSTM layers, and convolutional layers. The fusion of convolutional techniques and the LSTM network enables the proposed DL approach to better capture the spatio-temporal characteristics and correlations of explanatory variables. A tailored spatially aggregated random forest is employed to rank the importance of the explanatory variables. The ranking is then used for feature selection. The proposed DL approach is applied to the short-term forecasting of passenger demand under an on-demand ride service platform in Hangzhou, China. Experimental results, validated on real-world data provided by DiDi Chuxing, show that the FCL-Net achieves better predictive performance than traditional approaches including both classical time-series prediction models and neural network based algorithms (e.g., artificial neural network and LSTM). This paper is one of the first DL studies to forecast the short-term passenger demand of an on-demand ride service platform by examining the spatio-temporal correlations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
75,655
2407.02648
STRIDE: An Open-Source, Low-Cost, and Versatile Bipedal Robot Platform for Research and Education
In this paper, we present STRIDE, a Simple, Terrestrial, Reconfigurable, Intelligent, Dynamic, and Educational bipedal platform. STRIDE aims to propel bipedal robotics research and education by providing a cost-effective implementation with step-by-step instructions for building a bipedal robotic platform while providing flexible customizations via a modular and durable design. Moreover, a versatile terrain setup and a quantitative disturbance injection system are augmented to the robot platform to replicate natural terrains and push forces that can be used to evaluate legged locomotion in practical and adversarial scenarios. We demonstrate the functionalities of this platform by realizing an adaptive step-to-step dynamics based walking controller to achieve dynamic walking. Our work with the open-sourced implementation shows that STRIDE is a highly versatile and durable platform that can be used in research and education to evaluate locomotion algorithms, mechanical designs, and robust and adaptive controls.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
469,819
1404.1100
A Tutorial on Principal Component Analysis
Principal component analysis (PCA) is a mainstay of modern data analysis - a black box that is widely used but (sometimes) poorly understood. The goal of this paper is to dispel the magic behind this black box. This manuscript focuses on building a solid intuition for how and why principal component analysis works. This manuscript crystallizes this knowledge by deriving from simple intuitions, the mathematics behind PCA. This tutorial does not shy away from explaining the ideas informally, nor does it shy away from the mathematics. The hope is that by addressing both aspects, readers of all levels will be able to gain a better understanding of PCA as well as the when, the how and the why of applying this technique.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
32,076
2006.01216
Crowd simulation for crisis management: the outcomes of the last decade
Over the last few decades, crowd simulation for crisis management has been highlighted as an important topic of interest for many scientific fields. As computational resources continue to evolve, along with the capabilities of Artificial Intelligence, the demand for better and more realistic simulation has become more attractive and popular to scientists. Over those years, hundreds of research articles have been published and numerous different systems have been created that aim to simulate crowd behaviors, crisis cases and emergency evacuation scenarios. For better outcomes, recent research has focused on separating the problem of crisis management into multiple research sub-fields (categories), such as the navigation of the simulated pedestrians, their psychology, the group dynamics etc. There have been extended research works suggesting new methods and techniques for those categories of problems. In this paper, we propose three main research categories, each consisting of several sub-categories, relying on crowd simulation for crisis management aspects, and we present the outcomes of the last decade, focusing mostly on works exploiting multi-agent technologies. We analyze a number of technologies, methodologies, techniques, tools and systems introduced throughout the last years. A comparative review and discussion of the proposed categories is presented towards the identification of their most efficient aspects. A general framework towards future crowd simulation for crisis management is presented, based on the most efficient aspects, to yield the most realistic outcomes. The paper is concluded with some highlights and open questions for future directions.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
179,706
2408.12890
Multiple Areal Feature Aware Transportation Demand Prediction
A reliable short-term transportation demand prediction supports the authorities in improving the capability of systems by optimizing schedules, adjusting fleet sizes, and generating new transit networks. A handful of research efforts incorporate one or a few areal features while learning spatio-temporal correlation, to capture similar demand patterns between similar areas. However, urban characteristics are polymorphic, and they need to be understood by multiple areal features such as land use, sociodemographics, and place-of-interest (POI) distribution. In this paper, we propose a novel spatio-temporal multi-feature-aware graph convolutional recurrent network (ST-MFGCRN) that fuses multiple areal features during spatio-temporal understanding. Inside ST-MFGCRN, we devise sentinel attention to calculate the areal similarity matrix by allowing each area to take partial attention if the feature is not useful. We evaluate the proposed model on two real-world transportation datasets, one with our constructed BusDJ dataset and one with the benchmark TaxiBJ. Results show that our model outperforms the state-of-the-art baselines up to 7\% on the BusDJ dataset and 8\% on the TaxiBJ dataset.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
482,932
2307.09141
Machine Learning for SAT: Restricted Heuristics and New Graph Representations
Boolean satisfiability (SAT) is a fundamental NP-complete problem with many applications, including automated planning and scheduling. To solve large instances, SAT solvers have to rely on heuristics, e.g., choosing a branching variable in DPLL and CDCL solvers. Such heuristics can be improved with machine learning (ML) models; they can reduce the number of steps but usually hinder the running time because useful models are relatively large and slow. We suggest the strategy of making a few initial steps with a trained ML model and then releasing control to classical heuristics; this simplifies cold start for SAT solving and can decrease both the number of steps and overall runtime, but requires a separate decision of when to release control to the solver. Moreover, we introduce a modification of Graph-Q-SAT tailored to SAT problems converted from other domains, e.g., open shop scheduling problems. We validate the feasibility of our approach with random and industrial SAT problems.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
380,066
2501.17749
Early External Safety Testing of OpenAI's o3-mini: Insights from the Pre-Deployment Evaluation
Large Language Models (LLMs) have become an integral part of our daily lives. However, they impose certain risks, including those that can harm individuals' privacy, perpetuate biases and spread misinformation. These risks highlight the need for robust safety mechanisms, ethical guidelines, and thorough testing to ensure their responsible deployment. Safety of LLMs is a key property that needs to be thoroughly tested before the model is deployed and accessible to the general users. This paper reports the external safety testing experience conducted by researchers from Mondragon University and University of Seville on OpenAI's new o3-mini LLM as part of OpenAI's early access for safety testing program. In particular, we apply our tool, ASTRAL, to automatically and systematically generate up-to-date unsafe test inputs (i.e., prompts) that help us test and assess different safety categories of LLMs. We automatically generate and execute a total of 10,080 unsafe test inputs on an early o3-mini beta version. After manually verifying the test cases classified as unsafe by ASTRAL, we identify a total of 87 actual instances of unsafe LLM behavior. We highlight key insights and findings uncovered during the pre-deployment external testing phase of OpenAI's latest LLM.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
528,439
2003.05056
Multi-level Context Gating of Embedded Collective Knowledge for Medical Image Segmentation
Medical image segmentation has been very challenging due to the large variation of anatomy across different cases. Recent advances in deep learning frameworks have exhibited faster and more accurate performance in image segmentation. Among the existing networks, U-Net has been successfully applied on medical image segmentation. In this paper, we propose an extension of U-Net for medical image segmentation, in which we take full advantage of U-Net, the Squeeze and Excitation (SE) block, bi-directional ConvLSTM (BConvLSTM), and the mechanism of dense convolutions. (I) We improve the segmentation performance by utilizing SE modules within the U-Net, with a minor effect on model complexity. These blocks adaptively recalibrate the channel-wise feature responses by utilizing a self-gating mechanism of the global information embedding of the feature maps. (II) To strengthen feature propagation and encourage feature reuse, we use densely connected convolutions in the last convolutional layer of the encoding path. (III) Instead of a simple concatenation in the skip connection of U-Net, we employ BConvLSTM in all levels of the network to combine the feature maps extracted from the corresponding encoding path and the previous decoding up-convolutional layer in a non-linear way. The proposed model is evaluated on six datasets DRIVE, ISIC 2017 and 2018, lung segmentation, $PH^2$, and cell nuclei segmentation, achieving state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
167,751
2311.12824
Comparative Analysis of Shear Strength Prediction Models for Reinforced Concrete Slab-Column Connections
This research aims at comparative analysis of shear strength prediction at the slab-column connection, unifying machine learning, design codes and Finite Element Analysis. Current design codes (CDCs) of ACI 318-19 (ACI), Eurocode 2 (EC2), the Compressive Force Path (CFP) method, a Feed Forward Neural Network (FNN) based Artificial Neural Network (ANN), PSO-based FNN (PSOFNN), and BAT algorithm-based BATFNN are used. The study is complemented with FEA of the slab for validating the experimental results and machine learning predictions. In the case of the hybrid models of PSOFNN and BATFNN, mean square error is used as an objective function to obtain the optimized values of the weights, which are used by the Feed Forward Neural Network to perform predictions on the slab data. Seven different models of PSOFNN, BATFNN, and FNN are trained on this data and the results exhibited that PSOFNN is the best model overall. PSOFNN has the best results for SCS=1 with the highest value of R as 99.37% and the lowest MSE and MAE values of 0.0275% and 1.214% respectively, which are better than the best FNN model for SCS=4 having the values of R, MSE, and MAE as 97.464%, 0.0492%, and 1.43%, respectively.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
409,497
2104.07295
Variational Co-embedding Learning for Attributed Network Clustering
Recent works for attributed network clustering utilize graph convolution to obtain node embeddings and simultaneously perform clustering assignments on the embedding space. This is effective since graph convolution combines the structural and attributive information for node embedding learning. However, a major limitation of such works is that the graph convolution only incorporates the attribute information from the local neighborhood of nodes but fails to exploit the mutual affinities between nodes and attributes. In this regard, we propose a variational co-embedding learning model for attributed network clustering (VCLANC). VCLANC is composed of dual variational auto-encoders that simultaneously embed nodes and attributes. Relying on this, the mutual affinity information between nodes and attributes can be reconstructed from the embedding space and serve as extra self-supervised knowledge for representation learning. At the same time, a trainable Gaussian mixture model is used as the prior to infer the node clustering assignments. To strengthen the performance of the inferred clusters, we use a mutual distance loss on the centers of the Gaussian priors and a clustering assignment hardening loss on the node embeddings. Experimental results on four real-world attributed network datasets demonstrate the effectiveness of the proposed VCLANC for attributed network clustering.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
230,371
1908.02745
Developing a Simple Model for Sand-Tool Interaction and Autonomously Shaping Sand
Autonomy for robots interacting with sand will enable a wide range of beneficial behaviors, from earth moving for construction and farming vehicles to navigating rough terrain for Mars rovers. The goal of this work is to shape sand into desired forms. Unlike other common autonomous tasks of achieving a desired state of a robot, achieving a desired shape of a continuously deformable environment like sand is a much more challenging task. The state of a robot can be described with a few values (x, y, z, roll, pitch, yaw), but the desired shape of sand cannot be described with just a few values. Sand is an aggregation of billions of small particles. After simplifying the model of sand-tool interaction by looking only at the surface of the heightmap, we can formulate the problem into something that is still high dimensional (hundreds to thousands of state dimensions) but much more solvable. We show how this problem can be formulated as a graph search problem and solve it with the A-star algorithm, and report preliminary results on using deep reinforcement learning methods such as Deep Q-Network and Deep Deterministic Policy Gradient.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
141,082
2108.00648
From LSAT: The Progress and Challenges of Complex Reasoning
Complex reasoning aims to draw a correct inference based on complex rules. As a hallmark of human intelligence, it involves a degree of explicit reading comprehension, interpretation of logical knowledge, and complex rule application. In this paper, we take a step forward in complex reasoning by systematically studying the three challenging and domain-general tasks of the Law School Admission Test (LSAT), including analytical reasoning, logical reasoning, and reading comprehension. We propose a hybrid reasoning system to integrate these three tasks and achieve impressive overall performance on the LSAT tests. The experimental results demonstrate that our system possesses a certain complex reasoning ability, especially the fundamental reading comprehension and challenging logical reasoning capacities. Further analysis also shows the effectiveness of combining pre-trained models with the task-specific reasoning module, and of integrating symbolic knowledge into discrete interpretable reasoning steps in complex reasoning. We further shed light on potential future directions, such as unsupervised symbolic knowledge extraction, model interpretability, few-shot learning, and a comprehensive benchmark for complex reasoning.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
248,787