Dataset schema (one value per column for each record):

id: string, 9 to 16 characters
title: string, 4 to 278 characters
abstract: string, 3 to 4.08k characters
cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other: bool, 2 classes each
__index_level_0__: int64, range 0 to 541k
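The storage format behind this dump is not shown, so the following is a minimal sketch under the assumption that each record is serialized as one JSON object per line, with one key per column; the helper names (load_records, active_labels) are hypothetical.

```python
import json

# Order of the 18 boolean label columns as they appear in the schema above.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG",
    "cs.RO", "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY",
    "cs.MA", "cs.NE", "cs.DB", "Other",
]

def active_labels(record):
    """Return the names of the label columns set to True in a record."""
    return [col for col in LABEL_COLUMNS if record.get(col) is True]

def load_records(lines):
    """Parse JSON-lines records and attach the list of active labels."""
    records = []
    for line in lines:
        rec = json.loads(line)
        rec["labels"] = active_labels(rec)
        records.append(rec)
    return records

# A miniature record mirroring the first row of this dump (abstract omitted).
sample = json.dumps({
    "id": "1902.09434",
    "title": "S-TRIGGER: Continual State Representation Learning "
             "via Self-Triggered Generative Replay",
    "cs.LG": True,
    "cs.HC": False,
    "__index_level_0__": 122411,
})

records = load_records([sample])
print(records[0]["labels"])  # prints ['cs.LG']
```

Because a single list fixes the column order, the extracted labels always come out in the schema's order regardless of key order in the serialized record.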
1902.09434
S-TRIGGER: Continual State Representation Learning via Self-Triggered Generative Replay
We consider the problem of building a state representation model for control, in a continual learning setting. As the environment changes, the aim is to efficiently compress the sensory state's information without losing past knowledge, and then use Reinforcement Learning on the resulting features for efficient policy learning. To this end, we propose S-TRIGGER, a general method for Continual State Representation Learning applicable to Variational Auto-Encoders and their many variants. The method is based on Generative Replay, i.e. the use of generated samples to maintain past knowledge. It comes along with a statistically sound method for environment change detection, which self-triggers the Generative Replay. Our experiments on VAEs show that S-TRIGGER learns state representations that allow fast and high-performing Reinforcement Learning, while avoiding catastrophic forgetting. The resulting system is capable of autonomously learning new information without using past data and with a bounded system size. Code for our experiments is attached in the Appendix.
labels: cs.LG
__index_level_0__: 122,411

2205.10291
Hyper-diffusion on multiplex networks
Multiplex networks describe systems whose interactions can be of different nature, and are fundamental to understand the complexity of networks beyond the framework of simple graphs. Recently it has been pointed out that restricting the attention to pairwise interactions is also a limitation, as the vast majority of complex systems include higher-order interactions that strongly affect their dynamics. Here, we propose hyper-diffusion on multiplex networks, a dynamical process in which diffusion on each single layer is coupled with the diffusion in other layers thanks to the presence of higher-order interactions occurring when there exists link overlap. We show that hyper-diffusion on a duplex network can be described by the Hyper-Laplacian in which the strength of four-body interactions among every set of four replica nodes connected by a multilink $(1,1)$ can be tuned by a parameter $\delta_{11}\ge 0$. The Hyper-Laplacian reduces to the standard lower Laplacian, capturing pairwise interactions at the two layers, when $\delta_{11}=0$. By combining tools of spectral graph theory, applied topology and network science we provide a general understanding of hyper-diffusion on duplex networks when $\delta_{11}>0$, including theoretical bounds on the Fiedler and the largest eigenvalue of Hyper-Laplacians and the asymptotic expansion of their spectrum for $\delta_{11}\ll1$ and $\delta_{11}\gg1$. Although hyper-diffusion on multiplex networks does not imply a direct "transfer of mass" among the layers (i.e. the average state of replica nodes in each layer is a conserved quantity of the dynamics), we find that the dynamics of the two layers is coupled as the relaxation to the steady state becomes synchronous when higher-order interactions are taken into account and the Fiedler eigenvalue of the Hyper-Laplacian is not localized in a single layer of the duplex network.
labels: cs.SI
__index_level_0__: 297,645

2012.07434
An Adaptive Memory Multi-Batch L-BFGS Algorithm for Neural Network Training
Motivated by the potential for parallel implementation of batch-based algorithms and the accelerated convergence achievable with approximated second order information, a limited memory version of the BFGS algorithm has been receiving increasing attention in recent years for large neural network training problems. As the shape of the cost function is generally not quadratic and only becomes approximately quadratic in the vicinity of a minimum, the use of second order information by L-BFGS can be unreliable during the initial phase of training, i.e. when far from a minimum. Therefore, to control the influence of second order information as training progresses, we propose a multi-batch L-BFGS algorithm, namely MB-AM, that gradually increases its trust in the curvature information by implementing a progressive storage and use of curvature data through a development-based increase (dev-increase) scheme. Using six discriminative modelling benchmark problems we show empirically that MB-AM has slightly faster convergence and, on average, achieves better solutions than the standard multi-batch L-BFGS algorithm when training MLP and CNN models.
labels: cs.LG
__index_level_0__: 211,453

1604.08789
Effective Backscatter Approximation for Photometry in Murky Water
Shading-based approaches like Photometric Stereo assume that the image formation model can be effectively optimized for the scene normals. However, in murky water this is a very challenging problem. The light from artificial sources is not only reflected by the scene but also scattered by the medium particles, yielding the backscatter component. Backscatter corresponds to a complex term with several unknown variables, and makes the problem of normal estimation hard. In this work, we show that instead of trying to optimize the complex backscatter model or using previous unrealistic simplifications, we can approximate the per-pixel backscatter signal directly from the captured images. Our method is based on the observation that backscatter is saturated beyond a certain distance, i.e. it becomes scene-depth independent, and finally corresponds to a smoothly varying signal which depends strongly on the light position with respect to each pixel. Our backscatter approximation method facilitates imaging and scene reconstruction in murky water when the illumination is artificial, as in Photometric Stereo. Specifically, we show that it allows accurate scene normal estimation and offers potential for applications like single image restoration. We evaluate our approach using numerical simulations and real experiments within both the controlled environment of a big water-tank and real murky port-waters.
labels: cs.CV
__index_level_0__: 55,259

1103.1359
An Analysis of Optimal Link Bombs
We analyze the phenomenon of collusion for the purpose of boosting the pagerank of a node in an interlinked environment. We investigate the optimal attack pattern for a group of nodes (attackers) attempting to improve the ranking of a specific node (the victim). We consider attacks where the attackers can only manipulate their own outgoing links. We show that the optimal attacks in this scenario are uncoordinated, i.e. the attackers link directly to the victim and to no one else; the attacking nodes do not link to each other. We also discuss optimal attack patterns for a group that wants to hide itself by not pointing directly to the victim. In these disguised attacks, the attackers link to nodes $l$ hops away from the victim. We show that an optimal disguised attack exists and how it can be computed. The optimal disguised attack also allows us to find optimal link farm configurations. A link farm can be considered a special case of our approach: the target page of the link farm is the victim and the other nodes in the link farm are the attackers for the purpose of improving the rank of the victim. The target page can however control its own outgoing links for the purpose of improving its own rank, which can be modeled as an optimal disguised attack of 1-hop on itself. Our results are unique in the literature as we show optimality not only in the pagerank score, but also in the rank based on the pagerank score. We further validate our results with experiments on a variety of random graph models.
labels: cs.SI, Other
__index_level_0__: 9,510

1910.03757
Secret key agreement from correlated data, with no prior information
A fundamental question that has been studied in cryptography and in information theory is whether two parties can communicate confidentially using exclusively an open channel. We consider the model in which the two parties hold inputs that are correlated in a certain sense. This model has been studied extensively in information theory, and communication protocols have been designed which exploit the correlation to extract from the inputs a shared secret key. However, all the existing protocols are not universal in the sense that they require that the two parties also know some attributes of the correlation. In other words, they require that each party knows something about the other party's input. We present a protocol that does not require any prior additional information. It uses space-bounded Kolmogorov complexity to measure correlation and it allows the two legal parties to obtain a common key that looks random to an eavesdropper that observes the communication and is restricted to use a bounded amount of space for the attack. Thus the protocol achieves complexity-theoretical security, but it does not use any unproven result from computational complexity. On the negative side, the protocol is not efficient in the sense that the computation of the two legal parties uses more space than the space allowed to the adversary.
labels: cs.IT, cs.CR
__index_level_0__: 148,578

2110.00667
Data-Driven Detection and Identification of IoT-Enabled Load-Altering Attacks in Power Grids
Advances in edge computing are powering the development and deployment of Internet of Things (IoT) systems to provide advanced services and resource efficiency. However, large-scale IoT-based load-altering attacks (LAAs) can seriously impact power grid operations, such as destabilising the grid's control loops. Timely detection and identification of any compromised nodes are essential to minimise the adverse effects of these attacks on power grid operations. In this work, two data-driven algorithms are proposed to detect and identify compromised nodes and the attack parameters of the LAAs. The first method, based on the Sparse Identification of Nonlinear Dynamics (SINDy) approach, adopts a sparse regression framework to identify attack parameters that best describe the observed dynamics. The second method, based on physics-informed neural networks (PINN), employs neural networks to infer the attack parameters from the measurements. Both algorithms are presented utilising edge computing for deployment over decentralised architectures. Extensive simulations are performed on IEEE 6-, 14- and 39-bus systems to verify the effectiveness of the proposed methods. Numerical results confirm that the proposed algorithms outperform existing approaches, such as those based on unscented Kalman filter, support vector machines (SVM), and neural networks (NN), and effectively detect and identify locations of attack in a timely manner.
labels: cs.IT, cs.SY, cs.CR
__index_level_0__: 258,479

2108.10696
Spatio-Temporal Self-Attention Network for Video Saliency Prediction
3D convolutional neural networks have achieved promising results for video tasks in computer vision, including video saliency prediction that is explored in this paper. However, 3D convolution encodes visual representation merely on fixed local spacetime according to its kernel size, while human attention is always attracted by relational visual features at different times. To overcome this limitation, we propose a novel Spatio-Temporal Self-Attention 3D Network (STSANet) for video saliency prediction, in which multiple Spatio-Temporal Self-Attention (STSA) modules are employed at different levels of the 3D convolutional backbone to directly capture long-range relations between spatio-temporal features of different time steps. Besides, we propose an Attentional Multi-Scale Fusion (AMSF) module to integrate multi-level features with the perception of context in semantic and spatio-temporal subspaces. Extensive experiments demonstrate the contributions of key components of our method, and the results on DHF1K, Hollywood-2, UCF, and DIEM benchmark datasets clearly prove the superiority of the proposed model compared with all state-of-the-art models.
labels: cs.CV
__index_level_0__: 251,977

2012.11384
Document-Level Relation Extraction with Reconstruction
In document-level relation extraction (DocRE), graph structure is generally used to encode relation information in the input document to classify the relation category between each entity pair, and has greatly advanced the DocRE task over the past several years. However, the learned graph representation universally models relation information between all entity pairs regardless of whether there are relationships between these entity pairs. Thus, those entity pairs without relationships disperse the attention of the encoder-classifier DocRE model away from those with relationships, which may further hinder the improvement of DocRE. To alleviate this issue, we propose a novel encoder-classifier-reconstructor model for DocRE. The reconstructor manages to reconstruct the ground-truth path dependencies from the graph representation, to ensure that the proposed DocRE model pays more attention to encoding entity pairs with relationships during training. Furthermore, the reconstructor is regarded as a relationship indicator to assist relation classification at inference, which can further improve the performance of the DocRE model. Experimental results on a large-scale DocRE dataset show that the proposed model can significantly improve the accuracy of relation extraction on a strong heterogeneous graph-based baseline.
labels: cs.CL
__index_level_0__: 212,621

2404.12401
Items or Relations -- what do Artificial Neural Networks learn?
What has an Artificial Neural Network (ANN) learned after being successfully trained to solve a task: the set of training items or the relations between them? This question is difficult to answer for modern applied ANNs because of their enormous size and complexity. Therefore, here we consider a low-dimensional network and a simple task, i.e., the network has to reproduce a set of training items identically. We construct the family of solutions analytically and use standard learning algorithms to obtain numerical solutions. These numerical solutions differ depending on the optimization algorithm and the weight initialization and are shown to be particular members of the family of analytical solutions. In this simple setting, we observe that the general structure of the network weights represents the training set's symmetry group, i.e., the relations between training items. As a consequence, linear networks generalize, i.e., reproduce items that were not part of the training set but are consistent with the symmetry of the training set. In contrast, non-linear networks tend to learn individual training items and show associative memory. At the same time, their ability to generalize is limited. A higher degree of generalization is obtained for networks whose activation function contains a linear regime, such as tanh. Our results suggest that an ANN's ability to generalize, instead of learning items, strongly depends on the applied non-linearity and could be improved by generating a sufficiently big set of elementary operations to represent relations.
labels: cs.AI, cs.LG, cs.NE, Other
__index_level_0__: 447,876

2002.10259
Complex Markov Logic Networks: Expressivity and Liftability
We study the expressivity of Markov logic networks (MLNs). We introduce complex MLNs, which use complex-valued weights, and we show that, unlike standard MLNs with real-valued weights, complex MLNs are fully expressive. We then observe that the discrete Fourier transform can be computed using weighted first order model counting (WFOMC) with complex weights and use this observation to design an algorithm for computing relational marginal polytopes which needs substantially fewer calls to a WFOMC oracle than a recent algorithm.
labels: cs.AI, cs.LG, Other
__index_level_0__: 165,344

1607.02306
CaR-FOREST: Joint Classification-Regression Decision Forests for Overlapping Audio Event Detection
This report describes our submissions to Task2 and Task3 of the DCASE 2016 challenge. The systems aim at dealing with the detection of overlapping audio events in continuous streams, where the detectors are based on random decision forests. The proposed forests are jointly trained for classification and regression simultaneously. Initially, the training is classification-oriented to encourage the trees to select discriminative features from overlapping mixtures to separate positive audio segments from the negative ones. The regression phase is then carried out to let the positive audio segments vote for the event onsets and offsets, and therefore model the temporal structure of audio events. One random decision forest is specifically trained for each event category of interest. Experimental results on the development data show that our systems significantly outperform the baseline on the Task2 evaluation while they are inferior to the baseline in the Task3 evaluation.
labels: cs.SD, cs.AI, cs.LG, Other
__index_level_0__: 58,329

1001.2097
Predictability of PV power grid performance on insular sites without weather stations: use of artificial neural networks
The official meteorological network is sparse on the island of Corsica: only three sites, about 50 km apart, are equipped with pyranometers enabling measurements at hourly and daily steps. These sites are Ajaccio (41°55'N, 8°48'E, seaside), Bastia (42°33'N, 9°29'E, seaside) and Corte (42°30'N, 9°15'E, average altitude of 486 meters). This lack of weather stations makes the predictability of PV power grid performance difficult. This work studies a methodology that can predict global solar irradiation at daily and hourly horizons using data available from another location. To achieve this prediction, we used an Artificial Neural Network, a popular artificial intelligence technique in the forecasting domain. A simulator was obtained using data available for the station of Ajaccio, the only station for which we have extensive data: 16 years, from 1972 to 1987. We then tested the efficiency of this simulator in two places with different geographical features: Corte, a mountainous region, and Bastia, a coastal region. At the daily horizon, the relocation implied fewer errors than a "naïve" prediction method based on persistence (RMSE = 1468 vs. 1383 Wh/m^2 at Bastia and 1325 vs. 1213 Wh/m^2 at Corte). In the hourly case, the results were still satisfactory, and widely better than persistence (RMSE = 138.8 vs. 109.3 Wh/m^2 at Bastia and 135.1 vs. 114.7 Wh/m^2 at Corte). The last experiment evaluated the accuracy of our simulator on a PV power grid located 10 km from the station of Ajaccio. We obtained very suitable errors (nRMSE = 27.9%, RMSE = 99.0 Wh) compared to those obtained with persistence (nRMSE = 42.2%, RMSE = 149.7 Wh).
labels: cs.NE
__index_level_0__: 5,353

2011.06859
A Mixed-Method Landscape Analysis of SME-focused B2B Platforms in Germany
Digital platforms offer vast potential for increased value creation and innovation, especially through cross-organizational data sharing. It appears that SMEs in Germany are currently hesitant or unable to create their own platforms. To get a holistic overview of the structure of the German SME-focused platform landscape (that is, platforms led by or targeting SMEs), we applied a mixed-method approach of traditional desk research and a quantitative analysis. The study identified a large geographical disparity along the borders of the new and old German federal states, and overall fewer platform ventures by SMEs than by large companies and startups. Platform ventures for SMEs are more likely to be set up as partnerships. We indicate that high capital intensity might be a reason for this.
labels: cs.SI
__index_level_0__: 206,372

2211.13621
ACROBAT -- a multi-stain breast cancer histological whole-slide-image data set from routine diagnostics for computational pathology
The analysis of FFPE tissue sections stained with haematoxylin and eosin (H&E) or immunohistochemistry (IHC) is an essential part of the pathologic assessment of surgically resected breast cancer specimens. IHC staining has been broadly adopted into diagnostic guidelines and routine workflows to manually assess status and scoring of several established biomarkers, including ER, PGR, HER2 and KI67. However, this is a task that can also be facilitated by computational pathology image analysis methods. The research in computational pathology has recently made numerous substantial advances, often based on publicly available whole slide image (WSI) data sets. However, the field is still considerably limited by the sparsity of public data sets. In particular, there are no large, high quality publicly available data sets with WSIs of matching IHC and H&E-stained tissue sections. Here, we publish the currently largest publicly available data set of WSIs of tissue sections from surgical resection specimens from female primary breast cancer patients with matched WSIs of corresponding H&E and IHC-stained tissue, consisting of 4,212 WSIs from 1,153 patients. The primary purpose of the data set was to facilitate the ACROBAT WSI registration challenge, aiming at accurately aligning H&E and IHC images. For research in the area of image registration, automatic quantitative feedback on registration algorithm performance remains available through the ACROBAT challenge website, based on more than 37,000 manually annotated landmark pairs from 13 annotators. Beyond registration, this data set has the potential to enable many different avenues of computational pathology research, including stain-guided learning, virtual staining, unsupervised pre-training, artefact detection and stain-independent models.
labels: cs.LG, cs.CV
__index_level_0__: 332,536

2410.15283
TRIZ Method for Urban Building Energy Optimization: GWO-SARIMA-LSTM Forecasting model
With the advancement of global climate change and sustainable development goals, urban building energy consumption optimization and carbon emission reduction have become the focus of research. Traditional energy consumption prediction methods often lack accuracy and adaptability due to their inability to fully consider complex energy consumption patterns, especially in dealing with seasonal fluctuations and dynamic changes. This study proposes a hybrid deep learning model that combines TRIZ innovation theory with GWO, SARIMA and LSTM to improve the accuracy of building energy consumption prediction. TRIZ plays a key role in model design, providing innovative solutions to achieve an effective balance between energy efficiency, cost and comfort by systematically analyzing the contradictions in energy consumption optimization. GWO is used to optimize the parameters of the model to ensure that the model maintains high accuracy under different conditions. The SARIMA model focuses on capturing seasonal trends in the data, while the LSTM model handles short-term and long-term dependencies in the data, further improving the accuracy of the prediction. The main contribution of this research is the development of a robust model that leverages the strengths of TRIZ and advanced deep learning techniques, improving the accuracy of energy consumption predictions. Our experiments demonstrate a significant 15% reduction in prediction error compared to existing models. This innovative approach not only enhances urban energy management but also provides a new framework for optimizing energy use and reducing carbon emissions, contributing to sustainable development.
labels: cs.LG, cs.SY
__index_level_0__: 500,459

1907.01022
Rare Disease Detection by Sequence Modeling with Generative Adversarial Networks
Rare diseases, affecting 350 million individuals, are commonly associated with delayed diagnosis or misdiagnosis. To improve those patients' outcomes, rare disease detection is an important task for identifying patients with rare conditions based on longitudinal medical claims. In this paper, we present a deep learning method for detecting patients with exocrine pancreatic insufficiency (EPI), a rare disease. The contributions include 1) a large longitudinal study using 7 years of medical claims from 1.8 million patients, including 29,149 EPI patients, 2) a new deep learning model using generative adversarial networks (GANs) to boost the rare disease class, also leveraging recurrent neural networks to model patient sequence data, and 3) an accurate prediction with 0.56 PR-AUC, which outperformed benchmark models in terms of precision and recall.
labels: cs.LG
__index_level_0__: 137,197

2502.13626
AI-Empowered Catalyst Discovery: A Survey from Classical Machine Learning Approaches to Large Language Models
Catalysts are essential for accelerating chemical reactions and enhancing selectivity, which is crucial for the sustainable production of energy, materials, and bioactive compounds. Catalyst discovery is fundamental yet challenging in computational chemistry and has garnered significant attention due to the promising performance of advanced Artificial Intelligence (AI) techniques. The development of Large Language Models (LLMs) notably accelerates progress in the discovery of both homogeneous and heterogeneous catalysts, where their chemical reactions differ significantly in material phases, temperature, dynamics, etc. However, there is currently no comprehensive survey that discusses the progress and latest developments in both areas, particularly with the application of LLM techniques. To address this gap, this paper presents a thorough and systematic survey of AI-empowered catalyst discovery, employing a unified and general categorization for homogeneous and heterogeneous catalysts. We examine the progress of AI-empowered catalyst discovery, highlighting their individual advantages and disadvantages, and discuss the challenges faced in this field. Furthermore, we suggest potential directions for future research from the perspective of computer science. Our goal is to assist researchers in computational chemistry, computer science, and related fields in easily tracking the latest advancements, providing a clear overview and roadmap of this area. We also organize and make accessible relevant resources, including article lists and datasets, in an open repository at https://github.com/LuckyGirl-XU/Awesome-Artificial-Intelligence-Empowered-Catalyst-Discovery.
labels: cs.CE
__index_level_0__: 535,444

2011.01397
Guided Navigation from Multiple Viewpoints using Qualitative Spatial Reasoning
Navigation is an essential ability for mobile agents to be completely autonomous and able to perform complex actions. However, the problem of navigation for agents with limited (or no) perception of the world, or devoid of a fully defined motion model, has received little attention from research in AI and Robotics. One way to tackle this problem is to use guided navigation, in which other autonomous agents, endowed with perception, can combine their distinct viewpoints to infer the localisation and the appropriate commands to guide a sensory-deprived agent through a particular path. Due to the limited knowledge about the physical and perceptual characteristics of the guided agent, this task should be conducted at a level of abstraction allowing the use of a generic motion model and high-level commands that can be applied by any type of autonomous agent, including humans. The main task considered in this work is, given a group of autonomous agents perceiving their common environment with their independent, egocentric and local vision sensors, the development and evaluation of algorithms capable of producing a set of high-level commands (involving qualitative directions, e.g. move left, go straight ahead) that can guide a sensory-deprived robot to a goal location.
labels: cs.AI, cs.RO
__index_level_0__: 204,565

2104.04968
Knowledge-Augmented Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop
Building a highly accurate predictive model for classification and localization of abnormalities in chest X-rays usually requires a large number of manually annotated labels and pixel regions (bounding boxes) of abnormalities. However, it is expensive to acquire such annotations, especially the bounding boxes. Recently, contrastive learning has shown strong promise in leveraging unlabeled natural images to produce highly generalizable and discriminative features. However, extending its power to the medical image domain is under-explored and highly non-trivial, since medical images are much less amenable to data augmentations. In contrast, their prior knowledge, as well as radiomic features, is often crucial. To bridge this gap, we propose an end-to-end semi-supervised knowledge-augmented contrastive learning framework, that simultaneously performs disease classification and localization tasks. The key knob of our framework is a unique positive sampling approach tailored for the medical images, by seamlessly integrating radiomic features as a knowledge augmentation. Specifically, we first apply an image encoder to classify the chest X-rays and to generate the image features. We next leverage Grad-CAM to highlight the crucial (abnormal) regions for chest X-rays (even when unannotated), from which we extract radiomic features. The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray. In this way, our framework constitutes a feedback loop for image and radiomic modality features to mutually reinforce each other. Their contrasting yields knowledge-augmented representations that are both robust and interpretable. Extensive experiments on the NIH Chest X-ray dataset demonstrate that our approach outperforms existing baselines in both classification and localization tasks.
labels: cs.CV
__index_level_0__: 229,555

1901.04106
3D Trajectory Optimization in Rician Fading for UAV-Enabled Data Harvesting
In this paper, we consider a UAV-enabled WSN where a flying UAV is employed to collect data from multiple sensor nodes (SNs). Our objective is to maximize the minimum average data collection rate from all SNs subject to a prescribed reliability constraint for each SN by jointly optimizing the UAV communication scheduling and three-dimensional (3D) trajectory. Different from the existing works that assume the simplified line-of-sight (LoS) UAV-ground channels, we consider the more practically accurate angle-dependent Rician fading channels between the UAV and SNs with the Rician factors determined by the corresponding UAV-SN elevation angles. However, the formulated optimization problem is intractable due to the lack of a closed-form expression for a key parameter termed effective fading power that characterizes the achievable rate given the reliability requirement in terms of outage probability. To tackle this difficulty, we first approximate the parameter by a logistic ('S' shape) function with respect to the 3D UAV trajectory by using the data regression method. Then the original problem is reformulated to an approximate form, which, however, is still challenging to solve due to its non-convexity. As such, we further propose an efficient algorithm to derive its suboptimal solution by using the block coordinate descent technique, which iteratively optimizes the communication scheduling, the UAV's horizontal trajectory, and its vertical trajectory. The latter two subproblems are shown to be non-convex, while locally optimal solutions are obtained for them by using the successive convex approximation technique. Last, extensive numerical results are provided to evaluate the performance of the proposed algorithm and draw new insights on the 3D UAV trajectory under the Rician fading as compared to conventional LoS channel models.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
118,552
2501.13416
M3PT: A Transformer for Multimodal, Multi-Party Social Signal Prediction with Person-aware Blockwise Attention
Understanding social signals in multi-party conversations is important for human-robot interaction and artificial social intelligence. Social signals include body pose, head pose, speech, and context-specific activities like acquiring and taking bites of food when dining. Past work in multi-party interaction tends to build task-specific models for predicting social signals. In this work, we address the challenge of predicting multimodal social signals in multi-party settings in a single model. We introduce M3PT, a causal transformer architecture with modality and temporal blockwise attention masking to simultaneously process multiple social cues across multiple participants and their temporal interactions. We train and evaluate M3PT on the Human-Human Commensality Dataset (HHCD), and demonstrate that using multiple modalities improves bite timing and speaking status prediction. Source code: https://github.com/AbrarAnwar/masked-social-signals/.
false
false
false
false
true
false
true
true
false
false
false
false
false
false
false
false
false
false
526,677
2502.13520
A Large and Balanced Corpus for Fine-grained Arabic Readability Assessment
This paper introduces the Balanced Arabic Readability Evaluation Corpus BAREC, a large-scale, fine-grained dataset for Arabic readability assessment. BAREC consists of 68,182 sentences spanning 1+ million words, carefully curated to cover 19 readability levels, from kindergarten to postgraduate comprehension. The corpus balances genre diversity, topical coverage, and target audiences, offering a comprehensive resource for evaluating Arabic text complexity. The corpus was fully manually annotated by a large team of annotators. The average pairwise inter-annotator agreement, measured by Quadratic Weighted Kappa, is 81.3%, reflecting a high level of substantial agreement. Beyond presenting the corpus, we benchmark automatic readability assessment across different granularity levels, comparing a range of techniques. Our results highlight the challenges and opportunities in Arabic readability modeling, demonstrating competitive performance across various methods. To support research and education, we will make BAREC openly available, along with detailed annotation guidelines and benchmark results.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
535,403
2409.13393
Hey Robot! Personalizing Robot Navigation through Model Predictive Control with a Large Language Model
Robot navigation methods allow mobile robots to operate in applications such as warehouses or hospitals. While the environment in which the robot operates imposes requirements on its navigation behavior, most existing methods do not allow the end-user to configure the robot's behavior and priorities, possibly leading to undesirable behavior (e.g., fast driving in a hospital). We propose a novel approach to adapt robot motion behavior based on natural language instructions provided by the end-user. Our zero-shot method uses an existing Visual Language Model to interpret a user text query or an image of the environment. This information is used to generate the cost function and reconfigure the parameters of a Model Predictive Controller, translating the user's instruction to the robot's motion behavior. This allows our method to safely and effectively navigate in dynamic and challenging environments. We extensively evaluate our method's individual components and demonstrate the effectiveness of our method on a ground robot in simulation and real-world experiments, and across a variety of environments and user specifications.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
489,974
2311.01851
Holistic Representation Learning for Multitask Trajectory Anomaly Detection
Video anomaly detection deals with the recognition of abnormal events in videos. Apart from the visual signal, video anomaly detection has also been addressed with the use of skeleton sequences. We propose a holistic representation of skeleton trajectories to learn expected motions across segments at different times. Our approach uses multitask learning to reconstruct any continuous unobserved temporal segment of the trajectory allowing the extrapolation of past or future segments and the interpolation of in-between segments. We use an end-to-end attention-based encoder-decoder. We encode temporally occluded trajectories, jointly learn latent representations of the occluded segments, and reconstruct trajectories based on expected motions across different temporal segments. Extensive experiments on three trajectory-based video anomaly detection datasets show the advantages and effectiveness of our approach with state-of-the-art results on anomaly detection in skeleton trajectories.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
405,202
2112.02393
Optimization-Based Separations for Neural Networks
Depth separation results propose a possible theoretical explanation for the benefits of deep neural networks over shallower architectures, establishing that the former possess superior approximation capabilities. However, there are no known results in which the deeper architecture leverages this advantage into a provable optimization guarantee. We prove that when the data are generated by a distribution with radial symmetry which satisfies some mild assumptions, gradient descent can efficiently learn ball indicator functions using a depth 2 neural network with two layers of sigmoidal activations, and where the hidden layer is held fixed throughout training. By building on and refining existing techniques for approximation lower bounds of neural networks with a single layer of non-linearities, we show that there are $d$-dimensional radial distributions on the data such that ball indicators cannot be learned efficiently by any algorithm to accuracy better than $\Omega(d^{-4})$, nor by a standard gradient descent implementation to accuracy better than a constant. These results establish what is, to the best of our knowledge, the first optimization-based separations where the approximation benefits of the stronger architecture provably manifest in practice. Our proof technique introduces new tools and ideas that may be of independent interest in the theoretical study of both the approximation and optimization of neural networks.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
269,831
1903.11633
Laplace Landmark Localization
Landmark localization in images and videos is a classic problem solved in various ways. Nowadays, with deep networks prevailing throughout machine learning, there are revamped interests in pushing facial landmark detection technologies to handle more challenging data. Most efforts use network objectives based on L1 or L2 norms, which have several disadvantages. First of all, the locations of landmarks are determined from generated heatmaps (i.e., confidence maps) from which predicted landmark locations (i.e., the means) get penalized without accounting for the spread: a high scatter corresponds to low confidence and vice-versa. For this, we introduce a LaplaceKL objective that penalizes for a low confidence. Another issue is a dependency on labeled data, which are expensive to obtain and susceptible to error. To address both issues we propose an adversarial training framework that leverages unlabeled data to improve model performance. Our method achieves state-of-the-art results on all of the 300W benchmarks and ranks second-to-best on the Annotated Facial Landmarks in the Wild (AFLW) dataset. Furthermore, our model is robust with a reduced size: with 1/8 the number of channels (i.e., 0.0398MB) it is comparable to the state-of-the-art in real-time on CPU. Thus, we show that our method is of high practical value to real-life applications.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
125,548
2303.08029
Class-level Multiple Distributions Representation are Necessary for Semantic Segmentation
Existing approaches focus on using class-level features to improve semantic segmentation performance. How to characterize the relationships of intra-class pixels and inter-class pixels is the key to extracting discriminative, representative class-level features. In this paper, we introduce, for the first time, multiple distributions to describe intra-class variations. Then, multiple distributions representation learning (\textbf{MDRL}) is proposed to augment the pixel representations for semantic segmentation. Meanwhile, we design a class multiple distributions consistency strategy to construct discriminative multiple distribution representations of embedded pixels. Moreover, we put forward a multiple distribution semantic aggregation module to aggregate multiple distributions of the corresponding class to enhance pixel semantic information. Our approach can be seamlessly integrated into the popular segmentation frameworks FCN/PSPNet/CCNet and achieves 5.61\%/1.75\%/0.75\% mIoU improvements on ADE20K. Extensive experiments on the Cityscapes and ADE20K datasets have shown that our method brings significant performance improvements.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
351,478
1504.03490
Computational Modeling of an MRI Guided Drug Delivery System Based on Magnetic Nanoparticle Aggregations for the Navigation of Paramagnetic Nanocapsules
A computational method for magnetically guided drug delivery is presented, and the results are compared for the aggregation process of magnetic particles within a fluid environment. The model is developed for the simulation of the aggregation patterns of magnetic nanoparticles under the influence of MRI magnetic coils. A novel approach for the calculation of the drag coefficient of aggregates is presented. The comparison against experimental and numerical results from the literature showed that the proposed method predicts the aggregations well with respect to the dependence of their size and pattern on the concentration and the strength of the magnetic field, as well as their velocity when particles are driven through the fluid by magnetic gradients.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
42,036
2408.10934
SDI-Net: Toward Sufficient Dual-View Interaction for Low-light Stereo Image Enhancement
Currently, most low-light image enhancement methods only consider information from a single view, neglecting the correlation between cross-view information. Therefore, the enhancement results produced by these methods are often unsatisfactory. In this context, there have been efforts to develop methods specifically for low-light stereo image enhancement. These methods take into account the cross-view disparities and enable interaction between the left and right views, leading to improved performance. However, these methods still do not fully exploit the interaction between left and right view information. To address this issue, we propose a model called Toward Sufficient Dual-View Interaction for Low-light Stereo Image Enhancement (SDI-Net). The backbone structure of SDI-Net is two encoder-decoder pairs, which are used to learn the mapping function from low-light images to normal-light images. Among the encoders and the decoders, we design a module named Cross-View Sufficient Interaction Module (CSIM), aiming to fully exploit the correlations between the binocular views via the attention mechanism. The quantitative and visual results on public datasets validate the superiority of our method over other related methods. Ablation studies also demonstrate the effectiveness of the key elements in our model.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
482,087
2312.00500
Global Localization: Utilizing Relative Spatio-Temporal Geometric Constraints from Adjacent and Distant Cameras
Re-localizing a camera from a single image in a previously mapped area is vital for many computer vision applications in robotics and augmented/virtual reality. In this work, we address the problem of estimating the 6 DoF camera pose relative to a global frame from a single image. We propose to leverage a novel network of relative spatial and temporal geometric constraints to guide the training of a Deep Network for localization. We employ simultaneously spatial and temporal relative pose constraints that are obtained not only from adjacent camera frames but also from camera frames that are distant in the spatio-temporal space of the scene. We show that our method, through these constraints, is capable of learning to localize when little or very sparse ground-truth 3D coordinates are available. In our experiments, this is less than 1% of available ground-truth data. We evaluate our method on 3 common visual localization datasets and show that it outperforms other direct pose estimation methods.
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
412,071
2408.09777
Summarizing long regulatory documents with a multi-step pipeline
Due to their length and complexity, long regulatory texts are challenging to summarize. To address this, a multi-step extractive-abstractive architecture is proposed to handle lengthy regulatory documents more effectively. In this paper, we show that the effectiveness of a two-step architecture for summarizing long regulatory texts varies significantly depending on the model used. Specifically, the two-step architecture improves the performance of decoder-only models. For abstractive encoder-decoder models with short context lengths, the effectiveness of an extractive step varies, whereas for long-context encoder-decoder models, the extractive step worsens their performance. This research also highlights the challenges of evaluating generated texts, as evidenced by the differing results from human and automated evaluations. Most notably, human evaluations favoured language models pretrained on legal text, while automated metrics rank general-purpose language models higher. The results underscore the importance of selecting the appropriate summarization strategy based on model architecture and context length.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
481,586
2308.06998
Mutual Information-driven Triple Interaction Network for Efficient Image Dehazing
Multi-stage architectures have exhibited efficacy in image dehazing, which usually decomposes a challenging task into multiple more tractable sub-tasks and progressively estimates latent hazy-free images. Despite the remarkable progress, existing methods still suffer from the following shortcomings: (1) limited exploration of frequency domain information; (2) insufficient information interaction; (3) severe feature redundancy. To remedy these issues, we propose a novel Mutual Information-driven Triple interaction Network (MITNet) based on spatial-frequency dual domain information and a two-stage architecture. To be specific, the first stage, named amplitude-guided haze removal, aims to recover the amplitude spectrum of the hazy images for haze removal. And the second stage, named phase-guided structure refinement, devotes to learning the transformation and refinement of the phase spectrum. To facilitate the information exchange between two stages, an Adaptive Triple Interaction Module (ATIM) is developed to simultaneously aggregate cross-domain, cross-scale, and cross-stage features, where the fused features are further used to generate content-adaptive dynamic filters so that applying them enhances global context representation. In addition, we impose the mutual information minimization constraint on paired scale encoder and decoder features from both stages. Such an operation can effectively reduce information redundancy and enhance cross-stage feature complementarity. Extensive experiments on multiple public datasets exhibit that our MITNet achieves superior performance with lower model complexity. The code and models are available at https://github.com/it-hao/MITNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
385,357
2309.01750
On CNF formulas irredundant with respect to unit clause propagation
Two CNF formulas are called ucp-equivalent, if they behave in the same way with respect to the unit clause propagation (UCP). A formula is called ucp-irredundant, if removing any clause leads to a formula which is not ucp-equivalent to the original one. As a consequence of known results, the ratio of the size of a ucp-irredundant formula and the size of a smallest ucp-equivalent formula is at most $n^2$, where $n$ is the number of the variables. We demonstrate an example of a ucp-irredundant formula for a symmetric definite Horn function which is larger than a smallest ucp-equivalent formula by a factor $\Omega(n/\ln n)$ and, hence, a general upper bound on the above ratio cannot be smaller than this.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
389,787
2008.05369
Anomaly localization by modeling perceptual features
Although unsupervised generative modeling of an image dataset using a Variational AutoEncoder (VAE) has been used to detect anomalous images, or anomalous regions in images, recent works have shown that this method often identifies images or regions that do not concur with human perception, even questioning the usability of generative models for robust anomaly detection. Here, we argue that those issues can emerge from having a simplistic model of the anomaly distribution and we propose a new VAE-based model expressing a more complex anomaly model that is also closer to human perception. This Feature-Augmented VAE is trained by not only reconstructing the input image in pixel space, but also in several different feature spaces, which are computed by a convolutional neural network trained beforehand on a large image dataset. It achieves clear improvement over state-of-the-art methods on the MVTec anomaly detection and localization datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
191,492
2501.07972
Zero-shot Video Moment Retrieval via Off-the-shelf Multimodal Large Language Models
The target of video moment retrieval (VMR) is predicting temporal spans within a video that semantically match a given linguistic query. Existing VMR methods based on multimodal large language models (MLLMs) overly rely on expensive high-quality datasets and time-consuming fine-tuning. Although some recent studies introduce a zero-shot setting to avoid fine-tuning, they overlook inherent language bias in the query, leading to erroneous localization. To tackle the aforementioned challenges, this paper proposes Moment-GPT, a tuning-free pipeline for zero-shot VMR utilizing frozen MLLMs. Specifically, we first employ LLaMA-3 to correct and rephrase the query to mitigate language bias. Subsequently, we design a span generator combined with MiniGPT-v2 to produce candidate spans adaptively. Finally, to leverage the video comprehension capabilities of MLLMs, we apply VideoChatGPT and a span scorer to select the most appropriate spans. Our proposed method substantially outperforms the state-of-the-art MLLM-based and zero-shot models on several public datasets, including QVHighlights, ActivityNet-Captions, and Charades-STA.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
true
524,574
2101.07905
Feature Sharing Cooperative Network for Semantic Segmentation
In recent years, deep neural networks have achieved high accuracy in the field of image recognition. Inspired by human learning methods, we propose a semantic segmentation method using cooperative learning, which shares information in a manner resembling group learning. We use two identical networks and paths for sending feature maps between the two networks. The two networks are trained simultaneously. By sharing feature maps, one of the two networks can obtain information that cannot be obtained by a single network. In addition, in order to enhance the degree of cooperation, we propose two kinds of methods that connect only the same layer and multiple layers. We evaluated our proposed idea on two kinds of networks. One is the Dual Attention Network (DANet) and the other is DeepLabv3+. The proposed method achieved better segmentation accuracy than the conventional single network and the ensemble of networks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
216,162
2409.19445
HTML-LSTM: Information Extraction from HTML Tables in Web Pages using Tree-Structured LSTM
In this paper, we propose a novel method for extracting information from HTML tables with similar contents but different structures. We aim to integrate multiple HTML tables into a single table for retrieval of information contained in various Web pages. The method is designed by extending tree-structured LSTM, the neural network for tree-structured data, in order to extract both linguistic and structural information from HTML data. We evaluate the proposed method through experiments using real data published on the WWW.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
492,683
1605.02669
The GPU-based Parallel Ant Colony System
The Ant Colony System (ACS) is, next to Ant Colony Optimization (ACO) and the MAX-MIN Ant System (MMAS), one of the most efficient metaheuristic algorithms inspired by the behavior of ants. In this article we present three novel parallel versions of the ACS for graphics processing units (GPUs). To the best of our knowledge, this is the first such work on the ACS. The ACS shares many key elements with the ACO and the MMAS, but differences in the process of building solutions and updating the pheromone trails make obtaining an efficient parallel version for the GPUs a difficult task. The proposed parallel versions of the ACS differ mainly in their implementations of the pheromone memory. The first two use the standard pheromone matrix, and the third uses a novel selective pheromone memory. Computational experiments conducted on several Travelling Salesman Problem (TSP) instances of sizes ranging from 198 to 2392 cities showed that the parallel ACS on an Nvidia Kepler GK104 GPU (1536 CUDA cores) is able to obtain a speedup of up to 24.29x vs the sequential ACS running on a single core of an Intel Xeon E5-2670 CPU. The parallel ACS with the selective pheromone memory achieved speedups of up to 16.85x, but in most cases the obtained solutions were of significantly better quality than for the sequential ACS.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
55,657
1904.12371
Counterexample-Driven Synthesis for Probabilistic Program Sketches
Probabilistic programs are key to deal with uncertainty in e.g. controller synthesis. They are typically small but intricate. Their development is complex and error-prone, requiring quantitative reasoning over a myriad of alternative designs. To mitigate this complexity, we adopt counterexample-guided inductive synthesis (CEGIS) to automatically synthesise finite-state probabilistic programs. Our approach leverages efficient model checking, modern SMT solving, and counterexample generation at program level. Experiments on practically relevant case studies show that design spaces with millions of candidate designs can be fully explored using a few thousand verification queries.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
129,096
2203.00761
Tricks and Plugins to GBM on Images and Sequences
Convolutional neural networks (CNNs) and transformers, which are composed of multiple processing layers and blocks to learn the representations of data with multiple abstract levels, are the most successful machine learning models in recent years. However, millions of parameters and many blocks make them difficult to train, and sometimes several days or weeks are required to find an ideal architecture or tune the parameters. Within this paper, we propose a new algorithm for boosting Deep Convolutional Neural Networks (BoostCNN) to combine the merits of dynamic feature selection and BoostCNN, and another new family of algorithms combining boosting and transformers. To learn these new models, we introduce subgrid selection and importance sampling strategies and propose a set of algorithms to incorporate boosting weights into a deep learning architecture based on a least squares objective function. These algorithms not only reduce the required manual effort for finding an appropriate network architecture but also result in superior performance and lower running time. Experiments show that the proposed methods outperform benchmarks on several fine-grained classification tasks.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
283,104
2101.11203
Achieving Linear Speedup with Partial Worker Participation in Non-IID Federated Learning
Federated learning (FL) is a distributed machine learning architecture that leverages a large number of workers to jointly learn a model with decentralized data. FL has received increasing attention in recent years thanks to its data privacy protection, communication efficiency and a linear speedup for convergence in training (i.e., convergence performance increases linearly with respect to the number of workers). However, existing studies on linear speedup for convergence are only limited to the assumptions of i.i.d. datasets across workers and/or full worker participation, both of which rarely hold in practice. So far, it remains an open question whether or not the linear speedup for convergence is achievable under non-i.i.d. datasets with partial worker participation in FL. In this paper, we show that the answer is affirmative. Specifically, we show that the federated averaging (FedAvg) algorithm (with two-sided learning rates) on non-i.i.d. datasets in non-convex settings achieves a convergence rate $\mathcal{O}(\frac{1}{\sqrt{mKT}} + \frac{1}{T})$ for full worker participation and a convergence rate $\mathcal{O}(\frac{\sqrt{K}}{\sqrt{nT}} + \frac{1}{T})$ for partial worker participation, where $K$ is the number of local steps, $T$ is the number of total communication rounds, $m$ is the total worker number and $n$ is the worker number in one communication round for partial worker participation. Our results also reveal that the local steps in FL could help the convergence and show that the maximum number of local steps can be improved to $T/m$ in full worker participation. We conduct extensive experiments on MNIST and CIFAR-10 to verify our theoretical results.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
217,193
2002.02125
On the Age of Information for Multicast Transmission with Hard Deadlines in IoT Systems
We consider the multicast transmission of a real-time Internet of Things (IoT) system, where a server transmits time-stamped status updates to multiple IoT devices. We apply a recently proposed metric, named age of information (AoI), to capture the timeliness of the information delivery. The AoI is defined as the time elapsed since the generation of the most recently received status update. Different from the existing studies that considered either multicast transmission without hard deadlines or unicast transmission with hard deadlines, we enforce a hard deadline for the service time of multicast transmission. This is important for many emerging multicast IoT applications, where the outdated status updates are useless for IoT devices. Specifically, the transmission of a status update is terminated when either the hard deadline expires or a sufficient number of IoT devices successfully receive the status update. We first calculate the distributions of the service time for all possible reception outcomes at IoT devices, and then derive a closed-form expression of the average AoI. Simulations validate the performance analysis, which reveals that: 1) the multicast transmission with hard deadlines achieves a lower average AoI than that without hard deadlines; and 2) there exists an optimal value of the hard deadline that minimizes the average AoI.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
162,842
2501.06833
Unveiling Temporal Trends in 19th Century Literature: An Information Retrieval Approach
In English literature, the 19th century witnessed a significant transition in styles, themes, and genres. Consequently, the novels from this period display remarkable diversity. This paper explores these variations by examining the evolution of term usage in 19th century English novels through the lens of information retrieval. By applying a query expansion-based approach to a decade-segmented collection of fiction from the British Library, we examine how related terms vary over time. Our analysis employs multiple standard metrics including Kendall's tau, Jaccard similarity, and Jensen-Shannon divergence to assess overlaps and shifts in expanded query term sets. Our results indicate a significant degree of divergence in the related terms across decades as selected by the query expansion technique, suggesting substantial linguistic and conceptual changes throughout the 19th century novels.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
true
524,148
1801.05541
Over-the-Air Implementation of Uplink NOMA
Though the concept of non-orthogonal multiple access (NOMA) was proposed several years ago, the performance of uplink NOMA has only been verified in theory, but not in practice. This paper presents an over-the-air implementation of an uplink NOMA system, while providing solutions to the most common practical problems, i.e., carrier frequency offset (CFO) synchronization, time synchronization, and channel estimation. The implemented CFO synchronization method adopts the primary synchronization signal (PSS) of LTE. Also, we design a novel preamble for each uplink user, and it is appended to every frame before it is transmitted through the air. This preamble will be used for time synchronization and channel estimation at the BS. Also, a low-complexity, iterative linear minimum mean squared error (LMMSE) detector has been implemented for multi-user decoding. The paper also validates the proposed architecture numerically, as well as experimentally.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
88,476
2204.06271
TangoBERT: Reducing Inference Cost by using Cascaded Architecture
The remarkable success of large transformer-based models such as BERT, RoBERTa and XLNet in many NLP tasks comes with a large increase in monetary and environmental cost due to their high computational load and energy consumption. In order to reduce this computational load at inference time, we present TangoBERT, a cascaded model architecture in which instances are first processed by an efficient but less accurate first tier model, and only part of those instances are additionally processed by a less efficient but more accurate second tier model. The decision of whether to apply the second tier model is based on a confidence score produced by the first tier model. Our simple method has several appealing practical advantages compared to standard cascading approaches based on multi-layered transformer models. First, it enables higher speedup gains (lower average latency). Second, it takes advantage of batch size optimization for cascading, which increases the relative inference cost reductions. We report TangoBERT inference CPU speedup on four text classification GLUE tasks and on one reading comprehension task. Experimental results show that TangoBERT outperforms efficient early exit baseline models; on the SST-2 task, it achieves an accuracy of 93.9% with a CPU speedup of 8.2x.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
291,289
1808.01457
On the Optimality of the Kautz-Singleton Construction in Probabilistic Group Testing
We consider the probabilistic group testing problem where $d$ random defective items in a large population of $N$ items are identified with high probability by applying binary tests. It is known that $\Theta(d \log N)$ tests are necessary and sufficient to recover the defective set with vanishing probability of error when $d = O(N^{\alpha})$ for some $\alpha \in (0, 1)$. However, to the best of our knowledge, there is no explicit (deterministic) construction achieving $\Theta(d \log N)$ tests in general. In this work, we show that a famous construction introduced by Kautz and Singleton for the combinatorial group testing problem (which is known to be suboptimal for combinatorial group testing for moderate values of $d$) achieves the order optimal $\Theta(d \log N)$ tests in the probabilistic group testing problem when $d = \Omega(\log^2 N)$. This provides a strongly explicit construction achieving the order optimal result in the probabilistic group testing setting for a wide range of values of $d$. To prove the order-optimality of Kautz and Singleton's construction in the probabilistic setting, we provide a novel analysis of the probability of a non-defective item being covered by a random defective set directly, rather than arguing from combinatorial properties of the underlying code, which has been the main approach in the literature. Furthermore, we use a recursive technique to convert this construction into one that can also be efficiently decoded with only a log-log factor increase in the number of tests.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
104,571
2111.02128
Tensor Decomposition Bounds for TBM-Based Massive Access
Tensor-based modulation (TBM) has been proposed in the context of unsourced random access for massive uplink communication. In this modulation, transmitters encode data as rank-1 tensors, with factors from a discrete vector constellation. This construction allows the multi-user receiver to be split into a user separation step based on a low-rank tensor decomposition, and independent single-user demappers. In this paper, we analyze the limits of the tensor decomposition using Cram\'er-Rao bounds, providing bounds on the accuracy of the estimated factors. These bounds are shown by simulation to be tight at high SNR. We introduce an approximate perturbation model for the output of the tensor decomposition, which facilitates the computation of the log-likelihood ratios (LLR) of the transmitted bits, and provides an approximate achievable bound for the finite-length error probability. Combining TBM with classical forward error correction coding schemes such as polar codes, we use the approximate LLR to derive a soft-decision decoder showing a gain over hard-decision decoders at low SNR.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
264,773
1707.08313
Cascaded Scene Flow Prediction using Semantic Segmentation
Given two consecutive frames from a pair of stereo cameras, 3D scene flow methods simultaneously estimate the 3D geometry and motion of the observed scene. Many existing approaches use superpixels for regularization, but may predict inconsistent shapes and motions inside rigidly moving objects. We instead assume that scenes consist of foreground objects rigidly moving in front of a static background, and use semantic cues to produce pixel-accurate scene flow estimates. Our cascaded classification framework accurately models 3D scenes by iteratively refining semantic segmentation masks, stereo correspondences, 3D rigid motion estimates, and optical flow fields. We evaluate our method on the challenging KITTI autonomous driving benchmark, and show that accounting for the motion of segmented vehicles leads to state-of-the-art performance.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
77,805
2407.06516
VQA-Diff: Exploiting VQA and Diffusion for Zero-Shot Image-to-3D Vehicle Asset Generation in Autonomous Driving
Generating 3D vehicle assets from in-the-wild observations is crucial to autonomous driving. Existing image-to-3D methods cannot well address this problem because they learn generation merely from image RGB information without a deeper understanding of in-the-wild vehicles (such as car models, manufacturers, etc.). This leads to their poor zero-shot prediction capability to handle real-world observations with occlusion or tricky viewing angles. To solve this problem, in this work, we propose VQA-Diff, a novel framework that leverages in-the-wild vehicle images to create photorealistic 3D vehicle assets for autonomous driving. VQA-Diff exploits the real-world knowledge inherited from the Large Language Model in the Visual Question Answering (VQA) model for robust zero-shot prediction and the rich image prior knowledge in the Diffusion model for structure and appearance generation. In particular, we utilize a multi-expert Diffusion Models strategy to generate the structure information and employ a subject-driven structure-controlled generation mechanism to model appearance information. As a result, without the necessity to learn from a large-scale image-to-3D vehicle dataset collected from the real world, VQA-Diff still has a robust zero-shot image-to-novel-view generation ability. We conduct experiments on various datasets, including Pascal 3D+, Waymo, and Objaverse, to demonstrate that VQA-Diff outperforms existing state-of-the-art methods both qualitatively and quantitatively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
471,423
2501.19143
Imitation Game for Adversarial Disillusion with Multimodal Generative Chain-of-Thought Role-Play
As the cornerstone of artificial intelligence, machine perception confronts a fundamental threat posed by adversarial illusions. These adversarial attacks manifest in two primary forms: deductive illusion, where specific stimuli are crafted based on the victim model's general decision logic, and inductive illusion, where the victim model's general decision logic is shaped by specific stimuli. The former exploits the model's decision boundaries to create a stimulus that, when applied, interferes with its decision-making process. The latter reinforces a conditioned reflex in the model, embedding a backdoor during its learning phase that, when triggered by a stimulus, causes aberrant behaviours. The multifaceted nature of adversarial illusions calls for a unified defence framework, addressing vulnerabilities across various forms of attack. In this study, we propose a disillusion paradigm based on the concept of an imitation game. At the heart of the imitation game lies a multimodal generative agent, steered by chain-of-thought reasoning, which observes, internalises and reconstructs the semantic essence of a sample, liberated from the classic pursuit of reversing the sample to its original state. As a proof of concept, we conduct experimental simulations using a multimodal generative dialogue agent and evaluate the methodology under a variety of attack scenarios.
false
false
false
false
true
false
false
false
false
false
false
true
true
false
false
false
false
false
529,030
1705.00519
Towards Verification of Uncertain Cyber-Physical Systems
Cyber-Physical Systems (CPS) pose new challenges to verification and validation that go beyond the proof of functional correctness based on high-level models. Particular challenges, especially for formal methods, are their heterogeneity and scalability. For numerical simulation, uncertain behavior can hardly be covered in a comprehensive way, which motivates the use of symbolic methods. The paper describes an approach for symbolic simulation-based verification of CPS with uncertainties. We define a symbolic model and representation of uncertain computations: Affine Arithmetic Decision Diagrams. Then we integrate this approach in the SystemC AMS simulator that supports simulation in different models of computation. We demonstrate the approach by analyzing a water-level monitor with uncertainties, self-diagnosis, and error-reactions.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
true
72,695
1309.5014
Characterizing and modeling an electoral campaign in the context of Twitter: 2011 Spanish Presidential Election as a case study
Transmitting messages as efficiently as possible has always been one of politicians' main concerns during electoral processes. Due to the rapidly growing number of users, online social networks have become ideal platforms for politicians to interact with their potential voters. Exploiting the available potential of these tools to maximize their influence over voters is one of politicians' actual challenges. As a step in this direction, we have analyzed the user activity in the online social network Twitter during the 2011 Spanish Presidential electoral process, and found that such activity is correlated with the election results. We introduce a new measure to study political support in Twitter, which we call the Relative Support. We have also characterized user behavior by analyzing the structural and dynamical patterns of the complex networks emergent from the mention and retweet networks. Our results suggest that the collective attention is driven by a very small fraction of users. Furthermore, we have analyzed the interactions taking place among politicians, observing a lack of debate. Finally, we develop a network growth model to reproduce the interactions taking place among politicians.
false
false
false
true
false
false
false
false
false
false
false
false
false
true
false
false
false
false
27,138
2012.07538
Sentiment analysis in Bengali via transfer learning using multi-lingual BERT
Sentiment analysis (SA) in Bengali is challenging due to this Indo-Aryan language's highly inflected properties, with more than 160 different inflected forms for verbs, 36 different forms for nouns and 24 different forms for pronouns. The lack of standard labeled datasets in the Bengali domain makes the task of SA even harder. In this paper, we present manually tagged 2-class and 3-class SA datasets in Bengali. We also demonstrate that the multi-lingual BERT model with relevant extensions can be trained via the approach of transfer learning over those novel datasets to improve the state-of-the-art performance in sentiment classification tasks. This deep learning model achieves an accuracy of 71\% for 2-class sentiment classification compared to the current state-of-the-art accuracy of 68\%. We also present the very first Bengali SA classifier for the 3-class manually tagged dataset, and our proposed model achieves an accuracy of 60\%. We further use this model to analyze the sentiment of public comments in the online daily newspaper. Our analysis shows that people post negative comments for political or sports news more often, while the religious article comments represent positive sentiment. The dataset and code are publicly available at https://github.com/KhondokerIslam/Bengali\_Sentiment.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
211,494
2404.06795
Extracting Clean and Balanced Subset for Noisy Long-tailed Classification
Real-world datasets usually are class-imbalanced and corrupted by label noise. To solve the joint issue of long-tailed distribution and label noise, most previous works usually aim to design a noise detector to distinguish the noisy and clean samples. Despite their effectiveness, they may be limited in handling the joint issue effectively in a unified way. In this work, we develop a novel pseudo labeling method using class prototypes from the perspective of distribution matching, which can be solved with optimal transport (OT). By setting a manually-specified probability measure and using a learned transport plan to pseudo-label the training samples, the proposed method can reduce the side-effects of noisy and long-tailed data simultaneously. Then we introduce a simple yet effective filter criterion by combining the observed labels and pseudo labels to obtain a more balanced and less noisy subset for robust model training. Extensive experiments demonstrate that our method can extract this class-balanced subset with clean labels, which brings effective performance gains for long-tailed classification with label noise.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
445,607
2406.15575
Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification. When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph and keep the full graph adjacency and node embeddings in memory (which is often infeasible) or mini-batch sample the graph (which results in exponentially growing computational complexities with respect to the number of GNN layers). Various sampling-based and historical-embedding-based methods are proposed to avoid this exponential growth of complexities. However, none of these solutions eliminates the linear dependence on graph size. This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size by training GNNs atop a few compact sketches of graph adjacency and node embeddings. Based on polynomial tensor-sketch (PTS) theory, our framework provides a novel protocol for sketching non-linear activations and graph convolution matrices in GNNs, as opposed to existing methods that sketch linear weights or gradients in neural networks. In addition, we develop a locality-sensitive hashing (LSH) technique that can be trained to improve the quality of sketches. Experiments on large-graph benchmarks demonstrate the scalability and competitive performance of our Sketch-GNNs versus their full-size GNN counterparts.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
466,780
1710.10898
Learning to solve inverse problems using Wasserstein loss
We propose using the Wasserstein loss for training in inverse problems. In particular, we consider a learned primal-dual reconstruction scheme for ill-posed inverse problems using the Wasserstein distance as loss function in the learning. This is motivated by misalignments in training data, which, when using the standard mean squared error loss, could severely degrade reconstruction quality. We prove that training with the Wasserstein loss gives a reconstruction operator that correctly compensates for misalignments in certain cases, whereas training with the mean squared error gives a smeared reconstruction. Moreover, we demonstrate these effects by training a reconstruction algorithm using both mean squared error and optimal transport loss for a problem in computerized tomography.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
83,491
2406.00816
Invisible Backdoor Attacks on Diffusion Models
In recent years, diffusion models have achieved remarkable success in the realm of high-quality image generation, garnering increased attention. This surge in interest is paralleled by a growing concern over the security threats associated with diffusion models, largely attributed to their susceptibility to malicious exploitation. Notably, recent research has brought to light the vulnerability of diffusion models to backdoor attacks, enabling the generation of specific target images through corresponding triggers. However, prevailing backdoor attack methods rely on manually crafted trigger generation functions, often manifesting as discernible patterns incorporated into input noise, thus rendering them susceptible to human detection. In this paper, we present an innovative and versatile optimization framework designed to acquire invisible triggers, enhancing the stealthiness and resilience of inserted backdoors. Our proposed framework is applicable to both unconditional and conditional diffusion models, and notably, we are the pioneers in demonstrating the backdooring of diffusion models within the context of text-guided image editing and inpainting pipelines. Moreover, we also show that the backdoors in the conditional generation can be directly applied to model watermarking for model ownership verification, which further boosts the significance of the proposed framework. Extensive experiments on various commonly used samplers and datasets verify the efficacy and stealthiness of the proposed framework. Our code is publicly available at https://github.com/invisibleTriggerDiffusion/invisible_triggers_for_diffusion.
false
false
false
false
false
false
true
false
false
false
false
true
true
false
false
false
false
false
460,039
2009.05889
Identifying Grey-box Thermal Models with Bayesian Neural Networks
Smart thermostats are one of the most prevalent home automation products. They learn occupant preferences and schedules, and utilize an accurate thermal model to reduce the energy use of heating and cooling equipment while maintaining the temperature for maximum comfort. Despite the importance of having an accurate thermal model for the operation of smart thermostats, fast and reliable identification of this model is still an open problem. In this paper, we explore various techniques for establishing a suitable thermal model using time series data generated by smart thermostats. We show that Bayesian neural networks can be used to estimate parameters of a grey-box thermal model if sufficient training data is available, and this model outperforms several black-box models in terms of the temperature prediction accuracy. Leveraging real data from 8,884 homes equipped with smart thermostats, we discuss how the prior knowledge about the model parameters can be utilized to quickly build an accurate thermal model for another home with similar floor area and age in the same climate zone. Moreover, we investigate how to adapt the model originally built for the same home in another season using a small amount of data collected in this season. Our results confirm that maintaining only a small number of pre-trained thermal models will suffice to quickly build accurate thermal models for many other homes, and that 1 day of smart thermostat data could significantly improve the accuracy of transferred models in another season.
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
195,461
2209.09111
DeePhy: On Deepfake Phylogeny
Deepfake refers to tailored and synthetically generated videos which are now prevalent and spreading on a large scale, threatening the trustworthiness of the information available online. While existing datasets contain different kinds of deepfakes which vary in their generation technique, they do not consider progression of deepfakes in a "phylogenetic" manner. It is possible that an existing deepfake face is swapped with another face. This process of face swapping can be performed multiple times and the resultant deepfake can be evolved to confuse the deepfake detection algorithms. Further, many databases do not provide the employed generative model as target labels. Model attribution helps in enhancing the explainability of the detection results by providing information on the generative model employed. In order to enable the research community to address these questions, this paper proposes DeePhy, a novel Deepfake Phylogeny dataset which consists of 5040 deepfake videos generated using three different generation techniques. There are 840 videos of one-time swapped deepfakes, 2520 videos of two-times swapped deepfakes and 1680 videos of three-times swapped deepfakes. With over 30 GBs in size, the database is prepared in over 1100 hours using 18 GPUs of 1,352 GB cumulative memory. We also present the benchmark on DeePhy dataset using six deepfake detection algorithms. The results highlight the need to evolve the research of model attribution of deepfakes and generalize the process over a variety of deepfake generation techniques. The database is available at: http://iab-rubric.org/deephy-database
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
318,387
2009.05708
A Review on Cyber Crimes on the Internet of Things
Internet of Things (IoT) devices are rapidly becoming universal. The success of IoT cannot be ignored in today's scenario, and attacks and threats on IoT devices and facilities are also increasing day by day. Cyber attacks have become a part of IoT, affecting the lives and society of users, so steps must be taken to defend cyberspace seriously. Cybercrimes threaten the infrastructure of governments and businesses globally and can damage users in innumerable ways. Global cybercrime damages are predicted to cost the global economy up to 6 trillion dollars annually, with an estimated 328 million dollars in annual losses from cyber attacks in Australia alone. Various steps have been taken to slow down these attacks, but unfortunately without much success. Therefore, a secure IoT is the need of the hour, and the attacks and threats in IoT structures should be studied and understood. The reasons for cyber-attacks include countries having weak cyber security, cybercriminals using new technologies to attack, and cybercrime being possible through services and other business schemes. Managed Service Providers (MSPs) face different difficulties in fighting cybercrime: they have to ensure the security of the customer as well as their own security in terms of servers, devices, and systems. Hence, they must use effective, fast, and easily usable antivirus and antimalware tools.
false
false
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
195,398
2201.07681
Learning to Rank For Push Notifications Using Pairwise Expected Regret
Listwise ranking losses have been widely studied in recommender systems. However, new paradigms of content consumption present new challenges for ranking methods. In this work we contribute an analysis of learning to rank for personalized mobile push notifications and discuss the unique challenges this presents compared to traditional ranking problems. To address these challenges, we introduce a novel ranking loss based on weighting the pairwise loss between candidates by the expected regret incurred for misordering the pair. We demonstrate that the proposed method can outperform prior methods both in a simulated environment and in a production experiment on a major social network.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
276,101
2401.08740
SiT: Exploring Flow and Diffusion-based Generative Models with Scalable Interpolant Transformers
We present Scalable Interpolant Transformers (SiT), a family of generative models built on the backbone of Diffusion Transformers (DiT). The interpolant framework, which allows for connecting two distributions in a more flexible way than standard diffusion models, makes possible a modular study of various design choices impacting generative models built on dynamical transport: learning in discrete or continuous time, the objective function, the interpolant that connects the distributions, and deterministic or stochastic sampling. By carefully introducing the above ingredients, SiT surpasses DiT uniformly across model sizes on the conditional ImageNet 256x256 and 512x512 benchmark using the exact same model structure, number of parameters, and GFLOPs. By exploring various diffusion coefficients, which can be tuned separately from learning, SiT achieves an FID-50K score of 2.06 and 2.62, respectively.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
422,018
1704.03755
Unsupervised part learning for visual recognition
Part-based image classification aims at representing categories by small sets of learned discriminative parts, upon which an image representation is built. Considered as a promising avenue a decade ago, this direction has been neglected since the advent of deep neural networks. In this context, this paper brings two contributions: first, it shows that despite the recent success of end-to-end holistic models, explicit part learning can boost classification performance. Second, this work proceeds one step further than recent part-based models (PBM), focusing on how to learn parts without using any labeled data. Instead of learning a set of parts per class, as generally done in the PBM literature, the proposed approach both constructs a partition of a given set of images into visually similar groups, and subsequently learns a set of discriminative parts per group in a fully unsupervised fashion. This strategy opens the door to the use of PBM in new applications for which the notion of image categories is irrelevant, such as instance-based image retrieval, for example. We experimentally show that our learned parts can help build efficient image representations, for classification as well as for indexing tasks, resulting in performance superior to holistic state-of-the-art Deep Convolutional Neural Networks (DCNN) encoding.
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
false
71,689
2412.16380
LiRCDepth: Lightweight Radar-Camera Depth Estimation via Knowledge Distillation and Uncertainty Guidance
Recently, radar-camera fusion algorithms have gained significant attention as radar sensors provide geometric information that complements the limitations of cameras. However, most existing radar-camera depth estimation algorithms focus solely on improving performance, often neglecting computational efficiency. To address this gap, we propose LiRCDepth, a lightweight radar-camera depth estimation model. We incorporate knowledge distillation to enhance the training process, transferring critical information from a complex teacher model to our lightweight student model in three key domains. Firstly, low-level and high-level features are transferred by incorporating pixel-wise and pair-wise distillation. Additionally, we introduce an uncertainty-aware inter-depth distillation loss to refine intermediate depth maps during decoding. Leveraging our proposed knowledge distillation scheme, the lightweight model achieves a 6.6% improvement in MAE on the nuScenes dataset compared to the model trained without distillation. Code: https://github.com/harborsarah/LiRCDepth
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
519,488
1110.5450
Hand Tracking based on Hierarchical Clustering of Range Data
Fast and robust hand segmentation and tracking is an essential basis for gesture recognition and thus an important component for contact-less human-computer interaction (HCI). Hand gesture recognition based on 2D video data has been intensively investigated. However, in practical scenarios purely intensity based approaches suffer from uncontrollable environmental conditions like cluttered background colors. In this paper we present a real-time hand segmentation and tracking algorithm using Time-of-Flight (ToF) range cameras and intensity data. The intensity and range information is fused into one pixel value, representing its combined intensity-depth homogeneity. The scene is hierarchically clustered using a GPU based parallel merging algorithm, allowing a robust identification of both hands even for inhomogeneous backgrounds. After the detection, both hands are tracked on the CPU. Our tracking algorithm can cope with the situation that one hand is temporarily covered by the other hand.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
12,766
2305.09452
A sequential transit network design algorithm with optimal learning under correlated beliefs
Mobility service route design requires demand information to operate in a service region. Transit planners and operators can access various data sources including household travel survey data and mobile device location logs. However, when implementing a mobility system with emerging technologies, estimating demand becomes harder because of limited data resulting in uncertainty. This study proposes an artificial intelligence-driven algorithm that combines sequential transit network design with optimal learning to address the operation under limited data. An operator gradually expands its route system to avoid risks from inconsistency between designed routes and actual travel demand. At the same time, observed information is archived to update the knowledge that the operator currently uses. Three learning policies are compared within the algorithm: multi-armed bandit, knowledge gradient, and knowledge gradient with correlated beliefs. For validation, a new route system is designed on an artificial network based on public use microdata areas in New York City. Prior knowledge is reproduced from the regional household travel survey data. The results suggest that exploration considering correlations can achieve better performance compared to greedy choices in general. In future work, the problem may incorporate more complexities such as demand elasticity to travel time, no limitations to the number of transfers, and costs for expansion.
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
false
false
364,637
1607.01864
A Quadratic Programming Relaxation Approach to Compute-and-Forward Network Coding Design
Using physical layer network coding, compute-and-forward is a promising relaying scheme that effectively exploits the interference between users and thus achieves high rates. In this paper, we consider the problem of finding the optimal integer-valued coefficient vector for a relay in the compute-and-forward scheme to maximize the computation rate at that relay. Although this problem turns out to be a shortest vector problem, which is suspected to be NP-hard, we show that it can be relaxed to a series of equality-constrained quadratic programmings. The solutions of the relaxed problems serve as real-valued approximations of the optimal coefficient vector, and are quantized to a set of integer-valued vectors, from which a coefficient vector is selected. The key to the efficiency of our method is that the closed-form expressions of the real-valued approximations can be derived with the Lagrange multiplier method. Numerical results demonstrate that compared with the existing methods, our method offers comparable rates at an impressively low complexity.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
58,269
2407.17505
Survey on biomarkers in human vocalizations
Recent years have witnessed an increase in technologies that use speech to sense the health of the talker. This survey paper proposes a general taxonomy of the technologies and a broad overview of current progress and challenges. Vocal biomarkers are often secondary measures that approximate a signal of another sensor or identify an underlying mental, cognitive, or physiological state. Their measurement involves disturbances and uncertainties that may be considered as noise sources, and the biomarkers are coarsely qualified in terms of the various sources of noise involved in their determination. While in some proposed biomarkers the error levels seem high, there are vocal biomarkers where the errors are expected to be low and thus are more likely to qualify as candidates for adoption in healthcare applications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
476,011
2308.10758
DepreSym: A Depression Symptom Annotated Corpus and the Role of LLMs as Assessors of Psychological Markers
Computational methods for depression detection aim to mine traces of depression from online publications posted by Internet users. However, solutions trained on existing collections exhibit limited generalisation and interpretability. To tackle these issues, recent studies have shown that identifying depressive symptoms can lead to more robust models. The eRisk initiative fosters research on this area and has recently proposed a new ranking task focused on developing search methods to find sentences related to depressive symptoms. This search challenge relies on the symptoms specified by the Beck Depression Inventory-II (BDI-II), a questionnaire widely used in clinical practice. Based on the participant systems' results, we present the DepreSym dataset, consisting of 21580 sentences annotated according to their relevance to the 21 BDI-II symptoms. The labelled sentences come from a pool of diverse ranking methods, and the final dataset serves as a valuable resource for advancing the development of models that incorporate depressive markers such as clinical symptoms. Due to the complex nature of this relevance annotation, we designed a robust assessment methodology carried out by three expert assessors (including an expert psychologist). Additionally, we explore here the feasibility of employing recent Large Language Models (ChatGPT and GPT4) as potential assessors in this complex task. We undertake a comprehensive examination of their performance, determine their main limitations and analyze their role as a complement or replacement for human annotators.
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
386,867
2002.03236
Tactile Dexterity: Manipulation Primitives with Tactile Feedback
This paper develops closed-loop tactile controllers for dexterous robotic manipulation with a dual-palm robotic system. Tactile dexterity is an approach to dexterous manipulation that plans for robot/object interactions that render interpretable tactile information for control. We divide the role of tactile control into two goals: 1) control the contact state between the end-effector and the object (contact/no-contact, stick/slip) by regulating the stability of planned contact configurations and monitoring undesired slip events; and 2) control the object state by tactile-based tracking and iterative replanning of the object and robot trajectories. Key to this formulation is the decomposition of manipulation plans into sequences of manipulation primitives with simple mechanics and efficient planners. We consider the scenario of manipulating an object from an initial pose to a target pose on a flat surface while correcting for external perturbations and uncertainty in the initial pose of the object. We experimentally validate the approach with an ABB YuMi dual-arm robot and demonstrate the ability of the tactile controller to react to external perturbations.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
163,190
2206.07902
On Privacy and Personalization in Cross-Silo Federated Learning
While the application of differential privacy (DP) has been well-studied in cross-device federated learning (FL), there is a lack of work considering DP and its implications for cross-silo FL, a setting characterized by a limited number of clients each containing many data subjects. In cross-silo FL, usual notions of client-level DP are less suitable as real-world privacy regulations typically concern the in-silo data subjects rather than the silos themselves. In this work, we instead consider an alternative notion of silo-specific sample-level DP, where silos set their own privacy targets for their local examples. Under this setting, we reconsider the roles of personalization in federated learning. In particular, we show that mean-regularized multi-task learning (MR-MTL), a simple personalization framework, is a strong baseline for cross-silo FL: under stronger privacy requirements, silos are incentivized to federate more with each other to mitigate DP noise, resulting in consistent improvements relative to standard baseline methods. We provide an empirical study of competing methods as well as a theoretical characterization of MR-MTL for mean estimation, highlighting the interplay between privacy and cross-silo data heterogeneity. Our work serves to establish baselines for private cross-silo FL as well as identify key directions of future work in this area.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
302,930
1501.04183
Holographic Transformation, Belief Propagation and Loop Calculus for Generalized Probabilistic Theories
The holographic transformation, belief propagation and loop calculus are generalized to problems in generalized probabilistic theories including quantum mechanics. In this work, the partition function of a classical factor graph is represented by an inner product of two high-dimensional vectors both of which can be decomposed to tensor products of low-dimensional vectors. On this representation, the holographic transformation is clearly understood by using adjoint linear maps. Furthermore, on the formulation using the inner product, the belief propagation is naturally defined from the derivation of the loop calculus formula. As a consequence, the holographic transformation, the belief propagation and the loop calculus are generalized to measurement problems in quantum mechanics and generalized probabilistic theories.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
39,333
1003.0339
libtissue - implementing innate immunity
In a previous paper the authors argued the case for incorporating ideas from innate immunity into artificial immune systems (AISs) and presented an outline for a conceptual framework for such systems. A number of key general properties observed in the biological innate and adaptive immune systems were highlighted, and how such properties might be instantiated in artificial systems was discussed in detail. The next logical step is to take these ideas and build a software system with which AISs with these properties can be implemented and experimentally evaluated. This paper reports on the results of that step - the libtissue system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
false
5,811
2407.16724
Structure-aware Domain Knowledge Injection for Large Language Models
This paper introduces a pioneering methodology, termed StructTuning, to efficiently transform foundation Large Language Models (LLMs) into domain specialists. It significantly reduces the training corpus needs to a mere 5% while achieving an impressive 100% of traditional knowledge injection performance. Motivated by structured human education, we propose a novel two-stage strategy for knowledge injection and alignment: Structure-aware Continual Pre-Training (SCPT) and Structure-aware Supervised Fine-Tuning (SSFT). In the SCPT phase, we automatically extract the domain knowledge taxonomy and reorganize the training corpora, enabling LLMs to effectively link textual segments to targeted knowledge points within the taxonomy. In the SSFT phase, we explicitly prompt models to elucidate the underlying knowledge structure in their outputs, leveraging the structured domain insight to address practical problems. Our ultimate method was extensively evaluated across model architectures and scales on LongBench and MMedBench datasets, demonstrating superior performance against other knowledge injection methods. We also explored our method's scalability across different training corpus sizes, laying the foundation to enhance domain-specific LLMs with better data utilization.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
475,704
2112.05134
A Shared Representation for Photorealistic Driving Simulators
A powerful simulator highly decreases the need for real-world tests when training and evaluating autonomous vehicles. Data-driven simulators flourished with the recent advancement of conditional Generative Adversarial Networks (cGANs), providing high-fidelity images. The main challenge is synthesizing photorealistic images while following given constraints. In this work, we propose to improve the quality of generated images by rethinking the discriminator architecture. The focus is on the class of problems where images are generated given semantic inputs, such as scene segmentation maps or human body poses. We build on successful cGAN models to propose a new semantically-aware discriminator that better guides the generator. We aim to learn a shared latent representation that encodes enough information to jointly do semantic segmentation, content reconstruction, along with a coarse-to-fine grained adversarial reasoning. The achieved improvements are generic and simple enough to be applied to any architecture of conditional image synthesis. We demonstrate the strength of our method on the scene, building, and human synthesis tasks across three different datasets. The code is available at https://github.com/vita-epfl/SemDisc.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
270,743
1508.05269
Modeling Radicalization Phenomena in Heterogeneous Populations
The phenomenon of radicalization is investigated within a mixed population composed of core and sensitive subpopulations. The latter includes first- to third-generation immigrants. Respective ways of life may be partially incompatible. In case of a conflict, core agents behave as inflexible about the issue. In contrast, sensitive agents can decide either to live peacefully adjusting their way of life to the core one, or to oppose it with eventually joining violent activities. The interplay dynamics between peaceful and opponent sensitive agents is driven by pairwise interactions. These interactions occur both within the sensitive population and by mixing with core agents. The update process is monitored using a Lotka-Volterra-like Ordinary Differential Equation. Given an initial tiny minority of opponents that coexist with both inflexible and peaceful agents, we investigate implications on the emergence of radicalization. Opponents try to turn peaceful agents to opponents driving radicalization. However, inflexible core agents may step in to bring back opponents to a peaceful choice thus weakening the phenomenon. The required minimum individual core involvement to actually curb radicalization is calculated. It is found to be a function of both the majority or minority status of the sensitive subpopulation with respect to the core subpopulation and the degree of activeness of opponents. The results highlight the instrumental role core agents can have to hinder radicalization within the sensitive subpopulation. Some hints are outlined to favor novel public policies towards social integration.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
46,217
2201.08474
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Backdoor attacks (BAs) are an emerging threat to deep neural network classifiers. A victim classifier will predict to an attacker-desired target class whenever a test sample is embedded with the same backdoor pattern (BP) that was used to poison the classifier's training set. Detecting whether a classifier is backdoor attacked is not easy in practice, especially when the defender is, e.g., a downstream user without access to the classifier's training set. This challenge is addressed here by a reverse-engineering defense (RED), which has been shown to yield state-of-the-art performance in several domains. However, existing REDs are not applicable when there are only {\it two classes} or when {\it multiple attacks} are present. These scenarios are first studied in the current paper, under the practical constraints that the defender neither has access to the classifier's training set nor to supervision from clean reference classifiers trained for the same domain. We propose a detection framework based on BP reverse-engineering and a novel {\it expected transferability} (ET) statistic. We show that our ET statistic is effective {\it using the same detection threshold}, irrespective of the classification domain, the attack configuration, and the BP reverse-engineering algorithm that is used. The excellent performance of our method is demonstrated on six benchmark datasets. Notably, our detection framework is also applicable to multi-class scenarios with multiple attacks. Code is available at https://github.com/zhenxianglance/2ClassBADetection.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
276,350
0802.3535
Approximate Capacity of Gaussian Relay Networks
We present an achievable rate for general Gaussian relay networks. We show that the achievable rate is within a constant number of bits from the information-theoretic cut-set upper bound on the capacity of these networks. This constant depends on the topology of the network, but not the values of the channel gains. Therefore, we uniformly characterize the capacity of Gaussian relay networks within a constant number of bits, for all channel parameters.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
1,340
2407.14076
Domain-Specific Pretraining of Language Models: A Comparative Study in the Medical Field
There are many cases where LLMs are used for specific tasks in a single domain. These usually require less general, but more domain-specific knowledge. Highly capable, general-purpose state-of-the-art language models like GPT-4 or Claude-3-opus can often be used for such tasks, but they are very large and cannot be run locally, even if they were not proprietary. This can be a problem when working with sensitive data. This paper focuses on domain-specific and mixed-domain pretraining as potentially more efficient methods than general pretraining for specialized language models. We will take a look at work related to domain-specific pretraining, specifically in the medical area, and compare benchmark results of specialized language models to general-purpose language models.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
474,636
1703.00194
Stochastic Functional Gradient for Motion Planning in Continuous Occupancy Maps
Safe path planning is a crucial component in autonomous robotics. The many approaches to find a collision free path can be categorically divided into trajectory optimisers and sampling-based methods. When planning using occupancy maps, the sampling-based approach is the prevalent method. The main drawback of such techniques is that the reasoning about the expected cost of a plan is limited to the search heuristic used by each method. We introduce a novel planning method based on trajectory optimisation to plan safe and efficient paths in continuous occupancy maps. We extend the expressiveness of the state-of-the-art functional gradient optimisation methods by devising a stochastic gradient update rule to optimise a path represented as a Gaussian process. This approach avoids the need to commit to a specific resolution of the path representation, whether spatial or parametric. We utilise a continuous occupancy map representation in order to define our optimisation objective, which enables fast computation of occupancy gradients. We show that this approach is essential in order to ensure convergence to the optimal path, and present results and comparisons to other planning methods in both simulation and with real laser data. The experiments demonstrate the benefits of using this technique when planning for safe and efficient paths in continuous occupancy maps.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
69,126
2210.09055
Data-driven multi-scale modeling and robust optimization of composite structure with uncertainty quantification
It is important to accurately model materials' properties at lower length scales (micro-level) while translating the effects to the components and/or system level (macro-level) can significantly reduce the amount of experimentation required to develop new technologies. Robustness analysis of fuel and structural performance for harsh environments (such as power uprated reactor systems or aerospace applications) using machine learning-based multi-scale modeling and robust optimization under uncertainties are required. The fiber and matrix material characteristics are potential sources of uncertainty at the microscale. The stacking sequence (angles of stacking and thickness of layers) of composite layers causes meso-scale uncertainties. It is also possible for macro-scale uncertainties to arise from system properties, like the load or the initial conditions. This chapter demonstrates advanced data-driven methods and outlines the specific capability that must be developed/added for the multi-scale modeling of advanced composite materials. This chapter proposes a multi-scale modeling method for composite structures based on a finite element method (FEM) simulation driven by surrogate models/emulators based on microstructurally informed meso-scale materials models to study the impact of operational parameters/uncertainties using machine learning approaches. To ensure optimal composite materials, composite properties are optimized with respect to initial materials volume fraction using data-driven numerical algorithms.
false
false
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
324,408
2412.06015
siForest: Detecting Network Anomalies with Set-Structured Isolation Forest
As cyber threats continue to evolve in sophistication and scale, the ability to detect anomalous network behavior has become critical for maintaining robust cybersecurity defenses. Modern cybersecurity systems face the overwhelming challenge of analyzing billions of daily network interactions to identify potential threats, making efficient and accurate anomaly detection algorithms crucial for network defense. This paper investigates the use of variations of the Isolation Forest (iForest) machine learning algorithm for detecting anomalies in internet scan data. In particular, it presents the Set-Partitioned Isolation Forest (siForest), a novel extension of the iForest method designed to detect anomalies in set-structured data. By treating instances such as sets of multiple network scans with the same IP address as cohesive units, siForest effectively addresses some challenges of analyzing complex, multidimensional datasets. Extensive experiments on synthetic datasets simulating diverse anomaly scenarios in network traffic demonstrate that siForest has the potential to outperform traditional approaches on some types of internet scan data.
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
false
515,057
1802.08328
On Looking for Local Expansion Invariants in Argumentation Semantics: a Preliminary Report
We study invariant local expansion operators for conflict-free and admissible sets in Abstract Argumentation Frameworks (AFs). Such operators are directly applied on AFs, and are invariant with respect to a chosen "semantics" (that is w.r.t. each of the conflict free/admissible set of arguments). Accordingly, we derive a definition of robustness for AFs in terms of the number of times such operators can be applied without producing any change in the chosen semantics.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
91,079
1412.8736
Sharing Information Without Regret in Managed Stochastic Games
This paper considers information sharing in a multi-player repeated game. Every round, each player observes a subset of components of a random vector and then takes a control action. The utility earned by each player depends on the full random vector and on the actions of others. An example is a game where different rewards are placed over multiple locations, each player only knows the rewards in a subset of the locations, and players compete to collect the rewards. Sharing information can help others, but can also increase competition for desirable locations. Standard Nash equilibrium and correlated equilibrium concepts are inadequate in this scenario. Instead, this paper develops an algorithm where, every round, all players pass their information and intended actions to a game manager. The manager provides suggested actions for each player that, if taken, maximize a concave function of average utilities subject to the constraint that each player gets an average utility no worse than it would get without sharing. The algorithm acts online using information given at each round and does not require a specific model of random events or player actions. Thus, the analytical results of this paper apply in non-ergodic situations with any sequence of actions taken by human players.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
false
true
38,947
2409.16595
Robo-Platform: A Robotic System for Recording Sensors and Controlling Robots
Mobile smartphones compactly provide sensors such as cameras, IMUs, GNSS measurement units, and wireless and wired communication channels required for robotics projects. They are affordable, portable, and programmable, which makes them ideal for testing, data acquisition, controlling mobile robots, and many other robotic applications. A robotic system is proposed in this paper, consisting of an Android phone, a microcontroller board attached to the phone via USB, and a remote wireless controller station. In the data acquisition mode, the Android device can record a dataset of a diverse configuration of multiple cameras, IMUs, GNSS units, and external USB ADC channels in the rawest format used for, but not limited to, pose estimation and scene reconstruction applications. In robot control mode, the Android phone, a microcontroller board, and other peripherals constitute the mobile or stationary robotic system. This system is controlled using a remote server connected over Wi-Fi or Bluetooth. Experiments show that although the SLAM and AR applications can utilize the acquired data, the proposed system can pave the way for more advanced algorithms for processing these noisy and sporadic measurements. Moreover, the characteristics of the communication media are studied, and two example robotic projects, which involve controlling a toy car and a quadcopter, are included.
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
491,410
1609.02462
An Eigenshapes Approach to Compressed Signed Distance Fields and Their Utility in Robot Mapping
In order to deal with the scaling problem of volumetric map representations we propose spatially local methods for high-ratio compression of 3D maps, represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare using PCA-derived low-dimensional bases to non-linear auto-encoder networks and novel mixed architectures that combine both. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as to the task of map-aided ego-motion estimation. It is demonstrated that lossily compressed distance fields used as cost functions for ego-motion estimation, can outperform their uncompressed counterparts in challenging scenarios from standard RGB-D data-sets.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
60,738
1804.01422
Unsupervised Semantic-based Aggregation of Deep Convolutional Features
In this paper, we propose a simple but effective semantic-based aggregation (SBA) method. The proposed SBA utilizes the discriminative filters of deep convolutional layers as semantic detectors. Moreover, we propose the effective unsupervised strategy to select some semantic detectors to generate the "probabilistic proposals", which highlight certain discriminative pattern of objects and suppress the noise of background. The final global SBA representation could then be acquired by aggregating the regional representations weighted by the selected "probabilistic proposals" corresponding to various semantic content. Our unsupervised SBA is easy to generalize and achieves excellent performance on various tasks. We conduct comprehensive experiments and show that our unsupervised SBA outperforms the state-of-the-art unsupervised and supervised aggregation methods on image retrieval, place recognition and cloud classification.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
94,220
2403.14364
WikiFactDiff: A Large, Realistic, and Temporally Adaptable Dataset for Atomic Factual Knowledge Update in Causal Language Models
The factuality of large language models (LLMs) tends to decay over time since events posterior to their training are "unknown" to them. One way to keep models up-to-date could be factual update: the task of inserting, replacing, or removing certain simple (atomic) facts within the model. To study this task, we present WikiFactDiff, a dataset that describes the evolution of factual knowledge between two dates as a collection of simple facts divided into three categories: new, obsolete, and static. We describe several update scenarios arising from various combinations of these three types of basic update. The facts are represented by subject-relation-object triples; indeed, WikiFactDiff was constructed by comparing the state of the Wikidata knowledge base at 4 January 2021 and 27 February 2023. Those facts are accompanied by verbalization templates and cloze tests that enable running update algorithms and their evaluation metrics. Contrary to other datasets, such as zsRE and CounterFact, WikiFactDiff constitutes a realistic update setting that involves various update scenarios, including replacements, archival, and new entity insertions. We also present an evaluation of existing update algorithms on WikiFactDiff.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
440,038
2106.05635
Contraction Analysis of Discrete-time Stochastic Systems
In this paper, we develop a novel contraction framework for stability analysis of discrete-time nonlinear systems with parameters following stochastic processes. For general stochastic processes, we first provide a sufficient condition for uniform incremental exponential stability (UIES) in the first moment with respect to a Riemannian metric. Then, focusing on the Euclidean distance, we present a necessary and sufficient condition for UIES in the second moment. By virtue of studying general stochastic processes, we can readily derive UIES conditions for special classes of processes, e.g., i.i.d. processes and Markov processes, which is demonstrated as selected applications of our results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
240,172
2409.16826
Learning phase-space flows using time-discrete implicit Runge-Kutta PINNs
We present a computational framework for obtaining multidimensional phase-space solutions of systems of non-linear coupled differential equations, using high-order implicit Runge-Kutta Physics- Informed Neural Networks (IRK-PINNs) schemes. Building upon foundational work originally solving differential equations for fields depending on coordinates [J. Comput. Phys. 378, 686 (2019)], we adapt the scheme to a context where the coordinates are treated as functions. This modification enables us to efficiently solve equations of motion for a particle in an external field. Our scheme is particularly useful for explicitly time-independent and periodic fields. We apply this approach to successfully solve the equations of motion for a mass particle placed in a central force field and a charged particle in a periodic electric field.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
true
491,532
2312.03993
Style Transfer to Calvin and Hobbes comics using Stable Diffusion
This project report summarizes our journey to perform stable diffusion fine-tuning on a dataset containing Calvin and Hobbes comics. The purpose is to convert any given input image into the comic style of Calvin and Hobbes, essentially performing style transfer. We train stable-diffusion-v1.5 using Low Rank Adaptation (LoRA) to efficiently speed up the fine-tuning process. The diffusion itself is handled by a Variational Autoencoder (VAE), which is a U-net. Our results were visually appealing for the amount of training time and the quality of input data that went into training.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
413,500
2502.06685
No Trick, No Treat: Pursuits and Challenges Towards Simulation-free Training of Neural Samplers
We consider the sampling problem, where the aim is to draw samples from a distribution whose density is known only up to a normalization constant. Recent breakthroughs in generative modeling to approximate a high-dimensional data distribution have sparked significant interest in developing neural network-based methods for this challenging problem. However, neural samplers typically incur heavy computational overhead due to simulating trajectories during training. This motivates the pursuit of simulation-free training procedures of neural samplers. In this work, we propose an elegant modification to previous methods, which allows simulation-free training with the help of a time-dependent normalizing flow. However, it ultimately suffers from severe mode collapse. On closer inspection, we find that nearly all successful neural samplers rely on Langevin preconditioning to avoid mode collapsing. We systematically analyze several popular methods with various objective functions and demonstrate that, in the absence of Langevin preconditioning, most of them fail to adequately cover even a simple target. Finally, we draw attention to a strong baseline by combining the state-of-the-art MCMC method, Parallel Tempering (PT), with an additional generative model to shed light on future explorations of neural samplers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
532,188
1912.12874
Video Depth Estimation by Fusing Flow-to-Depth Proposals
Depth from a monocular video can enable billions of devices and robots with a single camera to see the world in 3D. In this paper, we present an approach with a differentiable flow-to-depth layer for video depth estimation. The model consists of a flow-to-depth layer, a camera pose refinement module, and a depth fusion network. Given optical flow and camera pose, our flow-to-depth layer generates depth proposals and the corresponding confidence maps by explicitly solving an epipolar geometry optimization problem. Our flow-to-depth layer is differentiable, and thus we can refine camera poses by maximizing the aggregated confidence in the camera pose refinement module. Our depth fusion network can utilize depth proposals and their confidence maps inferred from different adjacent frames to produce the final depth map. Furthermore, the depth fusion network can additionally take the depth proposals generated by other methods to improve the results further. The experiments on three public datasets show that our approach outperforms state-of-the-art depth estimation methods, and has reasonable cross dataset generalization capability: our model trained on KITTI still performs well on the unseen Waymo dataset.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
158,957
2311.17132
TransNeXt: Robust Foveal Visual Perception for Vision Transformers
Due to the depth degradation effect in residual connections, many efficient Vision Transformers models that rely on stacking layers for information exchange often fail to form sufficient information mixing, leading to unnatural visual perception. To address this issue, in this paper, we propose Aggregated Attention, a biomimetic design-based token mixer that simulates biological foveal vision and continuous eye movement while enabling each token on the feature map to have a global perception. Furthermore, we incorporate learnable tokens that interact with conventional queries and keys, which further diversifies the generation of affinity matrices beyond merely relying on the similarity between queries and keys. Our approach does not rely on stacking for information exchange, thus effectively avoiding depth degradation and achieving natural visual perception. Additionally, we propose Convolutional GLU, a channel mixer that bridges the gap between GLU and SE mechanism, which empowers each token to have channel attention based on its nearest neighbor image features, enhancing local modeling capability and model robustness. We combine aggregated attention and convolutional GLU to create a new visual backbone called TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves state-of-the-art performance across multiple model sizes. At a resolution of $224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing ConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet accuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of $384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic segmentation mIoU of 54.7.
false
false
false
false
true
false
false
false
false
false
false
true
false
false
false
false
false
false
411,189
1603.07775
The Impact of Operators' Performance in the Reliability of Cyber-Physical Power Distribution Systems
Cyber-Physical Systems are the result of integrating information and communication technologies into physical systems. One particular case is Cyber-Physical Power Systems (CPPS), which use communication technologies to perform real-time monitoring and operation. These kinds of systems have become more complex, impacting the systems' characteristics, such as their reliability. In addition, it is already known that in terms of the reliability of Cyber-Physical Power Distribution Systems (CPPDS), the failures of the communication network are just as relevant as the electrical network failures. However, some aspects of operators' performance, such as response time and decision quality, during CPPDS contingencies have not yet been investigated. In this paper, we introduce a model of the operator response time, present a Sequential Monte Carlo Simulation methodology that incorporates the response time in CPPDS reliability indices estimation, and evaluate the impact of the operator response time on reliability indices. Our method is tested on a CPPDS using different values for the average response time of operators. The results show that the response time of the operators affects the reliability indices that are related to the durations of the failure, indicating that a fast decision directly contributes to the system performance. We conclude that the improvement of CPPDS reliability is not only dependent on the electric and communication components, but also on operators' performance.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
53,670
2407.11272
Differentiable Voxelization and Mesh Morphing
In this paper, we propose the differentiable voxelization of 3D meshes via the winding number and solid angles. The proposed approach achieves fast, flexible, and accurate voxelization of 3D meshes, admitting the computation of gradients with respect to the input mesh and GPU acceleration. We further demonstrate the application of the proposed voxelization in mesh morphing, where the voxelized mesh is deformed by a neural network. The proposed method is evaluated on the ShapeNet dataset and achieves state-of-the-art performance in terms of both accuracy and efficiency.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,374
1012.2628
Throughput and Latency in Finite-Buffer Line Networks
This work investigates the effect of finite buffer sizes on the throughput capacity and packet delay of line networks with packet erasure links that have perfect feedback. These performance measures are shown to be linked to the stationary distribution of an underlying irreducible Markov chain that models the system exactly. Using simple strategies, bounds on the throughput capacity are derived. The work then presents two iterative schemes to approximate the steady-state distribution of node occupancies by decoupling the chain to smaller queueing blocks. These approximate solutions are used to understand the effect of buffer sizes on throughput capacity and the distribution of packet delay. Using the exact modeling for line networks, it is shown that the throughput capacity is unaltered in the absence of hop-by-hop feedback provided packet-level network coding is allowed. Finally, using simulations, it is confirmed that the proposed framework yields accurate estimates of the throughput capacity and delay distribution and captures the vital trends and tradeoffs in these networks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
8,514
1907.01203
Proposal, Tracking and Segmentation (PTS): A Cascaded Network for Video Object Segmentation
Video object segmentation (VOS) aims at pixel-level object tracking given only the annotations in the first frame. Due to the large visual variations of objects in video and the lack of training samples, it remains a difficult task despite the rapid development of deep learning. Toward solving the VOS problem, we bring in several new insights via the proposed unified framework consisting of object proposal, tracking and segmentation components. The object proposal network transfers objectness information as generic knowledge into VOS; the tracking network identifies the target object from the proposals; and the segmentation network is performed based on the tracking results with a novel dynamic-reference based model adaptation scheme. Extensive experiments have been conducted on the DAVIS'17 dataset and the YouTube-VOS dataset; our method achieves state-of-the-art performance on several video object segmentation benchmarks. We make the code publicly available at https://github.com/sydney0zq/PTSNet.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
137,261
1909.10972
Residual Reactive Navigation: Combining Classical and Learned Navigation Strategies For Deployment in Unknown Environments
In this work we focus on improving the efficiency and generalisation of learned navigation strategies when transferred from their training environment to previously unseen ones. We present an extension of the residual reinforcement learning framework from the robotic manipulation literature and adapt it to the vast and unstructured environments that mobile robots can operate in. The concept is based on learning a residual control effect to add to a typical sub-optimal classical controller in order to close the performance gap, whilst guiding the exploration process during training for improved data efficiency. We exploit this tight coupling and propose a novel deployment strategy, switching Residual Reactive Navigation (sRRN), which yields efficient trajectories whilst probabilistically switching to a classical controller in cases of high policy uncertainty. Our approach achieves improved performance over end-to-end alternatives and can be incorporated as part of a complete navigation stack for cluttered indoor navigation tasks in the real world. The code and training environment for this project are made publicly available at https://sites.google.com/view/srrn/home.
false
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
146,679