Dataset schema (column name, dtype, summary statistic):

  id                  string   length 9-16
  title               string   length 4-278
  abstract            string   length 3-4.08k
  cs.HC               bool     2 classes
  cs.CE               bool     2 classes
  cs.SD               bool     2 classes
  cs.SI               bool     2 classes
  cs.AI               bool     2 classes
  cs.IR               bool     2 classes
  cs.LG               bool     2 classes
  cs.RO               bool     2 classes
  cs.CL               bool     2 classes
  cs.IT               bool     2 classes
  cs.SY               bool     2 classes
  cs.CV               bool     2 classes
  cs.CR               bool     2 classes
  cs.CY               bool     2 classes
  cs.MA               bool     2 classes
  cs.NE               bool     2 classes
  cs.DB               bool     2 classes
  Other               bool     2 classes
  __index_level_0__   int64    range 0-541k
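The schema above describes a multi-label setup: each row carries 18 boolean flags, one per category column, rather than a single class field. A minimal sketch of decoding one row's flags back into a list of category names (the column order is taken from the schema; `decode_labels` is an illustrative helper, not part of the dataset tooling):

```python
# The 18 boolean label columns, in the order they appear in the schema.
LABEL_COLUMNS = [
    "cs.HC", "cs.CE", "cs.SD", "cs.SI", "cs.AI", "cs.IR", "cs.LG", "cs.RO",
    "cs.CL", "cs.IT", "cs.SY", "cs.CV", "cs.CR", "cs.CY", "cs.MA", "cs.NE",
    "cs.DB", "Other",
]

def decode_labels(row: dict) -> list[str]:
    """Return the category names whose boolean flag is set in a row.

    Missing columns are treated as False, so a row may list only its
    true flags.
    """
    return [col for col in LABEL_COLUMNS if row.get(col)]

# First record below, abbreviated to its true flags.
row = {"id": "2101.04840", "cs.AI": True, "cs.LG": True, "cs.CL": True}
print(decode_labels(row))  # → ['cs.AI', 'cs.LG', 'cs.CL']
```

The inverse direction (category list to an 18-element multi-hot vector) follows the same column order and is what a classifier trained on this dataset would consume.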
2101.04840
Robustness Gym: Unifying the NLP Evaluation Landscape
Despite impressive performance on standard benchmarks, deep neural networks are often brittle when deployed in real-world systems. Consequently, recent research has focused on testing the robustness of such models, resulting in a diverse set of evaluation methodologies ranging from adversarial attacks to rule-based data transformations. In this work, we identify challenges with evaluating NLP systems and propose a solution in the form of Robustness Gym (RG), a simple and extensible evaluation toolkit that unifies 4 standard evaluation paradigms: subpopulations, transformations, evaluation sets, and adversarial attacks. By providing a common platform for evaluation, Robustness Gym enables practitioners to compare results from all 4 evaluation paradigms with just a few clicks, and to easily develop and share novel evaluation methods using a built-in set of abstractions. To validate Robustness Gym's utility to practitioners, we conducted a real-world case study with a sentiment-modeling team, revealing performance degradations of 18%+. To verify that Robustness Gym can aid novel research analyses, we perform the first study of state-of-the-art commercial and academic named entity linking (NEL) systems, as well as a fine-grained analysis of state-of-the-art summarization models. For NEL, commercial systems struggle to link rare entities and lag their academic counterparts by 10%+, while state-of-the-art summarization models struggle on examples that require abstraction and distillation, degrading by 9%+. Robustness Gym can be found at https://robustnessgym.com/
labels: cs.AI, cs.LG, cs.CL
__index_level_0__: 215,255
2106.03089
Referring Transformer: A One-step Approach to Multi-task Visual Grounding
As an important step towards visual reasoning, visual grounding (e.g., phrase localization, referring expression comprehension/segmentation) has been widely explored. Previous approaches to referring expression comprehension (REC) or segmentation (RES) either suffer from limited performance, due to a two-stage setup, or require the designing of complex task-specific one-stage architectures. In this paper, we propose a simple one-stage multi-task framework for visual grounding tasks. Specifically, we leverage a transformer architecture, where two modalities are fused in a visual-lingual encoder. In the decoder, the model learns to generate contextualized lingual queries which are then decoded and used to directly regress the bounding box and produce a segmentation mask for the corresponding referred regions. With this simple but highly contextualized model, we outperform state-of-the-art methods by a large margin on both REC and RES tasks. We also show that a simple pre-training schedule (on an external dataset) further improves the performance. Extensive experiments and ablations illustrate that our model benefits greatly from contextualized information and multi-task training.
labels: cs.CV
__index_level_0__: 239,169
1206.4723
Downlink Coverage Analysis in a Heterogeneous Cellular Network
In this paper, we consider the downlink signal-to-interference-plus-noise ratio (SINR) analysis in a heterogeneous cellular network with K tiers. Each tier is characterized by a base-station (BS) arrangement according to a homogeneous Poisson point process with certain BS density, transmission power, random shadow fading factors with arbitrary distribution, arbitrary path-loss exponent and a certain bias towards admitting the mobile-station (MS). The MS associates with the BS that has the maximum SINR under the open access cell association scheme. For such a general setting, we provide an analytical characterization of the coverage probability at the MS.
labels: cs.IT
__index_level_0__: 16,739
2211.05953
Breadth-First Pipeline Parallelism
We introduce Breadth-First Pipeline Parallelism, a novel training schedule which optimizes the combination of pipeline and data parallelism. Breadth-First Pipeline Parallelism lowers training time, cost and memory usage by combining a high GPU utilization with a small batch size per GPU, and by making use of fully sharded data parallelism. Experimentally, we observed an increase of up to 43% in training throughput for a 52 billion-parameter model using a small batch size per GPU compared to Megatron-LM, which would reduce the training time and cost by the same amount on a large GPU cluster.
labels: cs.AI, cs.LG, cs.CL, Other
__index_level_0__: 329,721
2408.03208
Personalizing Federated Instrument Segmentation with Visual Trait Priors in Robotic Surgery
Personalized federated learning (PFL) for surgical instrument segmentation (SIS) is a promising approach. It enables multiple clinical sites to collaboratively train a series of models in privacy, with each model tailored to the individual distribution of each site. Existing PFL methods rarely consider the personalization of multi-headed self-attention, and do not account for appearance diversity and instrument shape similarity, both inherent in surgical scenes. We thus propose PFedSIS, a novel PFL method with visual trait priors for SIS, incorporating global-personalized disentanglement (GPD), appearance-regulation personalized enhancement (APE), and shape-similarity global enhancement (SGE), to boost SIS performance in each site. GPD represents the first attempt at head-wise assignment for multi-headed self-attention personalization. To preserve the unique appearance representation of each site and gradually leverage the inter-site difference, APE introduces appearance regulation and provides customized layer-wise aggregation solutions via hypernetworks for each site's personalized parameters. The mutual shape information of instruments is maintained and shared via SGE, which enhances the cross-style shape consistency on the image level and computes the shape-similarity contribution of each site on the prediction level for updating the global parameters. PFedSIS outperforms state-of-the-art methods with +1.51% Dice, +2.11% IoU, -2.79 ASSD, -15.55 HD95 performance gains. The corresponding code and models will be released at https://github.com/wzjialang/PFedSIS.
labels: cs.AI, cs.RO, cs.CV
__index_level_0__: 478,932
2502.04507
Fast Video Generation with Sliding Tile Attention
Diffusion Transformers (DiTs) with 3D full attention power state-of-the-art video generation, but suffer from prohibitive compute cost -- when generating just a 5-second 720P video, attention alone takes 800 out of 945 seconds of total inference time. This paper introduces sliding tile attention (STA) to address this challenge. STA leverages the observation that attention scores in pretrained video diffusion models predominantly concentrate within localized 3D windows. By sliding and attending over the local spatial-temporal region, STA eliminates redundancy from full attention. Unlike traditional token-wise sliding window attention (SWA), STA operates tile-by-tile with a novel hardware-aware sliding window design, preserving expressiveness while being hardware-efficient. With careful kernel-level optimizations, STA offers the first efficient 2D/3D sliding-window-like attention implementation, achieving 58.79% MFU. Precisely, STA accelerates attention by 2.8-17x over FlashAttention-2 (FA2) and 1.6-10x over FlashAttention-3 (FA3). On the leading video DiT, HunyuanVideo, STA reduces end-to-end latency from 945s (FA3) to 685s without quality degradation, requiring no training. Enabling finetuning further lowers latency to 268s with only a 0.09% drop on VBench.
labels: cs.CV
__index_level_0__: 531,188
2206.14268
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models
It is crucial to automatically construct knowledge graphs (KGs) of diverse new relations to support knowledge discovery and broad applications. Previous KG construction methods, based on either crowdsourcing or text mining, are often limited to a small predefined set of relations due to manual cost or restrictions in text corpus. Recent research proposed to use pretrained language models (LMs) as implicit knowledge bases that accept knowledge queries with prompts. Yet, the implicit knowledge lacks many desirable properties of a full-scale symbolic KG, such as easy access, navigation, editing, and quality assurance. In this paper, we propose a new approach of harvesting massive KGs of arbitrary relations from pretrained LMs. With minimal input of a relation definition (a prompt and a few shot of example entity pairs), the approach efficiently searches in the vast entity pair space to extract diverse accurate knowledge of the desired relation. We develop an effective search-and-rescore mechanism for improved efficiency and accuracy. We deploy the approach to harvest KGs of over 400 new relations from different LMs. Extensive human and automatic evaluations show our approach manages to extract diverse accurate knowledge, including tuples of complex relations (e.g., "A is capable of but not good at B"). The resulting KGs as a symbolic interpretation of the source LMs also reveal new insights into the LMs' knowledge capacities.
labels: cs.CL
__index_level_0__: 305,226
2209.05804
Analyzing the Impact of Varied Window Hyper-parameters on Deep CNN for sEMG based Motion Intent Classification
The use of deep neural networks in electromyogram (EMG) based prostheses control provides a promising alternative to the hand-crafted features by automatically learning muscle activation patterns from the EMG signals. Meanwhile, the use of raw EMG signals as input to convolutional neural networks (CNN) offers a simple, fast, and ideal scheme for effective control of prostheses. Therefore, this study investigates the relationship between window length and overlap, which may influence the generation of robust raw EMG 2-dimensional (2D) signals for application in CNN, and derives a rule of thumb for a proper combination of these parameters that could guarantee optimal network performance. Moreover, we investigate the relationship between the CNN receptive window size and the raw EMG signal size. Experimental results show that the performance of the CNN increases with the increase in overlap within the generated signals, with the highest improvement of 9.49% accuracy and 23.33% F1-score realized when the overlap is 75% of the window length. Similarly, the network performance increases with the increase in receptive window (kernel) size. Findings from this study suggest that a combination of 75% overlap in 2D EMG signals and wider network kernels may provide ideal motor intent classification for an adequate EMG-CNN based prostheses control scheme.
labels: cs.CV
__index_level_0__: 317,214
2405.08276
Scalable Subsampling Inference for Deep Neural Networks
Deep neural networks (DNNs) have received increasing attention in machine learning applications in the last several years. Recently, a non-asymptotic error bound has been developed to measure the performance of the fully connected DNN estimator with ReLU activation functions for estimating regression models. The paper at hand gives a small improvement on the current error bound based on the latest results on the approximation ability of DNNs. More importantly, however, a non-random subsampling technique--scalable subsampling--is applied to construct a `subagged' DNN estimator. Under regularity conditions, it is shown that the subagged DNN estimator is computationally efficient without sacrificing accuracy for either estimation or prediction tasks. Beyond point estimation/prediction, we propose different approaches to build confidence and prediction intervals based on the subagged DNN estimator. In addition to being asymptotically valid, the proposed confidence/prediction intervals appear to work well in finite samples. All in all, the scalable subsampling DNN estimator offers the complete package in terms of statistical inference, i.e., (a) computational efficiency; (b) point estimation/prediction accuracy; and (c) allowing for the construction of practically useful confidence and prediction intervals.
labels: cs.LG
__index_level_0__: 454,040
2406.04950
Robotic in-hand manipulation with relaxed optimization
Dexterous in-hand manipulation is a unique and valuable human skill requiring sophisticated sensorimotor interaction with the environment while respecting stability constraints. Satisfying these constraints with generated motions is essential for a robotic platform to achieve reliable in-hand manipulation skills. Explicitly modelling these constraints can be challenging, but they can be implicitly modelled and learned through experience or human demonstrations. We propose a learning and control approach based on dictionaries of motion primitives generated from human demonstrations. To achieve this, we defined an optimization process that combines motion primitives to generate robot fingertip trajectories for moving an object from an initial to a desired final pose. Based on our experiments, our approach allows a robotic hand to handle objects like humans, adhering to stability constraints without requiring explicit formalization. In other words, the proposed motion primitive dictionaries learn and implicitly embed the constraints crucial to the in-hand manipulation task.
labels: cs.RO
__index_level_0__: 461,928
2410.07250
Reconstruction of Particle Flow Energy Distribution Using Deep Learning Algorithms
In high-energy particle physics, extracting information from complex detector signals is crucial for energy reconstruction. Recent advancements involve using deep learning to process calorimeter images from various sub-detectors in experiments like the Large Hadron Collider (LHC) for energy map reconstruction. This paper compares classical algorithms (MLP, CNN, U-Net, and RNN) with variants that include self-attention and 3D convolution modules to evaluate their effectiveness in reconstructing the initial energy distribution. Additionally, a test dataset of jet events is utilized to analyze and compare models' performance in handling anomalous high-energy events. The analysis highlights the effectiveness of deep learning techniques for energy image reconstruction and explores their potential in this area.
labels: cs.AI
__index_level_0__: 496,555
2004.01929
Empirical Evaluation of PRNU Fingerprint Variation for Mismatched Imaging Pipelines
We assess the variability of PRNU-based camera fingerprints with mismatched imaging pipelines (e.g., different camera ISP or digital darkroom software). We show that camera fingerprints exhibit non-negligible variations in this setup, which may lead to unexpected degradation of detection statistics in real-world use-cases. We tested 13 different pipelines, including standard digital darkroom software and recent neural-networks. We observed that correlation between fingerprints from mismatched pipelines drops on average to 0.38 and the PCE detection statistic drops by over 40%. The degradation in error rates is the strongest for small patches commonly used in photo manipulation detection, and when neural networks are used for photo development. At a fixed 0.5% FPR setting, the TPR drops by 17 ppt (percentage points) for 128 px and 256 px patches.
labels: cs.CV
__index_level_0__: 171,060
2311.16125
Vision-Based Incoming Traffic Estimator Using Deep Neural Network on General Purpose Embedded Hardware
Traffic management is a serious problem in many cities around the world. Even the suburban areas are now experiencing regular traffic congestion. Inappropriate traffic control wastes fuel, time, and the productivity of nations. Though traffic signals are used to improve traffic flow, they often cause problems due to inappropriate or obsolete timing that does not tally with the actual traffic intensity at the intersection. Traffic intensity determination based on statistical methods only gives the average intensity expected at any given time. However, to control traffic accurately, it is required to know the real-time traffic intensity. In this research, image processing and machine learning have been used to estimate actual traffic intensity in real time. General-purpose electronic hardware has been used for in-situ image processing based on the edge-detection method. A deep neural network (DNN) was trained to infer traffic intensity in each image in real time. The trained DNN estimated traffic intensity accurately in 90% of the real-time images during road tests. The electronic system was implemented on a Raspberry Pi single-board computer; hence, it is cost-effective for large-scale deployment.
labels: cs.LG, cs.CV
__index_level_0__: 410,777
2104.03920
Finding Experts in Social Media Data using a Hybrid Approach
Several approaches to the problem of expert finding have emerged in computer science research. In this work, three of these approaches - content analysis, social graph analysis, and the use of Semantic Web technologies - are examined. An integrated set of system requirements is then developed that uses all three approaches in one hybrid approach. To show the practicality of this hybrid approach, a usable prototype expert finding system called ExpertQuest is developed using a modern functional programming language (Clojure) to query social media data and Linked Data. This system is evaluated and discussed. Finally, a discussion and conclusions are presented which describe the benefits and shortcomings of the hybrid approach and the technologies used in this work.
labels: cs.SI, cs.AI
__index_level_0__: 229,223
1910.00478
Machine Translation for Machines: the Sentiment Classification Use Case
We propose a neural machine translation (NMT) approach that, instead of pursuing adequacy and fluency ("human-oriented" quality criteria), aims to generate translations that are best suited as input to a natural language processing component designed for a specific downstream task (a "machine-oriented" criterion). Towards this objective, we present a reinforcement learning technique based on a new candidate sampling strategy, which exploits the results obtained on the downstream task as weak feedback. Experiments in sentiment classification of Twitter data in German and Italian show that feeding an English classifier with machine-oriented translations significantly improves its performance. Classification results outperform those obtained with translations produced by general-purpose NMT models as well as by an approach based on reinforcement learning. Moreover, our results on both languages approximate the classification accuracy computed on gold standard English tweets.
labels: cs.CL
__index_level_0__: 147,675
2211.04786
Leveraging Sequentiality in Reinforcement Learning from a Single Demonstration
Deep Reinforcement Learning has been successfully applied to learn robotic control. However, the corresponding algorithms struggle when applied to problems where the agent is only rewarded after achieving a complex task. In this context, using demonstrations can significantly speed up the learning process, but demonstrations can be costly to acquire. In this paper, we propose to leverage a sequential bias to learn control policies for complex robotic tasks using a single demonstration. To do so, our method learns a goal-conditioned policy to control a system between successive low-dimensional goals. This sequential goal-reaching approach raises a problem of compatibility between successive goals: we need to ensure that the state resulting from reaching a goal is compatible with the achievement of the following goals. To tackle this problem, we present a new algorithm called DCIL-II. We show that DCIL-II can solve with unprecedented sample efficiency some challenging simulated tasks such as humanoid locomotion and stand-up as well as fast running with a simulated Cassie robot. Our method leveraging sequentiality is a step towards the resolution of complex robotic tasks under minimal specification effort, a key feature for the next generation of autonomous robots.
labels: cs.AI, cs.RO
__index_level_0__: 329,352
2502.00131
Middleman Bias in Advertising: Aligning Relevance of Keyphrase Recommendations with Search
E-commerce sellers are recommended keyphrases based on their inventory on which they advertise to increase buyer engagement (clicks/sales). Keyphrases must be pertinent to items; otherwise, it can result in seller dissatisfaction and poor targeting -- towards that end relevance filters are employed. In this work, we describe the shortcomings of training relevance filter models on biased click/sales signals. We re-conceptualize advertiser keyphrase relevance as interaction between two dynamical systems -- Advertising which produces the keyphrases and Search which acts as a middleman to reach buyers. We discuss the bias of search relevance systems (middleman bias) and the need to align advertiser keyphrases with search relevance signals. We also compare the performance of cross encoders and bi-encoders in modeling this alignment and the scalability of such a solution for sellers at eBay.
labels: cs.IR
__index_level_0__: 529,223
2406.10212
NeST: Neural Stress Tensor Tomography by leveraging 3D Photoelasticity
Photoelasticity enables full-field stress analysis in transparent objects through stress-induced birefringence. Existing techniques are limited to 2D slices and require destructively slicing the object. Recovering the internal 3D stress distribution of the entire object is challenging as it involves solving a tensor tomography problem and handling phase wrapping ambiguities. We introduce NeST, an analysis-by-synthesis approach for reconstructing 3D stress tensor fields as neural implicit representations from polarization measurements. Our key insight is to jointly handle phase unwrapping and tensor tomography using a differentiable forward model based on Jones calculus. Our non-linear model faithfully matches real captures, unlike prior linear approximations. We develop an experimental multi-axis polariscope setup to capture 3D photoelasticity and experimentally demonstrate that NeST reconstructs the internal stress distribution for objects with varying shape and force conditions. Additionally, we showcase novel applications in stress analysis, such as visualizing photoelastic fringes by virtually slicing the object and viewing photoelastic fringes from unseen viewpoints. NeST paves the way for scalable non-destructive 3D photoelastic analysis.
labels: cs.CV, Other
__index_level_0__: 464,277
1203.3128
Distributed Space Time Coding for Wireless Two-way Relaying
We consider the wireless two-way relay channel, in which two-way data transfer takes place between the end nodes with the help of a relay. For the Denoise-And-Forward (DNF) protocol, it was shown by Koike-Akino et al. that adaptively changing the network coding map used at the relay greatly reduces the impact of Multiple Access interference at the relay. The harmful effect of the deep channel fade conditions can be effectively mitigated by proper choice of these network coding maps at the relay. Alternatively, in this paper we propose a Distributed Space Time Coding (DSTC) scheme, which effectively removes most of the deep fade channel conditions at the transmitting nodes themselves without any CSIT and without any need to adaptively change the network coding map used at the relay. It is shown that the deep fades occur when the channel fade coefficient vector falls in a finite number of vector subspaces of $\mathbb{C}^2$, which are referred to as the singular fade subspaces. A DSTC design criterion referred to as the \textit{singularity minimization criterion}, under which the number of such vector subspaces is minimized, is obtained. Also, a criterion to maximize the coding gain of the DSTC is obtained. Explicit low decoding complexity DSTC designs which satisfy the singularity minimization criterion and maximize the coding gain for QAM and PSK signal sets are provided. Simulation results show that at high Signal to Noise Ratio, the DSTC scheme provides large gains when compared to the conventional Exclusive OR network code and performs slightly better than the adaptive network coding scheme proposed by Koike-Akino et al.
labels: cs.IT
__index_level_0__: 14,881
1506.03949
New Results for Domineering from Combinatorial Game Theory Endgame Databases
We have constructed endgame databases for all single-component positions up to 15 squares for Domineering, filled with exact Combinatorial Game Theory (CGT) values in canonical form. The most important findings are as follows. First, as an extension of Conway's [8] famous Bridge Splitting Theorem for Domineering, we state and prove another theorem, dubbed the Bridge Destroying Theorem for Domineering. Together these two theorems prove very powerful in determining the CGT values of large positions as the sum of the values of smaller fragments, but also to compose larger positions with specified values from smaller fragments. Using the theorems, we then prove that for any dyadic rational number there exist Domineering positions with that value. Second, we investigate Domineering positions with infinitesimal CGT values, in particular ups and downs, tinies and minies, and nimbers. In the databases we find many positions with single or double up and down values, but no ups and downs with higher multitudes. However, we prove that such single-component ups and downs easily can be constructed. Further, we find Domineering positions with 11 different tinies and minies values. For each we give an example. Next, for nimbers we find many Domineering positions with values up to *3. This is surprising, since Drummond-Cole [10] suspected that no *2 and *3 positions in standard Domineering would exist. We show and characterize many *2 and *3 positions. Finally, we give some Domineering positions with values interesting for other reasons. Third, we have investigated the temperature of all positions in our databases. There appears to be exactly one position with temperature 2 (as already found before) and no positions with temperature larger than 2. This supports Berlekamp's conjecture that 2 is the highest possible temperature in Domineering.
labels: cs.AI, Other
__index_level_0__: 44,113
2407.06323
When in Doubt, Cascade: Towards Building Efficient and Capable Guardrails
Large language models (LLMs) have convincing performance in a variety of downstream tasks. However, these systems are prone to generating undesirable outputs such as harmful and biased text. In order to remedy such generations, the development of guardrail (or detector) models has gained traction. Motivated by findings from developing a detector for social bias, we adopt the notion of a use-mention distinction - which we identified as the primary source of under-performance in the preliminary versions of our social bias detector. Armed with this information, we describe a fully extensible and reproducible synthetic data generation pipeline which leverages taxonomy-driven instructions to create targeted and labeled data. Using this pipeline, we generate over 300K unique contrastive samples and provide extensive experiments to systematically evaluate performance on a suite of open source datasets. We show that our method achieves competitive performance with a fraction of the cost in compute and offers insight into iteratively developing efficient and capable guardrail models. Warning: This paper contains examples of text which are toxic, biased, and potentially harmful.
labels: cs.CL
__index_level_0__: 471,357
2410.05557
Rethinking Weak-to-Strong Augmentation in Source-Free Domain Adaptive Object Detection
Source-Free domain adaptive Object Detection (SFOD) aims to transfer a detector (pre-trained on source domain) to new unlabelled target domains. Current SFOD methods typically follow the Mean Teacher framework, where weak-to-strong augmentation provides diverse and sharp contrast for self-supervised learning. However, this augmentation strategy suffers from an inherent problem called crucial semantics loss: Due to random, strong disturbance, strong augmentation is prone to losing typical visual components, hindering cross-domain feature extraction. To address this thus-far ignored limitation, this paper introduces a novel Weak-to-Strong Contrastive Learning (WSCoL) approach. The core idea is to distill semantics lossless knowledge in the weak features (from the weak/teacher branch) to guide the representation learning upon the strong features (from the strong/student branch). To achieve this, we project the original features into a shared space using a mapping network, thereby reducing the bias between the weak and strong features. Meanwhile, a weak features-guided contrastive learning is performed in a weak-to-strong manner alternatively. Specifically, we first conduct an adaptation-aware prototype-guided clustering on the weak features to generate pseudo labels for corresponding strong features matched through proposals. Sequentially, we identify positive-negative samples based on the pseudo labels and perform cross-category contrastive learning on the strong features where an uncertainty estimator encourages adaptive background contrast. Extensive experiments demonstrate that WSCoL yields new state-of-the-art performance, offering a built-in mechanism mitigating crucial semantics loss for traditional Mean Teacher framework. The code and data will be released soon.
labels: cs.CV
__index_level_0__: 495,787
2403.02044
Time-Reversal of Stochastic Maximum Principle
Stochastic maximum principle (SMP) specifies a necessary condition for the solution of a stochastic optimal control problem. The condition involves a coupled system of forward and backward stochastic differential equations (FBSDE) for the state and the adjoint processes. Numerical solution of the FBSDE is challenging because the boundary condition of the adjoint process is specified at the terminal time, while the solution should be adaptable to the forward in time filtration of a Wiener process. In this paper, a "time-reversal" of the FBSDE system is proposed that involves integration with respect to a backward in time Wiener process. The time-reversal is used to propose an iterative Monte-Carlo procedure that solves the FBSDE system and its time-reversal simultaneously. The procedure involves approximating the Föllmer drift and solving a regression problem between the state and its adjoint at each time. The procedure is illustrated for the linear quadratic (LQ) optimal control problem with a numerical example.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
434,683
2304.12302
Co-optimization of Operational Unit Commitment and Reserve Power Scheduling for Modern Grid
Modern power grids combine conventional generators with distributed energy resource (DER) generators in response to concerns over climate change and long-term energy security. Due to the intermittent nature of DERs, different types of energy storage devices (ESDs) must be installed to minimize unit commitment problems and accommodate spinning reserve power. ESDs have operational and resource constraints, such as charge and discharge rates or maximum and minimum state of charge (SoC). This paper proposes a linear programming (LP) optimization framework to maximize the unit-committed power for a specific optimum spinning reserve power for a particular power grid. Using this optimization framework, we also determine the total dispatchable power, non-dispatchable power, spinning reserve power, and arbitrage power using DER and ESD resource constraints. To describe the ESD and DER constraints, this paper evaluates several factors: availability, dispatchability, non-dispatchability, spinning reserve, and arbitrage factor. These factors are used as constraints in this LP optimization to determine the total optimal reserve power from the existing DERs. The proposed optimization framework maximizes the ratio of dispatchable to non-dispatchable power to minimize unit commitment problems within a specific range of spinning reserve power set to each DER. This optimization framework is implemented in the modified IEEE 34-bus distribution system, adding ten DERs in ten different buses to verify its efficacy.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
360,163
2211.10470
A mixed-reality dataset for category-level 6D pose and size estimation of hand-occluded containers
Estimating the 6D pose and size of household containers is challenging due to large intra-class variations in the object properties, such as shape, size, appearance, and transparency. The task is made more difficult when these objects are held and manipulated by a person due to varying degrees of hand occlusions caused by the type of grasps and by the viewpoint of the camera observing the person holding the object. In this paper, we present a mixed-reality dataset of hand-occluded containers for category-level 6D object pose and size estimation. The dataset consists of 138,240 images of rendered hands and forearms holding 48 synthetic objects, split into 3 grasp categories over 30 real backgrounds. We re-train and test an existing model for 6D object pose estimation on our mixed-reality dataset. We discuss the impact of the use of this dataset in improving the task of 6D pose and size estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
331,320
0904.0016
Stochastic Models of User-Contributory Web Sites
We describe a general stochastic processes-based approach to modeling user-contributory web sites, where users create, rate and share content. These models describe aggregate measures of activity and how they arise from simple models of individual users. This approach provides a tractable method to understand user activity on the web site and how this activity depends on web site design choices, especially the choice of what information about other users' behaviors is shown to each user. We illustrate this modeling approach in the context of user-created content on the news rating site Digg.
false
false
false
false
false
true
false
false
false
false
false
false
false
true
false
false
false
false
3,454
2402.14875
What's in a Name? Auditing Large Language Models for Race and Gender Bias
We employ an audit design to investigate biases in state-of-the-art large language models, including GPT-4. In our study, we prompt the models for advice involving a named individual across a variety of scenarios, such as during car purchase negotiations or election outcome predictions. We find that the advice systematically disadvantages names that are commonly associated with racial minorities and women. Names associated with Black women receive the least advantageous outcomes. The biases are consistent across 42 prompt templates and several models, indicating a systemic issue rather than isolated incidents. While providing numerical, decision-relevant anchors in the prompt can successfully counteract the biases, qualitative details have inconsistent effects and may even increase disparities. Our findings underscore the importance of conducting audits at the point of LLM deployment and implementation to mitigate their potential for harm against marginalized communities.
false
false
false
false
true
false
true
false
true
false
false
false
false
true
false
false
false
false
431,892
1402.4442
Artificial Mutation inspired Hyper-heuristic for Runtime Usage of Multi-objective Algorithms
In recent years, multi-objective evolutionary algorithms (MOEA) have been applied to different software engineering problems where many conflicting objectives have to be optimized simultaneously. In theory, evolutionary algorithms feature a nice property for runtime optimization as they can provide a solution at any execution time. In practice, based on a Darwinian-inspired natural selection, these evolutionary algorithms produce many deadborn solutions whose computation results in a waste of computational resources: natural selection is naturally slow. In this paper, we reconsider this founding analogy to accelerate the convergence of MOEA by looking at modern biology studies: artificial selection has been used to achieve an anticipated specific purpose instead of only relying on crossover and natural selection (e.g., the research of Muller et al. [18] on artificial mutation of fruits with X-rays). Putting aside the analogy with natural selection, the present paper proposes a hyper-heuristic for MOEA algorithms, named Sputnik, that uses artificial selective mutation to improve the convergence speed of MOEA. Sputnik leverages the past history of mutation efficiency to select the most relevant mutations to perform. We evaluate Sputnik on a cloud-reasoning engine, which drives on-demand provisioning while considering conflicting performance and cost objectives. We have conducted experiments to highlight the significant performance improvement of Sputnik in terms of resolution time.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
true
30,967
1312.5785
EXMOVES: Classifier-based Features for Scalable Action Recognition
This paper introduces EXMOVES, learned exemplar-based features for efficient recognition of actions in videos. The entries in our descriptor are produced by evaluating a set of movement classifiers over spatial-temporal volumes of the input sequence. Each movement classifier is a simple exemplar-SVM trained on low-level features, i.e., an SVM learned using a single annotated positive space-time volume and a large number of unannotated videos. Our representation offers two main advantages. First, since our mid-level features are learned from individual video exemplars, they require a minimal amount of supervision. Second, we show that simple linear classification models trained on our global video descriptor yield action recognition accuracy approaching the state-of-the-art but at orders of magnitude lower cost, since at test time no sliding window is necessary and linear models are efficient to train and test. This enables scalable action recognition, i.e., efficient classification of a large number of different actions even in large video databases. We show the generality of our approach by building our mid-level descriptors from two different low-level feature representations. The accuracy and efficiency of the approach are demonstrated on several large-scale action recognition benchmarks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
29,264
1903.10643
Study of Activity-Aware Multiple Feedback Successive Interference Cancellation for Massive Machine-Type Communications
In this work, we propose an activity-aware low-complexity multiple feedback successive interference cancellation (AA-MF-SIC) strategy for massive machine-type communications. The computational complexity of the proposed AA-MF-SIC is comparable to that of the conventional SIC algorithm, with only very low additional complexity. The simulation results show that the algorithm significantly outperforms the conventional SIC schemes and other proposals.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
125,324
1912.04250
Self-supervised Object Motion and Depth Estimation from Video
We present a self-supervised learning framework to estimate individual object motion and monocular depth from video. We model the object motion as a 6 degree-of-freedom rigid-body transformation. The instance segmentation mask is leveraged to introduce object-level information. Compared with methods that predict a dense optical flow map to model the motion, our approach significantly reduces the number of values to be estimated. Our system eliminates the scale ambiguity of motion prediction by imposing a novel geometric constraint loss term. Experiments on the KITTI driving dataset demonstrate that our system is capable of capturing object motion without external annotation. Our system outperforms previous self-supervised approaches in terms of 3D scene flow prediction, and contributes to disparity prediction in dynamic areas.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
156,799
1906.01761
Generalized Linear Rule Models
This paper considers generalized linear models using rule-based features, also referred to as rule ensembles, for regression and probabilistic classification. Rules facilitate model interpretation while also capturing nonlinear dependences and interactions. Our problem formulation accordingly trades off rule set complexity and prediction accuracy. Column generation is used to optimize over an exponentially large space of rules without pre-generating a large subset of candidates or greedily boosting rules one by one. The column generation subproblem is solved using either integer programming or a heuristic optimizing the same objective. In experiments involving logistic and linear regression, the proposed methods obtain better accuracy-complexity trade-offs than existing rule ensemble algorithms. At one end of the trade-off, the methods are competitive with less interpretable benchmark models.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
133,825
2412.05158
Gaining Explainability from a CNN for Stereotype Detection Based on Mice Stopping Behavior
Understanding the behavior of laboratory animals is key to finding answers about diseases and neurodevelopmental disorders that also affect humans. One behavior of interest is stopping, as it correlates with the exploration, feeding and sleeping habits of individuals. To improve comprehension of animal behavior, we focus on identifying traits revealing the age/sex of mice through the series of stopping spots of each individual. We track 4 mice using the LiveMouseTracker (LMT) system during 3 days. Then, we build a stack of 2D histograms of the stop positions. This stack of histograms passes through a shallow CNN architecture to classify mice in terms of age and sex. We observe that female mice show more recognizable behavioral patterns, reaching a classification accuracy of more than 90%, while males, which do not present as many distinguishable patterns, reach an accuracy of 62.5%. To gain explainability from the model, we look at the activation functions of the convolutional layers and find that some regions of the cage are preferentially explored by females. Males, especially juveniles, present behavior patterns that oscillate between juvenile female and adult male.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
514,714
2409.05206
SEF: A Method for Computing Prediction Intervals by Shifting the Error Function in Neural Networks
In today's era, Neural Networks (NN) are applied in various scientific fields such as robotics, medicine, engineering, etc. However, the predictions of neural networks themselves contain a degree of uncertainty that must always be taken into account before any decision is made. This is why many researchers have focused on developing different ways to quantify the uncertainty of neural network predictions. Some of these methods are based on generating prediction intervals (PI) via neural networks for the requested target values. The SEF (Shifting the Error Function) method presented in this paper is a new method that belongs to this category of methods. The proposed approach involves training a single neural network three times, thus generating an estimate along with the corresponding upper and lower bounds for a given problem. A pivotal aspect of the method is the calculation of a parameter from the initial network's estimates, which is then integrated into the loss functions of the other two networks. This innovative process effectively produces PIs, resulting in a robust and efficient technique for uncertainty quantification. To evaluate the effectiveness of our method, a comparison in terms of successful PI generation between the SEF, PI3NN and PIVEN methods was made using two synthetic datasets.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
486,677
2008.08900
Coded Caching over Multicast Routing Networks
The coded caching scheme originally proposed by Maddah-Ali and Niesen (MAN) transmits coded multicast messages from a server to users equipped with caches via a capacitated shared link and was shown to be information theoretically optimal within a constant multiplicative factor. This work extends the MAN scheme to a class of two-hop wired-wireless networks including one server connected via fronthaul links to a layer of $H$ helper nodes (access points/base stations), which in turn communicate via a wireless access network to $K$ users, each equipped with its own cache. Two variants are considered, which differ in the modeling of the access segment. Both models should be regarded as abstractions at the network layer for physical scenarios such as local area networks and cellular networks, spatially distributed over a certain coverage area. The key focus of our approach consists of routing MAN-type multicast messages through the network and formulating the optimal routing scheme as an optimization problem that can be solved exactly or for which we give powerful heuristic algorithms. Our approach solves at once many of the open practical problems identified as stumbling blocks for the application of coded caching in practical scenarios, namely: asynchronous streaming sessions, finite file size, scalability of the scheme to large and spatially distributed networks, user mobility and random activity (users joining and leaving the system at arbitrary times), decentralized prefetching of the cache contents, and end-to-end encryption of HTTPS requests, which renders the helper nodes oblivious of the user demands.
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
false
true
192,539
2403.01977
TTA-Nav: Test-time Adaptive Reconstruction for Point-Goal Navigation under Visual Corruptions
Robot navigation under visual corruption presents a formidable challenge. To address this, we propose a Test-time Adaptation (TTA) method, named TTA-Nav, for point-goal navigation under visual corruptions. Our "plug-and-play" method incorporates a top-down decoder into a pre-trained navigation model. Firstly, the pre-trained navigation model receives a corrupted image and extracts features. Secondly, the top-down decoder produces the reconstruction given the high-level features extracted by the pre-trained model. Then, it feeds the reconstruction of the corrupted image back to the pre-trained model. Finally, the pre-trained model performs a forward pass again to output an action. Despite being trained solely on clean images, the top-down decoder can reconstruct cleaner images from corrupted ones without the need for gradient-based adaptation. The pre-trained navigation model with our top-down decoder significantly enhances navigation performance across almost all visual corruptions in our benchmarks. Our method improves the success rate of point-goal navigation from the state-of-the-art result of 46% to 94% on the most severe corruption. This suggests its potential for broader application in robotic visual navigation. Project page: https://sites.google.com/view/tta-nav
false
false
false
false
true
false
false
true
false
false
false
true
false
false
false
false
false
false
434,659
1711.02552
Explicit Error Bounds for Carleman Linearization
We revisit the method of Carleman linearization for systems of ordinary differential equations with polynomial right-hand sides. This transformation provides an approximate linearization in a higher-dimensional space through the exact embedding of polynomial nonlinearities into an infinite-dimensional linear system, which is then truncated to obtain a finite-dimensional representation with an additive error. To the best of our knowledge, no explicit calculation of the error bound has been studied. In this paper, we propose two strategies to obtain a time-dependent function that locally bounds the truncation error. In the first approach, we proceed by iterative backwards-integration of the truncated system. However, the resulting error bound requires an a priori estimate of the norm of the exact solution for the given time horizon. To overcome this difficulty, we construct a combinatorial approach and solve it using generating functions, obtaining a local error bound that can be computed effectively.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
84,081
2107.06525
RIS-Enhanced Spectrum Sensing: How Many Reflecting Elements Are Required to Achieve a Detection Probability Close to 1?
In this paper, we propose a reconfigurable intelligent surface (RIS) enhanced spectrum sensing system, in which the primary transmitter is equipped with a single antenna, the secondary transmitter is equipped with multiple antennas, and the RIS is employed to improve the detection performance. Without loss of generality, we adopt the maximum eigenvalue detection approach, and propose a corresponding analytical framework based on large dimensional random matrix theory, to evaluate the detection probability in the asymptotic regime. Besides, the phase shift matrix of the RIS is designed with only the statistical channel state information (CSI), which is shown to be quite effective when the RIS-related channels are of Rician fading or line-of-sight (LoS). With the designed phase shift matrix, the asymptotic distributions of the equivalent channel gains are derived. Then, we provide the theoretical predictions about the number of reflecting elements (REs) required to achieve a detection probability close to 1. Finally, we present the Monte-Carlo simulation results to evaluate the accuracy of the proposed asymptotic analytical framework for the detection probability and the validity of the theoretical predictions about the number of REs required to achieve a detection probability close to 1. Moreover, the simulation results show that the proposed RIS-enhanced spectrum sensing system can substantially improve the detection performance.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
246,117
2302.06093
Learning-Based Defect Recognitions for Autonomous UAV Inspections
Automatic crack detection and segmentation play a significant role in the whole system of unmanned aerial vehicle inspections. In this paper, we have implemented a deep learning framework for crack detection based on classical network architectures including AlexNet, VGG, and ResNet. Moreover, inspired by the feature pyramid network architecture, a hierarchical convolutional neural network (CNN) deep learning framework which is efficient in crack segmentation is also proposed, and its performance is compared with other state-of-the-art network architectures. We have summarized the existing crack detection and segmentation datasets and established the largest existing benchmark dataset on the internet for crack detection and segmentation, which is open-sourced for the research community. Our feature pyramid crack segmentation network is tested on the benchmark dataset and gives satisfactory segmentation results. A framework for automatic unmanned aerial vehicle inspections is also proposed and will be established for the crack inspection tasks of various concrete structures. All our self-established datasets and codes are open-sourced at: https://github.com/KangchengLiu/Crack-Detection-and-Segmentation-Dataset-for-UAV-Inspection
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
345,288
1312.5202
Consensus in the presence of interference
This paper studies distributed strategies for average-consensus of arbitrary vectors in the presence of network interference. We assume that the underlying communication on any \emph{link} suffers from \emph{additive interference} caused due to the communication by other agents following their own consensus protocol. Additionally, no agent knows how many or which agents are interfering with its communication. Clearly, the standard consensus protocol does not remain applicable in such scenarios. In this paper, we cast an algebraic structure over the interference and show that the standard protocol can be modified such that the average is reachable in a subspace whose dimension is complementary to the maximal dimension of the interference subspaces (over all of the communication links). To develop the results, we use \emph{information alignment} to align the intended transmission (over each link) to the null-space of the interference (on that link). We show that this alignment is indeed invertible, i.e. the intended transmission can be recovered, over which, subsequently, the consensus protocol is implemented. That \emph{local} protocols exist even when the collection of the interference subspaces spans the entire vector space is somewhat surprising.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
29,214
2010.02986
Compositional Demographic Word Embeddings
Word embeddings are usually derived from corpora containing text from many individuals, thus leading to general purpose representations rather than individually personalized representations. While personalized embeddings can be useful to improve language model performance and other language processing tasks, they can only be computed for people with a large amount of longitudinal data, which is not the case for new users. We propose a new form of personalized word embeddings that use demographic-specific word representations derived compositionally from full or partial demographic information for a user (i.e., gender, age, location, religion). We show that the resulting demographic-aware word representations outperform generic word representations on two tasks for English: language modeling and word associations. We further explore the trade-off between the number of available attributes and their relative effectiveness and discuss the ethical implications of using them.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
199,224
1008.1770
A complex network approach to robustness and vulnerability of spatially organized water distribution networks
In this work, water distribution systems are regarded as large sparse planar graphs with complex network characteristics, and the relationship between important topological features of the network (i.e. structural robustness and loop redundancy) and system resilience, viewed as the antonym of structural vulnerability, is assessed. Deterministic techniques from complex networks and spectral graph theory are utilized to quantify well-connectedness and estimate loop redundancy in the studied benchmark networks. By using graph connectivity and expansion properties, system robustness against node/link failures and isolation of the demand nodes from the source(s) are assessed, and network tolerance against random failures and targeted attacks on their bridges and cut sets is analyzed. Among other measurements, the two metrics of meshed-ness and algebraic connectivity are proposed as candidates for quantification of redundancy and robustness, respectively, in optimization design models. A brief discussion on the scope and limitations of the provided measurements in the analysis of operational reliability of water distribution systems is presented.
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
7,248
1008.3585
Ultrametric and Generalized Ultrametric in Computational Logic and in Data Analysis
Following a review of metric, ultrametric and generalized ultrametric, we review their application in data analysis. We show how they allow us to explore both geometry and topology of information, starting with measured data. Some themes are then developed based on the use of metric, ultrametric and generalized ultrametric in logic. In particular we study approximation chains in an ultrametric or generalized ultrametric context. Our aim in this work is to extend the scope of data analysis by facilitating reasoning based on the data analysis; and to show how quantitative and qualitative data analysis can be incorporated into logic programming.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
7,321
1909.05316
What Makes A Good Story? Designing Composite Rewards for Visual Storytelling
Previous storytelling approaches mostly focused on optimizing traditional metrics such as BLEU, ROUGE and CIDEr. In this paper, we re-examine this problem from a different angle, by looking deep into what defines a realistically-natural and topically-coherent story. To this end, we propose three assessment criteria: relevance, coherence and expressiveness, which we observe through empirical analysis could constitute a "high-quality" story to the human eye. Following this quality guideline, we propose a reinforcement learning framework, ReCo-RL, with reward functions designed to capture the essence of these quality criteria. Experiments on the Visual Storytelling Dataset (VIST) with both automatic and human evaluations demonstrate that our ReCo-RL model achieves better performance than state-of-the-art baselines on both traditional metrics and the proposed new criteria.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
145,055
1706.01678
Text Summarization using Abstract Meaning Representation
With the ever-increasing amount of text on the Internet, automatic summary generation remains an important problem for natural language understanding. In this work we explore a novel full-fledged pipeline for text summarization with an intermediate step of Abstract Meaning Representation (AMR). The proposed pipeline first generates an AMR graph of an input story, from which it extracts a summary graph and, finally, generates summary sentences from this summary graph. Our proposed method achieves state-of-the-art results compared to the other text summarization routines based on AMR. We also point out some significant problems in the existing evaluation methods, which make them unsuitable for evaluating summary quality.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
74,845
2312.07922
Memory-Efficient Reversible Spiking Neural Networks
Spiking neural networks (SNNs) are potential competitors to artificial neural networks (ANNs) due to their high energy-efficiency on neuromorphic hardware. However, SNNs are unfolded over simulation time steps during the training process. Thus, SNNs require much more memory than ANNs, which impedes the training of deeper SNN models. In this paper, we propose the reversible spiking neural network to reduce the memory cost of intermediate activations and membrane potentials during training. Firstly, we extend the reversible architecture along the temporal dimension and propose the reversible spiking block, which can reconstruct the computational graph and recompute all intermediate variables in the forward pass with a reverse process. On this basis, we adapt state-of-the-art SNN models into reversible variants, namely the reversible spiking ResNet (RevSResNet) and reversible spiking transformer (RevSFormer). Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with the network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies and consume 3.79x and 3.00x lower GPU memory per image than their counterparts with roughly identical model complexity and parameters. We believe that this work can unleash the memory constraints in SNN training and pave the way for training extremely large and deep SNNs. The code is available at https://github.com/mi804/RevSNN.git.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
true
false
false
415,118
2407.12446
Non-parametric regularization for class imbalance federated medical image classification
Limited training data and severe class imbalance pose significant challenges to developing clinically robust deep learning models. Federated learning (FL) addresses the former by enabling different medical clients to collaboratively train a deep model without sharing privacy-sensitive data. However, class imbalance worsens due to variation in inter-client class distribution. We propose federated learning with non-parametric regularization (FedNPR, and FedNPR-Per, a personalized version of FedNPR) to regularize the feature extractor and enhance useful and discriminative signal in the feature space. Our extensive experiments show that FedNPR outperforms the existing state-of-the-art FL approaches in class-imbalanced skin lesion classification and intracranial hemorrhage identification. Additionally, the non-parametric regularization module consistently improves the performance of existing state-of-the-art FL approaches. We believe that NPR is a valuable tool in FL under clinical settings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,934
2309.00904
Developmental Scaffolding with Large Language Models
Exploration and self-observation are key mechanisms of infant sensorimotor development. These processes are further guided by parental scaffolding, accelerating skill and knowledge acquisition. In developmental robotics, this approach has often been adopted by having a human act as the source of scaffolding. In this study, we investigate whether Large Language Models (LLMs) can act as a scaffolding agent for a robotic system that aims to learn to predict the effects of its actions. To this end, an object manipulation setup is considered where one object can be picked and placed on top of or in the vicinity of another object. The adopted LLM is asked to guide the action selection process through algorithmically generated state descriptions and action selection alternatives in natural language. The simulation experiments that include cubes in this setup show that LLM-guided (GPT-3.5-guided) learning yields significantly faster discovery of novel structures compared to random exploration. However, we observed that GPT-3.5 fails to effectively guide the robot in generating structures with different affordances such as cubes and spheres. Overall, we conclude that even without fine-tuning, LLMs may serve as a moderate scaffolding agent for improving robot learning; however, they still lack affordance understanding, which limits the applicability of current LLMs in robotic scaffolding tasks.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
389,464
2201.09739
Dense Air Quality Maps Using Regressive Facility Location Based Drive By Sensing
Currently, fixed static sensing is a primary way to monitor environmental data like air quality in cities. However, to obtain dense spatial coverage, a large number of static monitors are required, thereby making it a costly option. Dense spatiotemporal coverage can be achieved using only a fraction of static sensors by deploying them on moving vehicles, known as the drive by sensing paradigm. The redundancy present in the air quality data can be exploited by processing the sparsely sampled data to impute the remaining unobserved data points using matrix completion techniques. However, the accuracy of imputation depends on the extent to which the moving sensors capture the inherent structure of the air quality matrix. Therefore, the challenge is to pick the set of paths (using vehicles) that performs representative sampling in space and time. Most works in the literature on vehicle subset selection focus on maximizing the spatiotemporal coverage by maximizing the number of samples for different locations and time stamps, which is not an effective representative sampling strategy. We present regressive facility location-based drive by sensing, an efficient vehicle selection framework that incorporates the smoothness in neighboring locations and autoregressive time correlation while selecting the optimal set of vehicles for effective spatiotemporal sampling. We show that the proposed drive by sensing problem is submodular, thereby lending itself to a greedy algorithm with performance guarantees. We evaluate our framework by selecting a subset from the fleet of public transport in Delhi, India. We illustrate that the proposed method samples representative spatiotemporal data compared with the baseline methods, reducing the extrapolation error on the simulated air quality data. Our method, therefore, has the potential to provide cost-effective dense air quality maps.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
276,769
2405.13127
Towards Retrieval-Augmented Architectures for Image Captioning
The objective of image captioning models is to bridge the gap between the visual and linguistic modalities by generating natural language descriptions that accurately reflect the content of input images. In recent years, researchers have leveraged deep learning-based models and made advances in the extraction of visual features and the design of multimodal connections to tackle this task. This work presents a novel approach towards developing image captioning models that utilize an external kNN memory to improve the generation process. Specifically, we propose two model variants that incorporate a knowledge retriever component that is based on visual similarities, a differentiable encoder to represent input images, and a kNN-augmented language model to predict tokens based on contextual cues and text retrieved from the external memory. We experimentally validate our approach on COCO and nocaps datasets and demonstrate that incorporating an explicit external memory can significantly enhance the quality of captions, especially with a larger retrieval corpus. This work provides valuable insights into retrieval-augmented captioning models and opens up new avenues for improving image captioning at a larger scale.
false
false
false
false
true
false
false
false
true
false
false
true
false
false
false
false
false
true
455,815
1207.5152
Stator flux optimization on direct torque control with fuzzy logic
The Direct Torque Control (DTC) is well known as an effective control technique for high-performance drives in a wide variety of industrial applications, and the conventional DTC technique uses two constant reference values: torque and stator flux. In this paper, a fuzzy-logic-based stator flux optimization technique for DTC drives is proposed. The proposed fuzzy-logic-based stator flux optimizer self-regulates the stator flux reference according to the induction motor load condition, without the need for any motor parameters. Simulation studies have been carried out with Matlab/Simulink to compare the behavior of the proposed system under varying load conditions. Simulation results show that the performance of the proposed DTC technique is improved and, especially at low-load conditions, torque ripple is greatly reduced with respect to the conventional DTC.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
17,699
1906.00938
Big-Data Clustering: K-Means or K-Indicators?
The K-means algorithm is arguably the most popular data clustering method, commonly applied to processed datasets in some "feature spaces", as in spectral clustering. Highly sensitive to initializations, however, K-means encounters a scalability bottleneck with respect to the number of clusters K as this number grows in big data applications. In this work, we promote a closely related model called the K-indicators model and construct an efficient, semi-convex-relaxation algorithm that requires no randomized initializations. We present extensive empirical results to show the advantages of the new algorithm when K is large. In particular, using the new algorithm to start the K-means algorithm, without any replication, can significantly outperform the standard K-means with a large number of currently state-of-the-art random replications.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
133,548
2212.14115
Certifying Safety in Reinforcement Learning under Adversarial Perturbation Attacks
Function approximation has enabled remarkable advances in applying reinforcement learning (RL) techniques in environments with high-dimensional inputs, such as images, in an end-to-end fashion, mapping such inputs directly to low-level control. Nevertheless, these have proved vulnerable to small adversarial input perturbations. A number of approaches for improving or certifying robustness of end-to-end RL to adversarial perturbations have emerged as a result, focusing on cumulative reward. However, what is often at stake in adversarial scenarios is the violation of fundamental properties, such as safety, rather than the overall reward that combines safety with efficiency. Moreover, properties such as safety can only be defined with respect to true state, rather than the high-dimensional raw inputs to end-to-end policies. To disentangle nominal efficiency and adversarial safety, we situate RL in deterministic partially-observable Markov decision processes (POMDPs) with the goal of maximizing cumulative reward subject to safety constraints. We then propose a partially-supervised reinforcement learning (PSRL) framework that takes advantage of an additional assumption that the true state of the POMDP is known at training time. We present the first approach for certifying safety of PSRL policies under adversarial input perturbations, and two adversarial training approaches that make direct use of PSRL. Our experiments demonstrate both the efficacy of the proposed approach for certifying safety in adversarial environments, and the value of the PSRL framework coupled with adversarial training in improving certified safety while preserving high nominal reward and high-quality predictions of true state.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
338,506
1701.08546
Survey on Models and Techniques for Root-Cause Analysis
Automation and computer intelligence to support complex human decisions become essential to manage large and distributed systems in the Cloud and IoT era. Understanding the root cause of an observed symptom in a complex system has been a major problem for decades. As industry dives into the IoT world and the amount of data generated per year grows at an amazing speed, an important question is how to find appropriate mechanisms to determine root causes that can handle huge amounts of data or may provide valuable feedback in real time. While many survey papers aim at summarizing the landscape of techniques for modelling system behavior and inferring the root cause of a problem based on the resulting models, none of them focuses on analyzing how the different techniques in the literature fit growing requirements in terms of performance and scalability. In this survey, we provide a review of root-cause analysis, focusing on these particular aspects. We also provide guidance to choose the best root-cause analysis strategy depending on the requirements of a particular system and application.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
67,487
1401.1916
Multiple-output support vector regression with a firefly algorithm for interval-valued stock price index forecasting
Highly accurate interval forecasting of a stock price index is fundamental to successfully making a profit when making investment decisions, by providing a range of values rather than a point estimate. In this study, we investigate the possibility of forecasting an interval-valued stock price index series over short and long horizons using multi-output support vector regression (MSVR). Furthermore, this study proposes a firefly algorithm (FA)-based approach, built on the established MSVR, for determining the parameters of MSVR (abbreviated as FA-MSVR). Three globally traded broad market indices are used to compare the performance of the proposed FA-MSVR method with selected counterparts. The quantitative and comprehensive assessments are performed on the basis of statistical criteria, economic criteria, and computational cost. In terms of statistical criteria, we compare the out-of-sample forecasting using goodness-of-forecast measures and testing approaches. In terms of economic criteria, we assess the relative forecast performance with a simple trading strategy. The results obtained in this study indicate that the proposed FA-MSVR method is a promising alternative for forecasting interval-valued financial time series.
false
true
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
29,697
1903.08550
OCGAN: One-class Novelty Detection Using GANs with Constrained Latent Representations
We present a novel model called OCGAN for the classical problem of one-class novelty detection, where, given a set of examples from a particular class, the goal is to determine if a query example is from the same class. Our solution is based on learning latent representations of in-class examples using a denoising auto-encoder network. The key contribution of our work is our proposal to explicitly constrain the latent space to exclusively represent the given class. In order to accomplish this goal, firstly, we force the latent space to have bounded support by introducing a tanh activation in the encoder's output layer. Secondly, using a discriminator in the latent space that is trained adversarially, we ensure that encoded representations of in-class examples resemble uniform random samples drawn from the same bounded space. Thirdly, using a second adversarial discriminator in the input space, we ensure all randomly drawn latent samples generate examples that look real. Finally, we introduce a gradient-descent based sampling technique that explores points in the latent space that generate potential out-of-class examples, which are fed back to the network to further train it to generate in-class examples from those points. The effectiveness of the proposed method is measured across four publicly available datasets using two one-class novelty detection protocols where we achieve state-of-the-art results.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
124,852
1908.08859
Dissent and Rebellion in the House of Commons: A Social Network Analysis of Brexit-Related Divisions in the 57$^{th}$ Parliament
The British party system is known for its discipline and cohesion, but it remains wedged on one issue: European integration. We offer a methodology using social network analysis that considers the individual interactions of MPs in the voting process. Using public Parliamentary records, we scraped votes of individual MPs in the 57th parliament (June 2017 to April 2019), computed pairwise similarity scores and calculated rebellion metrics based on eigenvector centralities. Comparing the networks of Brexit- and non-Brexit divisions, our methodology was able to detect a significant difference in eurosceptic behaviour for the former, and using a rebellion metric we predicted how MPs would vote in a forthcoming Brexit deal with over 90% accuracy.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
142,671
1512.03025
Partial Reinitialisation for Optimisers
Heuristic optimisers which search for an optimal configuration of variables relative to an objective function often get stuck in local optima where the algorithm is unable to find further improvement. The standard approach to circumvent this problem involves periodically restarting the algorithm from random initial configurations when no further improvement can be found. We propose a method of partial reinitialisation, whereby, in an attempt to find a better solution, only subsets of variables are re-initialised rather than the whole configuration. Much of the information gained from previous runs is hence retained. This leads to significant improvements in the quality of the solution found in a given time for a variety of optimisation problems in machine learning.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
true
false
false
49,994
2502.12631
Score-Based Diffusion Policy Compatible with Reinforcement Learning via Optimal Transport
Diffusion policies have shown promise in learning complex behaviors from demonstrations, particularly for tasks requiring precise control and long-term planning. However, they face challenges in robustness when encountering distribution shifts. This paper explores improving diffusion-based imitation learning models through online interactions with the environment. We propose OTPR (Optimal Transport-guided score-based diffusion Policy for Reinforcement learning fine-tuning), a novel method that integrates diffusion policies with RL using optimal transport theory. OTPR leverages the Q-function as a transport cost and views the policy as an optimal transport map, enabling efficient and stable fine-tuning. Moreover, we introduce masked optimal transport to guide state-action matching using expert keypoints and a compatibility-based resampling strategy to enhance training stability. Experiments on three simulation tasks demonstrate OTPR's superior performance and robustness compared to existing methods, especially in complex and sparse-reward environments. In sum, OTPR provides an effective framework for combining IL and RL, achieving versatile and reliable policy learning. The code will be released at https://github.com/Sunmmyy/OTPR.git.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
534,982
2109.14337
Deep Reinforcement Q-Learning for Intelligent Traffic Signal Control with Partial Detection
Intelligent traffic signal controllers, applying DQN algorithms to traffic light policy optimization, efficiently reduce traffic congestion by adjusting traffic signals to real-time traffic. Most propositions in the literature however consider that all vehicles at the intersection are detected, an unrealistic scenario. Recently, new wireless communication technologies have enabled cost-efficient detection of connected vehicles by infrastructures. With only a small fraction of the total fleet currently equipped, methods able to perform under low detection rates are desirable. In this paper, we propose a deep reinforcement Q-learning model to optimize traffic signal control at an isolated intersection, in a partially observable environment with connected vehicles. First, we present the novel DQN model within the RL framework. We introduce a new state representation for partially observable environments and a new reward function for traffic signal control, and provide a network architecture and tuned hyper-parameters. Second, we evaluate the performances of the model in numerical simulations on multiple scenarios, in two steps. At first in full detection against existing actuated controllers, then in partial detection with loss estimates for proportions of connected vehicles. Finally, from the obtained results, we define thresholds for detection rates with acceptable and optimal performance levels.
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
257,939
2412.12006
Agentic AI-Driven Technical Troubleshooting for Enterprise Systems: A Novel Weighted Retrieval-Augmented Generation Paradigm
Technical troubleshooting in enterprise environments often involves navigating diverse, heterogeneous data sources to resolve complex issues effectively. This paper presents a novel agentic AI solution built on a Weighted Retrieval-Augmented Generation (RAG) Framework tailored for enterprise technical troubleshooting. By dynamically weighting retrieval sources such as product manuals, internal knowledge bases, FAQs, and troubleshooting guides based on query context, the framework prioritizes the most relevant data. For instance, it gives precedence to product manuals for SKU-specific queries while incorporating general FAQs for broader issues. The system employs FAISS for efficient dense vector search, coupled with a dynamic aggregation mechanism to seamlessly integrate results from multiple sources. A Llama-based self-evaluator ensures the contextual accuracy and confidence of the generated responses before delivering them. This iterative cycle of retrieval and validation enhances precision, diversity, and reliability in response generation. Preliminary evaluations on large enterprise datasets demonstrate the framework's efficacy in improving troubleshooting accuracy, reducing resolution times, and adapting to varied technical challenges. Future research aims to enhance the framework by integrating advanced conversational AI capabilities, enabling more interactive and intuitive troubleshooting experiences. Efforts will also focus on refining the dynamic weighting mechanism through reinforcement learning to further optimize the relevance and precision of retrieved information. By incorporating these advancements, the proposed framework is poised to evolve into a comprehensive, autonomous AI solution, redefining technical service workflows across enterprise settings.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
517,682
1112.1484
POCS Based Super-Resolution Image Reconstruction Using an Adaptive Regularization Parameter
Crucial information barely visible to the human eye is often embedded in a series of low-resolution images taken of the same scene. Super-resolution enables the extraction of this information by reconstructing a single image at a higher resolution than is present in any of the individual images. This is particularly useful in forensic imaging, where the extraction of minute details in an image can help to solve a crime. Super-resolution image restoration has been one of the most important research areas in recent years; it aims to obtain a high-resolution (HR) image from several low-resolution (LR) blurred, noisy, undersampled, and displaced images. The relation between the HR image and the LR images can be modeled by a linear system using a transformation matrix and additive noise. However, a unique solution may not be available because of the singularity of the transformation matrix. To overcome this problem, the POCS method has been used. However, its performance is not good because the effect of noise energy has been ignored. In this paper, we propose an adaptive regularization approach based on the fact that the regularization parameter should be a linear function of the noise variance. The performance of the proposed approach has been tested on several images, and the obtained results demonstrate the superiority of our approach compared with existing methods.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
13,345
2011.11395
Semantic CPPS in Industry 4.0
Cyber-Physical Systems (CPS) play a crucial role in the era of the 4th Industrial Revolution. Recently, the application of CPS to industrial manufacturing has led to a specialization of them referred to as Cyber-Physical Production Systems (CPPS). Among other challenges, CPS and CPPS should be able to address interoperability issues, since one of their intrinsic requirements is the capability to interface and cooperate with other systems. On the other hand, to fully realize the Industry 4.0 vision, it is required to address horizontal, vertical, and end-to-end integration, enabling complete awareness through the entire supply chain. In this context, Semantic Web standards and technologies may have a promising role in representing manufacturing knowledge in a machine-interpretable way for enabling communication among heterogeneous industrial assets. This paper proposes an integration of state-of-the-art Semantic Web models for implementing a 5C architecture mainly targeted at collecting and processing semantic data streams in a way that would unlock the potential of the data yielded in a smart manufacturing environment. The analysis of key industrial ontologies and semantic technologies allows us to instantiate an example scenario for monitoring Overall Equipment Effectiveness (OEE). The solution uses the SOSA ontology for representing the semantic data stream. Then, C-SPARQL queries are defined for periodically computing useful KPIs to address the proposed aim.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
207,816
1204.6081
Optimizing I/O for Big Array Analytics
Big array analytics is becoming indispensable in answering important scientific and business questions. Most analysis tasks consist of multiple steps, each making one or multiple passes over the arrays to be analyzed and generating intermediate results. In the big data setting, I/O optimization is a key to efficient analytics. In this paper, we develop a framework and techniques for capturing a broad range of analysis tasks expressible in nested-loop forms, representing them in a declarative way, and optimizing their I/O by identifying sharing opportunities. Experiment results show that our optimizer is capable of finding execution plans that exploit nontrivial I/O sharing opportunities with significant savings.
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
true
false
15,686
2306.00816
Versatile Backdoor Attack with Visible, Semantic, Sample-Specific, and Compatible Triggers
Deep neural networks (DNNs) can be manipulated to exhibit specific behaviors when exposed to specific trigger patterns, without affecting their performance on benign samples, dubbed \textit{backdoor attack}. Currently, implementing backdoor attacks in physical scenarios still faces significant challenges. Physical attacks are labor-intensive and time-consuming, and the triggers are selected in a manual and heuristic way. Moreover, expanding digital attacks to physical scenarios faces many challenges due to their sensitivity to visual distortions and the absence of counterparts in the real world. To address these challenges, we define a novel trigger called the \textbf{V}isible, \textbf{S}emantic, \textbf{S}ample-Specific, and \textbf{C}ompatible (VSSC) trigger, which achieves effectiveness, stealthiness, and robustness simultaneously, and which can also be effectively deployed in the physical scenario using corresponding objects. To implement the VSSC trigger, we propose an automated pipeline comprising three modules: a trigger selection module that systematically identifies suitable triggers leveraging large language models, a trigger insertion module that employs generative models to seamlessly integrate triggers into images, and a quality assessment module that ensures the natural and successful insertion of triggers through vision-language models. Extensive experimental results and analysis validate the effectiveness, stealthiness, and robustness of the VSSC trigger. It can not only maintain robustness under visual distortions but also demonstrates strong practicality in the physical scenario. We hope that the proposed VSSC trigger and implementation approach could inspire future studies on designing more practical triggers in backdoor attacks.
false
false
false
false
false
false
false
false
false
false
false
true
true
false
false
false
false
false
370,174
2110.00452
SAM: A Self-adaptive Attention Module for Context-Aware Recommendation System
Recently, textual information has been proved to play a positive role in recommendation systems. However, most of the existing methods only focus on representation learning of textual information in ratings, while potential selection bias induced by the textual information is ignored. In this work, we propose a novel and general self-adaptive module, the Self-adaptive Attention Module (SAM), which adjusts the selection bias by capturing contextual information based on its representation. This module can be embedded into recommendation systems that contain learning components of contextual information. Experimental results on three real-world datasets demonstrate the effectiveness of our proposal, and the state-of-the-art models with SAM significantly outperform the original ones.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
258,391
2210.06575
GraspNeRF: Multiview-based 6-DoF Grasp Detection for Transparent and Specular Objects Using Generalizable NeRF
In this work, we tackle 6-DoF grasp detection for transparent and specular objects, which is an important yet challenging problem in vision-based robotic systems, due to the failure of depth cameras in sensing their geometry. We, for the first time, propose a multiview RGB-based 6-DoF grasp detection network, GraspNeRF, that leverages the generalizable neural radiance field (NeRF) to achieve material-agnostic object grasping in clutter. Compared to the existing NeRF-based 3-DoF grasp detection methods that rely on densely captured input images and time-consuming per-scene optimization, our system can perform zero-shot NeRF construction with sparse RGB inputs and reliably detect 6-DoF grasps, both in real-time. The proposed framework jointly learns generalizable NeRF and grasp detection in an end-to-end manner, optimizing the scene representation construction for the grasping. For training data, we generate a large-scale photorealistic domain-randomized synthetic dataset of grasping in cluttered tabletop scenes that enables direct transfer to the real world. Our extensive experiments in synthetic and real-world environments demonstrate that our method significantly outperforms all the baselines in all the experiments while remaining in real-time. Project page can be found at https://pku-epic.github.io/GraspNeRF
false
false
false
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
323,343
2307.04693
COMEX: A Tool for Generating Customized Source Code Representations
Learning effective representations of source code is critical for any Machine Learning for Software Engineering (ML4SE) system. Inspired by natural language processing, large language models (LLMs) like Codex and CodeGen treat code as generic sequences of text and are trained on huge corpora of code data, achieving state-of-the-art performance on several software engineering (SE) tasks. However, valid source code, unlike natural language, follows a strict structure and pattern governed by the underlying grammar of the programming language. Current LLMs do not exploit this property of source code, as they treat code like a sequence of tokens and overlook key structural and semantic properties of code that can be extracted from code-views like the Control Flow Graph (CFG), Data Flow Graph (DFG), Abstract Syntax Tree (AST), etc. Unfortunately, the process of generating and integrating code-views for every programming language is cumbersome and time-consuming. To overcome this barrier, we propose our tool COMEX - a framework that allows researchers and developers to create and combine multiple code-views which can be used by machine learning (ML) models for various SE tasks. Some salient features of our tool are: (i) it works directly on source code (which need not be compilable), (ii) it currently supports Java and C#, (iii) it can analyze both method-level snippets and program-level snippets by using both intra-procedural and inter-procedural analysis, and (iv) it is easily extendable to other languages as it is built on tree-sitter - a widely used incremental parser that supports over 40 languages. We believe this easy-to-use code-view generation and customization tool will give impetus to research in source code representation learning methods and ML4SE. Tool: https://pypi.org/project/comex - GitHub: https://github.com/IBM/tree-sitter-codeviews - Demo: https://youtu.be/GER6U87FVbU
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
378,490
2211.08426
Automatic penalty and degree continuation for parallel pre-conditioned mesh curving on virtual geometry
We present a distributed parallel mesh curving method for virtual geometry. The main application is to generate large-scale curved meshes on complex geometry suitable for analysis with unstructured high-order methods. Accordingly, we devise the technique to generate geometrically accurate meshes composed of high-quality elements. To this end, we advocate for degree continuation on a penalty-based second-order optimizer that uses global tight tolerances to converge the distortion residuals. To reduce the method's memory footprint, waiting time, and energy consumption, we combine three main ingredients. First, we propose a matrix-free GMRES solver pre-conditioned with successive over-relaxation by blocks to reduce the memory footprint by a factor of three. Second, we propose an adaptive penalty technique to reduce the number of non-linear iterations. Third, we propose an indicator of the required linear solver tolerance to reduce the number of linear iterations. On thousands of cores, the method curves meshes composed of millions of quartic elements featuring highly stretched elements while matching a virtual topology.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
330,621
0711.1605
Asymptotic Capacity of Wireless Ad Hoc Networks with Realistic Links under a Honey Comb Topology
We consider the effects of Rayleigh fading and lognormal shadowing in the physical interference model for all the successful transmissions of traffic across the network. New bounds are derived for the capacity of a given random ad hoc wireless network that reflect packet drop or capture probability of the transmission links. These bounds are based on a simplified network topology termed as honey-comb topology under a given routing and scheduling scheme.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
886
1910.02255
Constructions of MDS Self-dual Codes from Short length
The systematic construction of MDS self-dual codes has been widely studied. In this paper, we consider the construction of MDS Euclidean self-dual codes from short lengths. Indeed, exact constructions of MDS Euclidean self-dual codes of short length ($n=3,4,5,6$) are given. In general, we construct more new $q$-ary MDS Euclidean self-dual codes from MDS self-dual codes of known lengths via generalized Reed-Solomon (GRS for short) codes and extended GRS codes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
148,189
2001.08503
ExEm: Expert Embedding using dominating set theory with deep learning approaches
A collaborative network is a social network that is comprised of experts who cooperate with each other to fulfill a special goal. Analyzing this network yields meaningful information about the expertise of these experts and their subject areas. To perform the analysis, graph embedding techniques have emerged as an effective and promising tool. Graph embedding attempts to represent graph nodes as low-dimensional vectors. In this paper, we propose a graph embedding method, called ExEm, that uses dominating-set theory and deep learning approaches to capture node representations. ExEm finds dominating nodes of the collaborative network and constructs intelligent random walks that comprise at least two dominating nodes. One dominating node should appear at the beginning of each sampled path to characterize the local neighborhoods. Moreover, the second dominating node reflects the global structure information. To learn the node embeddings, ExEm exploits three embedding methods including Word2vec, fastText and the concatenation of these two. The final result is the low-dimensional vectors of experts, called expert embeddings. The extracted expert embeddings can be applied to many applications. In order to extend these embeddings into the expert recommendation system, we introduce a novel strategy that uses expert vectors to calculate experts' scores and recommend experts. At the end, we conduct extensive experiments to validate the effectiveness of ExEm through assessing its performance over the multi-label classification, link prediction, and recommendation tasks on common datasets and our collected data formed by crawling the vast author Scopus profiles. The experiments show that ExEm outperforms the baselines especially in dense networks.
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
false
false
false
161,303
2307.10625
Learning Discriminative Visual-Text Representation for Polyp Re-Identification
Colonoscopic Polyp Re-Identification aims to match a specific polyp in a large gallery with different cameras and views, which plays a key role in the prevention and treatment of colorectal cancer in computer-aided diagnosis. However, traditional methods mainly focus on visual representation learning, while neglecting to explore the potential of semantic features during training, which may easily lead to poor generalization capability when adapting the pretrained model to new scenarios. To relieve this dilemma, we propose a simple but effective training method named VT-ReID, which can remarkably enrich the representation of polyp videos with the interchange of high-level semantic information. Moreover, we elaborately design a novel clustering mechanism to introduce prior knowledge from textual data, which leverages contrastive learning to promote better separation from abundant unlabeled text data. To the best of our knowledge, this is the first attempt to employ the visual-text feature with a clustering mechanism for colonoscopic polyp re-identification. Empirical results show that our method significantly outperforms current state-of-the-art methods with a clear margin.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
380,622
2308.06299
Defensive Perception: Estimation and Monitoring of Neural Network Performance under Deployment
In this paper, we propose a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving. Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution. As autonomous vehicles' safety is paramount, it is crucial for perception systems to recognize when the vehicle is leaving its operational design domain, anticipate hazardous uncertainty, and reduce the performance of the perception system. To address this, we propose to encapsulate the neural network under deployment within an uncertainty estimation envelope that is based on the epistemic uncertainty estimation through the Monte Carlo Dropout approach. This approach does not require modification of the deployed neural network and guarantees expected model performance. Our defensive perception envelope has the capability to estimate a neural network's performance, enabling monitoring and notification of entering domains of reduced neural network performance under deployment. Furthermore, our envelope is extended by novel methods to improve the application in deployment settings, including reducing compute expenses and confining estimation noise. Finally, we demonstrate the applicability of our method for multiple different potential deployment shifts relevant to autonomous driving, such as transitions into the night, rainy, or snowy domain. Overall, our approach shows great potential for application in deployment settings and enables operational design domain recognition via uncertainty, which allows for defensive perception, safe state triggers, warning notifications, and feedback for testing or development and adaptation of the perception stack.
false
false
false
false
true
false
true
true
false
false
true
true
false
false
false
false
false
false
385,089
1708.06081
Block Markov Superposition Transmission of BCH Codes with Iterative Erasures-and-Errors Decoders
In this paper, we present the block Markov superposition transmission of BCH (BMST-BCH) codes, which can be constructed to obtain a very low error floor. To reduce the implementation complexity, we design a low complexity iterative sliding-window decoding algorithm, in which only binary and/or erasure messages are processed and exchanged between processing units. The error floor can be predicted by a genie-aided lower bound, while the waterfall performance can be analyzed by the density evolution method. To evaluate the error floor of the constructed BMST-BCH codes at a very low bit error rate (BER) region, we propose a fast simulation approach. Numerical results show that, at a target BER of $10^{-15}$, the hard-decision decoding of the BMST-BCH codes with overhead $25\%$ can achieve a net coding gain (NCG) of $10.55$ dB. Furthermore, the soft-decision decoding can yield an NCG of $10.74$ dB. The construction of BMST-BCH codes is flexible to trade off latency against performance at all overheads of interest and may find applications in optical transport networks as an attractive candidate.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
79,272
1811.09150
MGANet: A Robust Model for Quality Enhancement of Compressed Video
In video compression, most of the existing deep learning approaches concentrate on the visual quality of a single frame, while ignoring the useful priors as well as the temporal information of adjacent frames. In this paper, we propose a multi-frame guided attention network (MGANet) to enhance the quality of compressed videos. Our network is composed of a temporal encoder that discovers inter-frame relations, a guided encoder-decoder subnet that encodes and enhances the visual patterns of the target frame, and a multi-supervised reconstruction component that aggregates information to predict details. We design a bidirectional residual convolutional LSTM unit to implicitly discover frame variations over time with respect to the target frame. Meanwhile, the guided map is proposed to guide our network to concentrate more on the block boundary. Our approach takes advantage of intra-frame prior information and inter-frame information to improve the quality of compressed video. Experimental results show the robustness and superior performance of the proposed method. Code is available at https://github.com/mengab/MGANet
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
114,197
1810.04773
The IFF Foundation for Ontological Knowledge Organization
This paper discusses an axiomatic approach for the integration of ontologies, an approach that extends to first order logic a previous approach (Kent 2000) based on information flow. This axiomatic approach is represented in the Information Flow Framework (IFF), a metalevel framework for organizing the information that appears in digital libraries, distributed databases and ontologies (Kent 2001). The paper argues that the integration of ontologies is the two-step process of alignment and unification. Ontological alignment consists of the sharing of common terminology and semantics through a mediating ontology. Ontological unification, concentrated in a virtual ontology of community connections, is fusion of the alignment diagram of participant community ontologies - the quotient of the sum of the participant portals modulo the ontological alignment structure.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
true
110,102
2311.18286
SimulFlow: Simultaneously Extracting Feature and Identifying Target for Unsupervised Video Object Segmentation
Unsupervised video object segmentation (UVOS) aims at detecting the primary objects in a given video sequence without any human intervention. Most existing methods rely on two-stream architectures that separately encode the appearance and motion information before fusing them to identify the target and generate object masks. However, this pipeline is computationally expensive and can lead to suboptimal performance due to the difficulty of fusing the two modalities properly. In this paper, we propose a novel UVOS model called SimulFlow that simultaneously performs feature extraction and target identification, enabling efficient and effective unsupervised video object segmentation. Concretely, we design a novel SimulFlow Attention mechanism to bridge the image and motion by utilizing the flexibility of the attention operation, where coarse masks predicted from fused features at each stage are used to constrain the attention operation within the mask area and exclude the impact of noise. Because of the bidirectional information flow between visual and optical flow features in SimulFlow Attention, no extra hand-designed fusing module is required and we only adopt a light decoder to obtain the final prediction. We evaluate our method on several benchmark datasets and achieve state-of-the-art results. Our proposed approach not only outperforms existing methods but also addresses the computational complexity and fusion difficulties caused by two-stream architectures. Our models achieve 87.4% J & F on DAVIS-16 with the highest speed (63.7 FPS on a 3090) and the lowest parameters (13.7 M). Our SimulFlow also obtains competitive results on video salient object detection datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
411,629
2102.07370
Leveraging Acoustic and Linguistic Embeddings from Pretrained speech and language Models for Intent Classification
Intent classification is a task in spoken language understanding. An intent classification system is usually implemented as a pipeline process, with a speech recognition module followed by text processing that classifies the intents. There are also studies of end-to-end systems that take acoustic features as input and classify the intents directly. Such systems don't take advantage of relevant linguistic information, and suffer from limited training data. In this work, we propose a novel intent classification framework that employs acoustic features extracted from a pretrained speech recognition system and linguistic features learned from a pretrained language model. We use a knowledge distillation technique to map the acoustic embeddings towards linguistic embeddings. We perform fusion of both acoustic and linguistic embeddings through a cross-attention approach to classify intents. With the proposed method, we achieve 90.86% and 99.07% accuracy on the ATIS and Fluent speech corpus, respectively.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
220,089
2205.10660
Vision Transformers in 2022: An Update on Tiny ImageNet
The recent advances in image transformers have shown impressive results and have largely closed the gap with traditional CNN architectures. The standard procedure is to train on large datasets like ImageNet-21k and then finetune on ImageNet-1k. After finetuning, researchers will often consider the transfer learning performance on smaller datasets such as CIFAR-10/100 but have left out Tiny ImageNet. This paper offers an update on vision transformers' performance on Tiny ImageNet. I include the Vision Transformer (ViT), Data Efficient Image Transformer (DeiT), Class Attention in Image Transformer (CaiT), and Swin Transformers. In addition, the Swin Transformer beats the current state-of-the-art result with a validation accuracy of 91.35%. Code is available here: https://github.com/ehuynh1106/TinyImageNet-Transformers
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
297,801
2406.13864
Evaluating representation learning on the protein structure universe
We introduce ProteinWorkshop, a comprehensive benchmark suite for representation learning on protein structures with Geometric Graph Neural Networks. We consider large-scale pre-training and downstream tasks on both experimental and predicted structures to enable the systematic evaluation of the quality of the learned structural representation and their usefulness in capturing functional relationships for downstream tasks. We find that: (1) large-scale pretraining on AlphaFold structures and auxiliary tasks consistently improve the performance of both rotation-invariant and equivariant GNNs, and (2) more expressive equivariant GNNs benefit from pretraining to a greater extent compared to invariant models. We aim to establish a common ground for the machine learning and computational biology communities to rigorously compare and advance protein structure representation learning. Our open-source codebase reduces the barrier to entry for working with large protein structure datasets by providing: (1) storage-efficient dataloaders for large-scale structural databases including AlphaFoldDB and ESM Atlas, as well as (2) utilities for constructing new tasks from the entire PDB. ProteinWorkshop is available at: github.com/a-r-j/ProteinWorkshop.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
466,028
1803.09340
DeepVesselNet: Vessel Segmentation, Centerline Prediction, and Bifurcation Detection in 3-D Angiographic Volumes
We present DeepVesselNet, an architecture tailored to the challenges faced when extracting vessel networks or trees and corresponding features in 3-D angiographic volumes using deep learning. We discuss the problems of low execution speed and high memory requirements associated with full 3-D convolutional networks, high-class imbalance arising from the low percentage of vessel voxels, and unavailability of accurately annotated training data - and offer solutions as the building blocks of DeepVesselNet. First, we formulate 2-D orthogonal cross-hair filters which make use of 3-D context information at a reduced computational burden. Second, we introduce a class balancing cross-entropy loss function with false positive rate correction to handle the high-class imbalance and high false positive rate problems associated with existing loss functions. Finally, we generate synthetic dataset using a computational angiogenesis model capable of generating vascular trees under physiological constraints on local network structure and topology and use these data for transfer learning. DeepVesselNet is optimized for segmenting and analyzing vessels, and we test the performance on a range of angiographic volumes including clinical MRA data of the human brain, as well as X-ray tomographic microscopy scans of the rat brain. Our experiments show that, by replacing 3-D filters with cross-hair filters in our network, we achieve over 23% improvement in speed, lower memory footprint, lower network complexity which prevents overfitting and comparable accuracy (with a Cox-Wilcoxon paired sample significance test p-value of 0.07 when compared to full 3-D filters). Our class balancing metric is crucial for training the network and transfer learning with synthetic data is an efficient, robust, and very generalizable approach leading to a network that excels in a variety of angiography segmentation tasks.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
93,474
2109.14746
Improvising the Learning of Neural Networks on Hyperspherical Manifold
The impact of convolution neural networks (CNNs) in the supervised settings provided a tremendous increment in performance. The representations learned from CNNs operated on the hyperspherical manifold led to insightful outcomes in face recognition, face identification, and other supervised tasks. A broad range of activation functions were developed with hypersphere intuition which perform superior to softmax in euclidean space. The main motive of this research is to provide insights. First, the stereographic projection is applied to transform data from Euclidean space ($\mathbb{R}^{n}$) to the hyperspherical manifold ($\mathbb{S}^{n}$) to analyze the performance of angular margin losses. Second, we prove theoretically and practically that decision boundaries constructed on the hypersphere using stereographic projection oblige the learning of neural networks. Experiments have demonstrated that applying stereographic projection on existing state-of-the-art angular margin objective functions improved performance for standard image classification data sets (CIFAR-10,100). Further, we ran our experiments on malaria-thin blood smear images, resulting in effective outcomes. The code is publicly available at: https://github.com/barulalithb/stereo-angular-margin.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
258,055
2305.19575
On the Linear Convergence of Policy Gradient under Hadamard Parameterization
The convergence of deterministic policy gradient under the Hadamard parameterization is studied in the tabular setting and the linear convergence of the algorithm is established. To this end, we first show that the error decreases at an $O(\frac{1}{k})$ rate for all the iterations. Based on this result, we further show that the algorithm has a faster local linear convergence rate after $k_0$ iterations, where $k_0$ is a constant that only depends on the MDP problem and the initialization. To show the local linear convergence of the algorithm, we have indeed established the contraction of the sub-optimal probability $b_s^k$ (i.e., the probability of the output policy $\pi^k$ on non-optimal actions) when $k\ge k_0$.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
369,589
1501.04878
A Light Transport Model for Mitigating Multipath Interference in TOF Sensors
Continuous-wave Time-of-flight (TOF) range imaging has become a commercially viable technology with many applications in computer vision and graphics. However, the depth images obtained from TOF cameras contain scene dependent errors due to multipath interference (MPI). Specifically, MPI occurs when multiple optical reflections return to a single spatial location on the imaging sensor. Many prior approaches to rectifying MPI rely on sparsity in optical reflections, which is an extreme simplification. In this paper, we correct MPI by combining the standard measurements from a TOF camera with information from direct and global light transport. We report results on both simulated experiments and physical experiments (using the Kinect sensor). Our results, evaluated against ground truth, demonstrate a quantitative improvement in depth accuracy.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
39,430
0705.0423
Encoding for the Blackwell Channel with Reinforced Belief Propagation
A key idea in coding for the broadcast channel (BC) is binning, in which the transmitter encodes information by selecting a codeword from an appropriate bin (the messages are thus the bin indexes). This selection is normally done by solving an appropriate (possibly difficult) combinatorial problem. Recently it has been shown that binning for the Blackwell channel --a particular BC-- can be done by iterative schemes based on Survey Propagation (SP). This method uses decimation for SP and suffers a complexity of O(n^2). In this paper we propose a new variation of the Belief Propagation (BP) algorithm, named the Reinforced BP algorithm, that turns BP into a solver. Our simulations show that this new algorithm has complexity O(n log n). Using this new algorithm together with a non-linear coding scheme, we can efficiently achieve rates close to the border of the capacity region of the Blackwell channel.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
144
2403.09997
Identifying Health Risks from Family History: A Survey of Natural Language Processing Techniques
Electronic health records include information on patients' status and medical history, which could cover the history of diseases and disorders that could be hereditary. One important use of family history information is in precision health, where the goal is to keep the population healthy with preventative measures. Natural Language Processing (NLP) and machine learning techniques can assist with identifying information that could assist health professionals in identifying health risks before a condition is developed in their later years, saving lives and reducing healthcare costs. We survey the literature on the techniques from the NLP field that have been developed to utilise digital health records to identify risks of familial diseases. We highlight that rule-based methods are heavily investigated and are still actively used for family history extraction. Still, more recent efforts have been put into building neural models based on large-scale pre-trained language models. In addition to the areas where NLP has successfully been utilised, we also identify the areas where more research is needed to unlock the value of patients' records regarding data collection, task formulation and downstream applications.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
437,992
2006.11419
FISAR: Forward Invariant Safe Reinforcement Learning with a Deep Neural Network-Based Optimizer
This paper investigates reinforcement learning with constraints, which are indispensable in safety-critical environments. To drive the constraint violation to decrease monotonically, we take the constraints as Lyapunov functions and impose new linear constraints on the policy parameters' updating dynamics. As a result, the original safety set can be forward-invariant. However, because the new guaranteed-feasible constraints are imposed on the updating dynamics instead of the original policy parameters, classic optimization algorithms are no longer applicable. To address this, we propose to learn a generic deep neural network (DNN)-based optimizer to optimize the objective while satisfying the linear constraints. The constraint-satisfaction is achieved via projection onto a polytope formulated by multiple linear inequality constraints, which can be solved analytically with our newly designed metric. To the best of our knowledge, this is the \textit{first} DNN-based optimizer for constrained optimization with the forward invariance guarantee. We show that our optimizer trains a policy to decrease the constraint violation and maximize the cumulative reward monotonically. Results on numerical constrained optimization and obstacle-avoidance navigation validate the theoretical findings.
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
183,222
2302.06898
Take a Prior from Other Tasks for Severe Blur Removal
Recovering clear structures from severely blurry inputs is a challenging problem due to the large movements between the camera and the scene. Although some works apply segmentation maps on human face images for deblurring, they cannot handle natural scenes because objects and degradation are more complex, and inaccurate segmentation maps lead to a loss of details. For general scene deblurring, the feature space of the blurry image and corresponding sharp image under the high-level vision task is closer, which inspires us to rely on other tasks (e.g. classification) to learn a comprehensive prior in severe blur removal cases. We propose a cross-level feature learning strategy based on knowledge distillation to learn the priors, which include global contexts and sharp local structures for recovering potential details. In addition, we propose a semantic prior embedding layer with multi-level aggregation and semantic attention transformation to integrate the priors effectively. We introduce the proposed priors to various models, including the UNet and other mainstream deblurring baselines, leading to better performance on severe blur removal. Extensive experiments on natural image deblurring benchmarks and real-world images, such as GoPro and RealBlur datasets, demonstrate our method's effectiveness and generalization ability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
345,572
2303.07811
ICICLE: Interpretable Class Incremental Continual Learning
Continual learning enables incremental learning of new tasks without forgetting those previously learned, resulting in positive knowledge transfer that can enhance performance on both new and old tasks. However, continual learning poses new challenges for interpretability, as the rationale behind model predictions may change over time, leading to interpretability concept drift. We address this problem by proposing Interpretable Class-InCremental LEarning (ICICLE), an exemplar-free approach that adopts a prototypical part-based approach. It consists of three crucial novelties: interpretability regularization that distills previously learned concepts while preserving user-friendly positive reasoning; proximity-based prototype initialization strategy dedicated to the fine-grained setting; and task-recency bias compensation devoted to prototypical parts. Our experimental results demonstrate that ICICLE reduces the interpretability concept drift and outperforms the existing exemplar-free methods of common class-incremental learning when applied to concept-based models.
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
351,390
2406.04301
Neural Surface Reconstruction from Sparse Views Using Epipolar Geometry
This paper addresses the challenge of reconstructing surfaces from sparse view inputs, where ambiguity and occlusions due to missing information pose significant hurdles. We present a novel approach, named EpiS, that incorporates Epipolar information into the reconstruction process. Existing methods in sparse-view neural surface learning have mainly focused on mean and variance considerations using cost volumes for feature extraction. In contrast, our method aggregates coarse information from the cost volume into Epipolar features extracted from multiple source views, enabling the generation of fine-grained Signed Distance Function (SDF)-aware features. Additionally, we employ an attention mechanism along the line dimension to facilitate feature fusion based on the SDF feature. Furthermore, to address the information gaps in sparse conditions, we integrate depth information from monocular depth estimation using global and local regularization techniques. The global regularization utilizes a triplet loss function, while the local regularization employs a derivative loss function. Extensive experiments demonstrate that our approach outperforms state-of-the-art methods, especially in cases with sparse and generalizable conditions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
461,612
1511.06853
TransCut: Transparent Object Segmentation from a Light-Field Image
The segmentation of transparent objects can be very useful in computer vision applications. However, because they borrow texture from their background and have a similar appearance to their surroundings, transparent objects are not handled well by regular image segmentation methods. We propose a method that overcomes these problems using the consistency and distortion properties of a light-field image. Graph-cut optimization is applied for the pixel labeling problem. The light-field linearity is used to estimate the likelihood of a pixel belonging to the transparent object or Lambertian background, and the occlusion detector is used to find the occlusion boundary. We acquire a light field dataset for the transparent object, and use this dataset to evaluate our method. The results demonstrate that the proposed method successfully segments transparent objects from the background.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
49,336
1812.04152
Duelling Bandits with Weak Regret in Adversarial Environments
Research on the multi-armed bandit problem has studied the trade-off of exploration and exploitation in depth. However, there are numerous applications where the cardinal absolute-valued feedback model (e.g. ratings from one to five) is not suitable. This has motivated the formulation of the duelling bandits problem, where the learner picks a pair of actions and observes a noisy binary feedback, indicating a relative preference between the two. There exist a multitude of different settings and interpretations of the problem for two reasons. First, due to the absence of a total order of actions, there is no natural definition of the best action. Existing work either explicitly assumes the existence of a linear order, or uses a custom definition for the winner. Second, there are multiple reasonable notions of regret to measure the learner's performance. Most prior work has been focussing on the $\textit{strong regret}$, which averages the quality of the two actions picked. This work focusses on the $\textit{weak regret}$, which is based on the quality of the better of the two actions selected. Weak regret is the more appropriate performance measure when the pair's inferior action has no significant detrimental effect on the pair's quality. We study the duelling bandits problem in the adversarial setting. We provide an algorithm which has theoretical guarantees in both the utility-based setting, which implies a total order, and the unrestricted setting. For the latter, we work with the $\textit{Borda winner}$, finding the action maximising the probability of winning against an action sampled uniformly at random. The thesis concludes with experimental results based on both real-world data and synthetic data, showing the algorithm's performance and limitations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
116,149
2206.03210
Deep Neural Patchworks: Coping with Large Segmentation Tasks
Convolutional neural networks are the way to solve arbitrary image segmentation tasks. However, when images are large, memory demands often exceed the available resources, in particular on a common GPU. Especially in biomedical imaging, where 3D images are common, the problems are apparent. A typical approach to solve this limitation is to break the task into smaller subtasks by dividing images into smaller image patches. Another approach, if applicable, is to look at the 2D image sections separately, and to solve the problem in 2D. Often, the loss of global context makes such approaches less effective; important global information might not be present in the current image patch, or the selected 2D image section. Here, we propose Deep Neural Patchworks (DNP), a segmentation framework that is based on hierarchical and nested stacking of patch-based networks that solves the dilemma between global context and memory limitations.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
301,183
2404.04587
Neuroevolving Electronic Dynamical Networks
Neuroevolution is a powerful method that applies an evolutionary algorithm to refine the performance of artificial neural networks through natural selection; however, the fitness evaluation of these networks can be time-consuming and computationally expensive, particularly for continuous time recurrent neural networks (CTRNNs) that necessitate the simulation of differential equations. To overcome this challenge, field programmable gate arrays (FPGAs) have emerged as an increasingly popular solution, due to their high performance and low power consumption. Further, their ability to undergo dynamic and partial reconfiguration enables the extremely rapid evaluation of the fitness of CTRNNs, effectively addressing the bottleneck associated with conventional methods of evolvable hardware. By incorporating fitness evaluation directly upon the programmable logic of the FPGA, hyper-parallel evaluation becomes feasible, dramatically reducing the time required for assessment. This inherent parallelism of FPGAs accelerates the entire neuroevolutionary process by several orders of magnitude, facilitating faster convergence to an optimal solution. The work presented in this study demonstrates the potential of utilizing dynamic and partial reconfiguration on capable FPGAs as a powerful platform for neuroevolving dynamic neural networks.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
false
true
444,713
1404.3626
Optimal Power Flow as a Polynomial Optimization Problem
Formulating the alternating current optimal power flow (ACOPF) as a polynomial optimization problem makes it possible to solve large instances in practice and to guarantee asymptotic convergence in theory.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
32,327
1509.02629
Error Bounds for Finite-Dimensional Approximations of Input-Output Open Quantum Systems by Subspace Truncation and Adiabatic Elimination
An important class of physical systems that are of interest in practice are input-output open quantum systems that can be described by quantum stochastic differential equations and defined on an infinite-dimensional underlying Hilbert space. Most commonly, these systems involve coupling to a quantum harmonic oscillator as a system component. This paper is concerned with error bounds in the finite-dimensional approximations of input-output open quantum systems defined on an infinite-dimensional Hilbert space. We develop a framework for deriving error bounds between the time evolution of the state of a class of infinite-dimensional quantum systems and its approximation on a finite-dimensional subspace of the original, when both are initialized in the latter subspace. This framework is then applied to two approaches for obtaining finite-dimensional approximations: subspace truncation and adiabatic elimination. Applications of the bounds to some physical examples drawn from the literature are provided to illustrate our results.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
46,750
2111.01888
A dataset for multi-sensor drone detection
The use of small and remotely controlled unmanned aerial vehicles (UAVs), or drones, has increased in recent years. This goes in parallel with misuse episodes, with an evident threat to the safety of people or facilities. As a result, the detection of UAVs has also emerged as a research topic. Most studies on drone detection fail to specify the type of acquisition device, the drone type, the detection range, or the dataset. The lack of proper UAV detection studies employing thermal infrared cameras is also an issue, despite its success with other targets. Moreover, we have not found any previous study that addresses the detection task as a function of distance to the target. Sensor fusion is indicated as an open research issue as well, although research in this direction is scarce too. To counteract the mentioned issues and allow fundamental studies with a common public benchmark, we contribute an annotated multi-sensor database for drone detection that includes infrared and visible videos and audio files. The database includes three different drones of different sizes, as well as other flying objects that can be mistakenly detected as drones, such as birds, airplanes or helicopters. In addition to using several different sensors, the number of classes is higher than in previous studies. To allow studies as a function of the sensor-to-target distance, the dataset is divided into three categories (Close, Medium, Distant) according to the industry-standard Detect, Recognize and Identify (DRI) requirements, built on the Johnson criteria. Given that the drones must be flown within visual range due to regulations, the largest sensor-to-target distance for a drone is 200 m, and acquisitions are made in daylight. The data has been obtained at three airports in Sweden: Halmstad Airport (IATA code: HAD/ICAO code: ESMT), Gothenburg City Airport (GSE/ESGP) and Malm\"o Airport (MMX/ESMS).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
264,691
1911.12850
Quality analysis of DCGAN-generated mammography lesions
Medical image synthesis has gained significant attention recently, especially after the introduction of Generative Adversarial Networks (GANs). GANs have been used widely to provide anatomically-plausible and diverse samples for augmentation and other applications, including segmentation and super resolution. In our previous work, Deep Convolutional GANs were used to generate synthetic mammogram lesions, mainly masses, that could enhance the classification performance in imbalanced datasets. In this new work, a deeper investigation was carried out to explore other aspects of the generated images' evaluation, i.e., realism, feature space distribution, and observer studies. t-Stochastic Neighbor Embedding (t-SNE) was used to reduce the dimensionality of real and fake images to enable 2D visualisations. Additionally, two expert radiologists performed a realism-evaluation study. Visualisations showed that the generated images have a feature distribution similar to that of the real ones, avoiding outliers. Moreover, the Receiver Operating Characteristic (ROC) curve showed that the radiologists could not, in many cases, distinguish between synthetic and real lesions, giving 48% and 61% accuracies in a balanced sample set.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
155,504
1406.2150
ML Detection for MIMO Systems under Channel Estimation Errors
In wireless communication systems, the use of multiple antennas at both the transmitter and receiver is a widely known method for improving both reliability and data rates, as it increases the former through transmit or receive diversity and the latter by spatial multiplexing. In order to detect signals, channel state information (CSI) is typically required at the receiver; however, the estimation of CSI is not perfect in practical systems, which causes performance degradation. In this paper, we propose a novel maximum likelihood (ML) scheme that is robust to channel information errors. By assuming a bound on the total power of channel estimation errors, we apply an optimization method to estimate the instantaneous covariance of channel estimation errors in order to minimize the ML cost function. To reduce computational complexity, we also propose an iterative sphere decoding scheme based on the proposed ML detection method. Simulation results show that the proposed algorithm provides a performance gain in terms of error probability relative to existing algorithms.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
33,720