Dataset schema (one record per arXiv paper):
- id: string, length 9 to 16
- title: string, length 4 to 278
- abstract: string, length 3 to 4.08k
- 18 boolean category flags (2 classes each): cs.HC, cs.CE, cs.SD, cs.SI, cs.AI, cs.IR, cs.LG, cs.RO, cs.CL, cs.IT, cs.SY, cs.CV, cs.CR, cs.CY, cs.MA, cs.NE, cs.DB, Other
- __index_level_0__: int64, 0 to 541k

In the records below, each "labels" line lists the category flags that are true for that paper.
id: 1906.01083
title: MelNet: A Generative Model for Audio in the Frequency Domain
Capturing high-level structure in audio waveforms is challenging because a single second of audio spans tens of thousands of timesteps. While long-range dependencies are difficult to model directly in the time domain, we show that they can be more tractably modelled in two-dimensional time-frequency representations such as spectrograms. By leveraging this representational advantage, in conjunction with a highly expressive probabilistic model and a multiscale generation procedure, we design a model capable of generating high-fidelity audio samples which capture structure at timescales that time-domain models have yet to achieve. We apply our model to a variety of audio generation tasks, including unconditional speech generation, music generation, and text-to-speech synthesis---showing improvements over previous approaches in both density estimates and human judgments.
labels: cs.SD, cs.LG
__index_level_0__: 133,586

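To make the representational point concrete, here is a minimal numpy sketch (not the authors' code) of the STFT magnitude spectrogram that underlies such time-frequency models; the frame and hop sizes are illustrative assumptions, not the paper's settings.

```python
# A second of audio at 16 kHz has 16,000 timesteps; framing it into an STFT
# magnitude spectrogram yields a much shorter 2-D sequence to model.
import numpy as np

def stft_magnitude(waveform, frame_len=1024, hop=256):
    window = np.hanning(frame_len)
    n_frames = 1 + (len(waveform) - frame_len) // hop
    frames = np.stack([waveform[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))  # (n_frames, frame_len//2 + 1)

one_second = np.random.randn(16000)  # noise as a stand-in for real audio
spec = stft_magnitude(one_second)
print(spec.shape)                    # (59, 513): 59 steps instead of 16,000
```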
id: 2406.15252
title: VideoScore: Building Automatic Metrics to Simulate Fine-grained Human Feedback for Video Generation
Recent years have witnessed great advances in video generation. However, the development of automatic video metrics is lagging significantly behind. None of the existing metrics is able to provide reliable scores over generated videos. The main barrier is the lack of a large-scale human-annotated dataset. In this paper, we release VideoFeedback, the first large-scale dataset containing human-provided multi-aspect scores over 37.6K synthesized videos from 11 existing video generative models. We train VideoScore (initialized from Mantis) on VideoFeedback to enable automatic video quality assessment. Experiments show that the Spearman correlation between VideoScore and humans can reach 77.1 on VideoFeedback-test, beating the prior best metrics by about 50 points. Further results on the held-out EvalCrafter, GenAI-Bench, and VBench show that VideoScore has consistently much higher correlation with human judges than other metrics. Given these results, we believe VideoScore can serve as a great proxy for human raters to (1) rate different video models to track progress and (2) simulate fine-grained human feedback in Reinforcement Learning with Human Feedback (RLHF) to improve current video generation models.
labels: cs.AI, cs.CV
__index_level_0__: 466,672

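The headline number in this abstract is a Spearman rank correlation between metric scores and human ratings; the toy computation below, with made-up scores, shows how such a number is obtained.

```python
# Spearman correlation between a metric's scores and human ratings.
# All values here are invented for illustration.
from scipy.stats import spearmanr

metric_scores = [0.72, 0.41, 0.88, 0.15, 0.63]
human_scores  = [4.0,  2.5,  4.5,  1.0,  3.5]
rho, _ = spearmanr(metric_scores, human_scores)
print(f"Spearman correlation: {rho:.3f}")  # 1.0 here, since the rankings agree
```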
id: 2210.05888
title: Calibration and Uncertainty Characterization for Ultra-Wideband Two-Way-Ranging Measurements
Ultra-Wideband (UWB) systems are becoming increasingly popular for indoor localization, where range measurements are obtained by measuring the time-of-flight of radio signals. However, the range measurements typically suffer from a systematic error or bias that must be corrected for high-accuracy localization. In this paper, a ranging protocol is proposed alongside a robust and scalable antenna-delay calibration procedure to accurately and efficiently calibrate antenna delays for many UWB tags. Additionally, the bias and uncertainty of the measurements are modelled as a function of the received-signal power. The full calibration procedure is presented using experimental training data of 3 aerial robots fitted with 2 UWB tags each, and then evaluated on 2 test experiments. A localization problem is then formulated on the experimental test data, and the calibrated measurements and their modelled uncertainty are fed into an extended Kalman filter (EKF). The proposed calibration is shown to yield an average of 46% improvement in localization accuracy. Lastly, the paper is accompanied by an open-source UWB-calibration Python library, which can be found at https://github.com/decargroup/uwb_calibration.
labels: cs.RO
__index_level_0__: 323,043

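A hedged sketch of the power-dependent bias correction described above: fit bias versus received-signal power on training data, then subtract the predicted bias from raw ranges. The cubic polynomial and all numbers are illustrative assumptions; the paper's exact functional form may differ.

```python
# Fit a bias-vs-power model on (made-up) training data, then correct ranges.
import numpy as np

power = np.array([-95.0, -90.0, -85.0, -80.0, -75.0])  # received power, dBm
bias  = np.array([0.35, 0.28, 0.20, 0.15, 0.12])       # observed bias, metres
coeffs = np.polyfit(power, bias, deg=3)                 # illustrative model

def corrected_range(raw_range_m, rx_power_dbm):
    # Subtract the power-dependent bias predicted by the fitted model.
    return raw_range_m - np.polyval(coeffs, rx_power_dbm)

print(corrected_range(5.0, -85.0))  # ~4.8 m after bias removal
```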
id: 2111.07975
title: Semantically Grounded Object Matching for Robust Robotic Scene Rearrangement
Object rearrangement has recently emerged as a key competency in robot manipulation, with practical solutions generally involving object detection, recognition, grasping and high-level planning. Goal-images describing a desired scene configuration are a promising and increasingly used mode of instruction. A key outstanding challenge is the accurate inference of matches between objects in front of a robot, and those seen in a provided goal image, where recent works have struggled in the absence of object-specific training data. In this work, we explore the deterioration of existing methods' ability to infer matches between objects as the visual shift between observed and goal scenes increases. We find that a fundamental limitation of the current setting is that source and target images must contain the same $\textit{instance}$ of every object, which restricts practical deployment. We present a novel approach to object matching that uses a large pre-trained vision-language model to match objects in a cross-instance setting by leveraging semantics together with visual features as a more robust, and much more general, measure of similarity. We demonstrate that this provides considerably improved matching performance in cross-instance settings, and can be used to guide multi-object rearrangement with a robot manipulator from an image that shares no object $\textit{instances}$ with the robot's scene.
labels: cs.RO, cs.CV
__index_level_0__: 266,530

id: 2309.08139
title: Multi-Scale Estimation for Omni-Directional Saliency Maps Using Learnable Equator Bias
Omni-directional images have been used in a wide range of applications. For these applications, it would be useful to estimate saliency maps representing probability distributions of gazing points with a head-mounted display, to detect important regions in the omni-directional images. This paper proposes a novel saliency-map estimation model for omni-directional images by extracting overlapping 2-dimensional (2D) plane images from omni-directional images at various directions and angles of view. While 2D saliency maps tend to have high probability at the center of images (center bias), the high-probability region appears at horizontal directions in omni-directional saliency maps when a head-mounted display is used (equator bias). Therefore, the 2D saliency model with a center-bias layer was fine-tuned on an omni-directional dataset by replacing the center-bias layer with an equator-bias layer conditioned on the elevation angle of the extracted 2D plane image. The limited availability of omni-directional images in saliency datasets can be compensated for by using the well-established 2D saliency model pretrained on a large number of training images with ground-truth 2D saliency maps. In addition, this paper proposes a multi-scale estimation method that extracts 2D images at multiple angles of view to detect objects of various sizes with variable receptive fields. The saliency maps estimated from the multiple angles of view were integrated using pixel-wise attention weights, calculated in an integration layer, that weight the optimal scale for each object. The proposed method was evaluated on a publicly available dataset with evaluation metrics for omni-directional saliency maps. It was confirmed that the proposed method improved the accuracy of the estimated saliency maps.
labels: cs.CV
__index_level_0__: 392,045

id: 2311.14339
title: Towards Concept-based Interpretability of Skin Lesion Diagnosis using Vision-Language Models
Concept-based models naturally lend themselves to the development of inherently interpretable skin lesion diagnosis, as medical experts make decisions based on a set of visual patterns of the lesion. Nevertheless, the development of these models depends on the existence of concept-annotated datasets, whose availability is scarce due to the specialized knowledge and expertise required in the annotation process. In this work, we show that vision-language models can be used to alleviate the dependence on a large number of concept-annotated samples. In particular, we propose an embedding learning strategy to adapt CLIP to the downstream task of skin lesion classification using concept-based descriptions as textual embeddings. Our experiments reveal that vision-language models not only attain better accuracy when using concepts as textual embeddings, but also require a smaller number of concept-annotated samples to attain comparable performance to approaches specifically devised for automatic concept generation.
labels: cs.CV
__index_level_0__: 410,077

id: 2210.04513
title: Improving Continual Relation Extraction through Prototypical Contrastive Learning
Continual relation extraction (CRE) aims to extract relations from continuously and iteratively arriving new data, where the major challenge is catastrophic forgetting of old tasks. To alleviate this critical problem and enhance CRE performance, we propose a novel Continual Relation Extraction framework with Contrastive Learning, namely CRECL, which is built with a classification network and a prototypical contrastive network to achieve incremental-class learning for CRE. Specifically, in the contrastive network a given instance is contrasted with the prototype of each candidate relation stored in the memory module. Such a contrastive learning scheme makes the data distributions of all tasks more distinguishable, further alleviating catastrophic forgetting. Our experimental results not only demonstrate CRECL's advantage over state-of-the-art baselines on two public datasets, but also verify the effectiveness of CRECL's contrastive learning in improving CRE performance.
labels: cs.IR
__index_level_0__: 322,484

id: 2502.00379
title: Latent Action Learning Requires Supervision in the Presence of Distractors
Recently, latent action learning, pioneered by Latent Action Policies (LAPO), has shown remarkable pre-training efficiency on observation-only data, offering potential for leveraging the vast amounts of video available on the web for embodied AI. However, prior work has focused on distractor-free data, where changes between observations are primarily explained by ground-truth actions. Unfortunately, real-world videos contain action-correlated distractors that may hinder latent action learning. Using the Distracting Control Suite (DCS), we empirically investigate the effect of distractors on latent action learning and demonstrate that LAPO struggles in this scenario. We propose LAOM, a simple LAPO modification that improves the quality of latent actions by 8x, as measured by linear probing. Importantly, we show that providing supervision with ground-truth actions, on as little as 2.5% of the full dataset, during latent action learning improves downstream performance by 4.2x on average. Our findings suggest that integrating supervision during Latent Action Model (LAM) training is critical in the presence of distractors, challenging the conventional pipeline of first learning a LAM and only then decoding from latent to ground-truth actions.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 529,353

id: 2002.03839
title: Adversarial Attacks on Linear Contextual Bandits
Contextual bandit algorithms are applied in a wide range of domains, from advertising to recommender systems, from clinical trials to education. In many of these domains, malicious agents may have incentives to attack the bandit algorithm to induce it to perform a desired behavior. For instance, an unscrupulous ad publisher may try to increase their own revenue at the expense of the advertisers; a seller may want to increase the exposure of their products, or thwart a competitor's advertising campaign. In this paper, we study several attack scenarios and show that a malicious agent can force a linear contextual bandit algorithm to pull any desired arm $T - o(T)$ times over a horizon of $T$ steps, while applying adversarial modifications to either rewards or contexts that only grow logarithmically as $O(\log T)$. We also investigate the case when a malicious agent is interested in affecting the behavior of the bandit algorithm in a single context (e.g., a specific user). We first provide sufficient conditions for the feasibility of the attack and we then propose an efficient algorithm to perform the attack. We validate our theoretical results on experiments performed on both synthetic and real-world datasets.
labels: cs.LG
__index_level_0__: 163,422

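To make the attack setting concrete, here is a toy reward-poisoning simulation against an epsilon-greedy multi-armed bandit. This is a deliberate simplification: the paper studies linear contextual bandits and designs attacks whose total modification grows only logarithmically, whereas this naive sketch pays a cost on every non-target pull.

```python
# Naive reward-poisoning sketch: whenever a non-target arm is pulled, the
# attacker rewrites its reward to look worse than the target arm's mean.
import numpy as np

rng = np.random.default_rng(0)
true_means, target = np.array([0.9, 0.5, 0.3]), 2
counts, sums = np.zeros(3), np.zeros(3)
attack_cost, T = 0.0, 5000

for t in range(T):
    if counts.min() == 0 or rng.random() < 0.05:   # explore
        arm = int(rng.integers(3))
    else:                                          # exploit empirical means
        arm = int(np.argmax(sums / counts))
    reward = true_means[arm] + 0.1 * rng.standard_normal()
    if arm != target:                              # attacker intervenes
        clean = reward
        reward = true_means[target] - 0.2          # now worse than the target
        attack_cost += abs(clean - reward)
    counts[arm] += 1
    sums[arm] += reward

print(f"target arm pulled {int(counts[target])}/{T} times; "
      f"total attack cost {attack_cost:.1f}")
```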
id: 1812.08131
title: Endogenous Epistemic Factionalization
Why do people who disagree about one subject tend to disagree about other subjects as well? In this paper, we introduce a model to explore this phenomenon of "epistemic factionization". Agents attempt to discover the truth about multiple propositions by testing the world and sharing evidence gathered. But agents tend to mistrust evidence shared by those who do not hold similar beliefs. This mistrust leads to the endogenous emergence of factions of agents with multiple, highly correlated, polarized beliefs.
labels: cs.SI
__index_level_0__: 116,949

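A minimal agent-based sketch of the stated mechanism: agents share evidence on two propositions but discount evidence from agents whose beliefs are distant from their own. The update rule, trust function, and all parameters are illustrative assumptions, not the paper's specification.

```python
# Trust-weighted evidence sharing on two independent propositions.
import numpy as np

rng = np.random.default_rng(1)
n_agents, n_steps = 30, 2000
beliefs = rng.uniform(0.2, 0.8, size=(n_agents, 2))   # P(proposition true)

for _ in range(n_steps):
    sharer = rng.integers(n_agents)                    # one agent shares evidence
    topic = rng.integers(2)
    evidence = 0.05 if rng.random() < 0.5 else -0.05   # noisy test of the world
    distance = np.abs(beliefs - beliefs[sharer]).sum(axis=1)
    trust = np.clip(1.0 - distance, 0.0, 1.0)          # mistrust distant agents
    beliefs[:, topic] = np.clip(beliefs[:, topic] + trust * evidence, 0.0, 1.0)

corr = np.corrcoef(beliefs[:, 0], beliefs[:, 1])[0, 1]
print(f"belief correlation across topics: {corr:.2f}")  # factions show up as large |corr|
```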
id: 2406.13892
title: Adaptable Logical Control for Large Language Models
Despite the success of Large Language Models (LLMs) on various tasks following human instructions, controlling model generation at inference time poses a persistent challenge. In this paper, we introduce Ctrl-G, an adaptable framework that facilitates tractable and flexible control of LLM generation to reliably follow logical constraints. Ctrl-G combines any production-ready LLM with a Hidden Markov Model, enabling LLM outputs to adhere to logical constraints represented as deterministic finite automata. We show that Ctrl-G, when applied to a TULU2-7B model, outperforms GPT3.5 and GPT4 on the task of interactive text editing: specifically, for generating text insertions/continuations that follow logical constraints, Ctrl-G achieves a satisfaction rate over 30% higher than GPT4 in human evaluation. When applied to medium-size language models (e.g., GPT2-large), Ctrl-G also beats its counterparts for constrained generation by large margins on standard benchmarks. Additionally, as a proof-of-concept study, we apply Ctrl-G to the Grade School Math benchmark to assist LLM reasoning, foreshadowing the application of Ctrl-G, as well as other constrained generation approaches, beyond traditional language generation tasks.
labels: cs.CL
__index_level_0__: 466,044

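The constraint mechanism can be sketched as DFA-guided token masking: at each step, disallow any token whose DFA transition can no longer reach an accepting state. The toy below uses a two-letter vocabulary and a hand-built DFA requiring the output to contain "ab"; Ctrl-G itself composes an HMM with the LLM, which this greedy sketch does not implement.

```python
# DFA states: 0 = nothing seen, 1 = just saw 'a', 2 = "ab" seen (accepting).
DELTA = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 2}, 2: {'a': 2, 'b': 2}}

def can_accept(state, steps_left):
    # Can we still reach the accepting state with this many tokens to go?
    if state == 2:
        return True
    return steps_left >= (1 if state == 1 else 2)

def constrained_greedy(logprobs_fn, length=5):
    state, out = 0, []
    for i in range(length):
        scores = logprobs_fn(out)                        # {'a': logp, 'b': logp}
        allowed = {c: lp for c, lp in scores.items()
                   if can_accept(DELTA[state][c], length - i - 1)}
        char = max(allowed, key=allowed.get)             # greedy among feasible
        state = DELTA[state][char]
        out.append(char)
    return ''.join(out)

# A toy "model" that always prefers 'b'; the DFA still forces "ab" to appear.
print(constrained_greedy(lambda prefix: {'a': -1.0, 'b': -0.5}))  # "bbbab"
```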
id: 1811.11718
title: Partial Convolution based Padding
In this paper, we present a simple yet effective padding scheme that can be used as a drop-in module for existing convolutional neural networks. We call it partial convolution based padding, with the intuition that the padded region can be treated as holes and the original input as non-holes. Specifically, during the convolution operation, the convolution results are re-weighted near image borders based on the ratios between the padded area and the convolution sliding window area. Extensive experiments with various deep network models on ImageNet classification and semantic segmentation demonstrate that the proposed padding scheme consistently outperforms standard zero padding with better accuracy.
labels: cs.CV
__index_level_0__: 114,839

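The re-weighting rule is simple enough to sketch directly. Below is a hedged numpy version for a single-channel image: zero-pad as usual, then scale each output by (window area / number of valid, non-padded pixels in the window). The paper's implementation operates inside CNN layers; this is only the border arithmetic.

```python
import numpy as np

def partial_conv2d(x, kernel):
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))                  # zero padding
    mask = np.pad(np.ones_like(x), ((ph, ph), (pw, pw)))  # 1 = real pixel, 0 = hole
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            patch = xp[i:i + kh, j:j + kw]
            valid = mask[i:i + kh, j:j + kw].sum()        # non-padded pixels
            out[i, j] = (patch * kernel).sum() * (kh * kw) / valid
    return out

img = np.ones((5, 5))
k = np.ones((3, 3)) / 9.0
print(partial_conv2d(img, k))  # stays ~1.0 at borders; plain zero padding would dip
```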
id: 2404.01012
title: Query Performance Prediction using Relevance Judgments Generated by Large Language Models
Query performance prediction (QPP) aims to estimate the retrieval quality of a search system for a query without human relevance judgments. Previous QPP methods typically return a single scalar value and do not require the predicted values to approximate a specific information retrieval (IR) evaluation measure, leading to certain drawbacks: (i) a single scalar is insufficient to accurately represent different IR evaluation measures, especially when metrics do not highly correlate, and (ii) a single scalar limits the interpretability of QPP methods because solely using a scalar is insufficient to explain QPP results. To address these issues, we propose a QPP framework using automatically generated relevance judgments (QPP-GenRE), which decomposes QPP into independent subtasks of predicting the relevance of each item in a ranked list to a given query. This allows us to predict any IR evaluation measure using the generated relevance judgments as pseudo-labels. This also allows us to interpret predicted IR evaluation measures, and identify, track and rectify errors in generated relevance judgments to improve QPP quality. We predict an item's relevance by using open-source large language models (LLMs) to ensure scientific reproducibility. We face two main challenges: (i) excessive computational costs of judging an entire corpus for predicting a metric considering recall, and (ii) limited performance in prompting open-source LLMs in a zero-/few-shot manner. To solve the challenges, we devise an approximation strategy to predict an IR measure considering recall and propose to fine-tune open-source LLMs using human-labeled relevance judgments. Experiments on the TREC 2019-2022 deep learning tracks show that QPP-GenRE achieves state-of-the-art QPP quality for both lexical and neural rankers.
labels: cs.AI, cs.IR, cs.LG, cs.CL
__index_level_0__: 443,214

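A schematic of the decomposition idea: generate a per-item relevance judgment for the ranked list, then compute any IR measure from those pseudo-labels. The `judge_relevance` function below is a trivial heuristic stand-in for the fine-tuned open-source LLM; Precision@k is one simple measure derivable this way.

```python
def judge_relevance(query, doc):
    # Placeholder for an LLM judgment returning 0 or 1; here a word-overlap heuristic.
    return int(any(w in doc.lower() for w in query.lower().split()))

def predicted_precision_at_k(query, ranked_docs, k=3):
    # Any IR measure can be computed from the generated judgments as pseudo-labels.
    labels = [judge_relevance(query, d) for d in ranked_docs[:k]]
    return sum(labels) / k

ranking = ["UWB ranging for indoor localization",
           "Cooking pasta at altitude",
           "Indoor positioning with radio signals"]
print(predicted_precision_at_k("indoor localization", ranking))  # 0.67 (2 of top 3)
```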
id: 2501.17399
title: MultiChallenge: A Realistic Multi-Turn Conversation Evaluation Benchmark Challenging to Frontier LLMs
We present MultiChallenge, a pioneering benchmark for evaluating large language models (LLMs) on conducting multi-turn conversations with human users, a crucial yet underexamined capability for their applications. MultiChallenge identifies four categories of challenges in multi-turn conversations that are not only common and realistic in current human-LLM interactions, but are also challenging for all current frontier LLMs. All four challenges require accurate instruction-following, context allocation, and in-context reasoning at the same time. We also develop an LLM-as-judge method with instance-level rubrics to facilitate automatic evaluation with fair agreement with experienced human raters. Despite achieving near-perfect scores on existing multi-turn evaluation benchmarks, all frontier models have less than 50% accuracy on MultiChallenge, with the top-performing Claude 3.5 Sonnet (June 2024) achieving just a 41.4% average accuracy.
labels: cs.AI, cs.CL
__index_level_0__: 528,324

id: 2206.10211
title: FEAT: Fair Coordinated Iterative Water-Filling Algorithm
In this paper, we consider a perfectly coordinated water-filling game, where each user transmits solely on a given carrier. The main goal of the proposed algorithm (which we call FEAT) is to get close to the optimum while keeping a decent level of fairness. The key idea within FEAT is to minimize the ratio between the best and the worst utilities of the users. This is done by ensuring that, at each iteration (channel assignment), a user is satisfied with this assignment as long as they do not lose much more than the other users in the system. We show that FEAT outperforms most related algorithms in many respects, especially in interference-limited systems. Indeed, with FEAT we can ensure a near-optimal, fair and energy-efficient solution with low computational complexity. In terms of robustness, it turns out that the balance between being nearly globally optimal and good from an individual point of view seems hard to sustain with a significant number of users. Notably, in this regard, global optimality is less affected than individual optimality, which offers hope that such an accurate water-filling algorithm can be designed around competition in interference-limited systems.
labels: cs.IT, Other
__index_level_0__: 303,835

id: 2004.03554
title: Fine-Grained Named Entity Typing over Distantly Supervised Data Based on Refined Representations
Fine-Grained Named Entity Typing (FG-NET) is a key component in Natural Language Processing (NLP). It aims at classifying an entity mention into a wide range of entity types. Due to the large number of entity types, distant supervision is used to collect training data for this task, which noisily assigns type labels to entity mentions irrespective of the context. In order to alleviate the noisy labels, existing approaches to FG-NET analyze entity mentions entirely independently of each other and assign type labels solely based on the mention's sentence-specific context. This is inadequate for highly overlapping and noisy type labels as it hinders information passing across sentence boundaries. To address this, we propose an edge-weighted attentive graph convolution network that refines the noisy mention representations by attending over corpus-level contextual clues prior to the final classification. Experimental evaluation shows that the proposed model outperforms existing research by a relative score of up to 10.2% and 8.3% for macro-F1 and micro-F1, respectively.
labels: cs.CL
__index_level_0__: 171,606

id: 2502.02444
title: Generative Psycho-Lexical Approach for Constructing Value Systems in Large Language Models
Values are core drivers of individual and collective perception, cognition, and behavior. Value systems, such as Schwartz's Theory of Basic Human Values, delineate the hierarchy and interplay among these values, enabling cross-disciplinary investigations into decision-making and societal dynamics. Recently, the rise of Large Language Models (LLMs) has raised concerns regarding their elusive intrinsic values. Despite growing efforts in evaluating, understanding, and aligning LLM values, a psychologically grounded LLM value system remains underexplored. This study addresses the gap by introducing the Generative Psycho-Lexical Approach (GPLA), a scalable, adaptable, and theoretically informed method for constructing value systems. Leveraging GPLA, we propose a psychologically grounded five-factor value system tailored for LLMs. For systematic validation, we present three benchmarking tasks that integrate psychological principles with cutting-edge AI priorities. Our results reveal that the proposed value system meets standard psychological criteria, better captures LLM values, improves LLM safety prediction, and enhances LLM alignment, when compared to the canonical Schwartz's values.
labels: cs.AI, cs.CL
__index_level_0__: 530,318

id: 2406.19188
title: Averaging log-likelihoods in direct alignment
To better align Large Language Models (LLMs) with human judgment, Reinforcement Learning from Human Feedback (RLHF) learns a reward model and then optimizes it using regularized RL. Recently, direct alignment methods were introduced to learn such a fine-tuned model directly from a preference dataset without computing a proxy reward function. These methods are built upon contrastive losses involving the log-likelihood of (dis)preferred completions according to the trained model. However, completions have various lengths, and the log-likelihood is not length-invariant. By contrast, the cross-entropy loss used in supervised training is length-invariant, as batches are typically averaged token-wise. To reconcile these approaches, we introduce a principled approach for making direct alignment length-invariant. Formally, we introduce a new averaging operator, to be composed with the optimality operator giving the best policy for the underlying RL problem. It translates into averaging the log-likelihood within the loss. We empirically study the effect of such averaging, observing a trade-off between the length of generations and their scores.
labels: cs.LG
__index_level_0__: 468,327

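The proposed fix amounts to averaging token log-likelihoods before the contrastive loss. A toy numeric illustration is below, using a DPO-style logistic loss with reference-model terms omitted for brevity; the token log-probabilities are made-up stand-ins for model outputs.

```python
import math

def seq_loglik(token_logps, average=True):
    # Length-invariant (mean) vs length-sensitive (sum) sequence log-likelihood.
    total = sum(token_logps)
    return total / len(token_logps) if average else total

def preference_loss(logps_chosen, logps_rejected, beta=0.1, average=True):
    margin = seq_loglik(logps_chosen, average) - seq_loglik(logps_rejected, average)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))  # -log sigmoid

chosen   = [-0.2] * 10    # short completion
rejected = [-0.1] * 100   # long completion: summing makes it look far worse
print(preference_loss(chosen, rejected, average=False))  # sum: margin = -2 - (-10) = +8
print(preference_loss(chosen, rejected, average=True))   # mean: margin = -0.2 - (-0.1) = -0.1
```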
id: 2112.11491
title: Adversarial Neural Networks for Error Correcting Codes
Error correcting codes are a fundamental component in modern day communication systems, demanding extremely high throughput, ultra-reliability and low latency. Recent approaches using machine learning (ML) models as the decoders offer both improved performance and great adaptability to unknown environments, where traditional decoders struggle. We introduce a general framework to further boost the performance and applicability of ML models. We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words, and, hence, guides the decoding models to recover transmitted codewords. Our framework is game-theoretic, motivated by generative adversarial networks (GANs), with the decoder and discriminator competing in a zero-sum game. The decoder learns to simultaneously decode and generate codewords while the discriminator learns to tell the differences between decoded outputs and codewords. Thus, the decoder is able to decode noisy received signals into codewords, increasing the probability of successful decoding. We show a strong connection of our framework with the optimal maximum likelihood decoder by proving that this decoder defines a Nash equilibrium point of our game. Hence, training to equilibrium has a good possibility of achieving the optimal maximum likelihood performance. Moreover, our framework does not require training labels, which are typically unavailable during communications, and, thus, seemingly can be trained online and adapt to channel dynamics. To demonstrate the performance of our framework, we combine it with the very recent neural decoders and show improved performance compared to the original models and traditional decoding algorithms on various codes.
labels: cs.LG, cs.IT
__index_level_0__: 272,722

id: 2005.04619
title: A Unified Weight Learning and Low-Rank Regression Model for Robust Complex Error Modeling
One of the most important problems in regression-based error modeling is handling the complex representation error caused by various corruptions and environment changes in images. For example, in robust face recognition, images are often affected by varying types and levels of corruption, such as random pixel corruptions, block occlusions, or disguises. However, existing works are not robust enough to solve this problem because they cannot model the complex corrupted errors well. In this paper, we address this problem with a unified sparse weight learning and low-rank approximation regression model, which enables the random noises and contiguous occlusions in images to be treated simultaneously. For the random noise, we define a generalized correntropy (GC) function to match the error distribution. For the structured error caused by occlusions or disguises, we propose a GC-function-based rank approximation to measure the rank of error matrices. Since the proposed objective function is non-convex, an effective iterative optimization algorithm is developed to achieve the optimal weight learning and low-rank approximation. Extensive experimental results on three public face databases show that the proposed model can fit the error distribution and structure very well, thus obtaining better recognition accuracies than existing methods.
labels: cs.CV
__index_level_0__: 176,522

id: 2502.09787
title: TableTalk: Scaffolding Spreadsheet Development with a Language Agent
Despite its ubiquity in the workforce, spreadsheet programming remains challenging as programmers need both spreadsheet-specific knowledge (e.g., APIs to write formulas) and problem-solving skills to create complex spreadsheets. Large language models (LLMs) can help automate aspects of this process, and recent advances in planning and reasoning have enabled language agents, which dynamically plan, use tools, and take iterative actions to complete complex tasks. These agents observe, plan, and act, making them well-suited to scaffold spreadsheet programming by following expert processes. We present TableTalk, a language agent that helps programmers build spreadsheets conversationally. Its design reifies three design principles -- scaffolding, flexibility, and incrementality -- which we derived from two studies of seven programmers and 62 Excel templates. TableTalk structures spreadsheet development by generating step-by-step plans and suggesting three next steps users can choose from. It also integrates tools that enable incremental spreadsheet construction. A user study with 20 programmers shows that TableTalk produces spreadsheets 2.3 times more likely to be preferred over a baseline agent, while reducing cognitive load and time spent reasoning about spreadsheet actions by 12.6%. TableTalk's approach has implications for human-agent collaboration. This includes providing persistent direct manipulation interfaces for stopping or undoing agent actions, while ensuring that such interfaces for accepting actions can be deactivated.
labels: cs.HC, cs.AI, Other
__index_level_0__: 533,597

id: 2411.11562
title: MSSIDD: A Benchmark for Multi-Sensor Denoising
The cameras on mobile devices employ different sensors in different photographic modes, and the transferability of raw-domain denoising models between these sensors is important but remains insufficiently explored. Industrial solutions either develop distinct training strategies and models for different sensors, or ignore the differences between sensors and simply extend existing models to new sensors, which leads to tedious training or unsatisfactory performance. In this paper, we introduce a new benchmark, the Multi-Sensor SIDD (MSSIDD) dataset, the first raw-domain dataset designed to evaluate the sensor transferability of denoising models. The MSSIDD dataset consists of 60,000 raw images from six distinct sensors, derived by degrading sRGB images with different camera sensor parameters. Furthermore, we propose a sensor-consistency training framework that enables denoising models to learn sensor-invariant features, thereby facilitating the generalization of a single consistent model to unseen sensors. We evaluate previous methods on the newly proposed MSSIDD dataset, and the experimental results validate the effectiveness of our proposed method. Our dataset is available at https://www.kaggle.com/datasets/sjtuwh/mssidd.
labels: cs.CV
__index_level_0__: 509,101

id: 2006.07744
title: Exploiting the ConvLSTM: Human Action Recognition using Raw Depth Video-Based Recurrent Neural Networks
As in many other fields, deep learning has become the main approach in most computer vision applications, such as scene understanding, object recognition, human-computer interaction or human action recognition (HAR). Research efforts within HAR have mainly focused on how to efficiently extract and process both the spatial and temporal dependencies of video sequences. In this paper, we propose and compare two neural networks based on the convolutional long short-term memory unit, ConvLSTM, with differences in the architecture and the long-term learning strategy. The former uses a video-length-adaptive input data generator (stateless), whereas the latter explores the stateful ability of general recurrent neural networks, applied here to the particular case of HAR. This stateful property allows the model to accumulate discriminative patterns from previous frames without compromising computer memory. Experimental results on the large-scale NTU RGB+D dataset show that the proposed models achieve competitive recognition accuracies with lower computational cost than state-of-the-art methods and prove that, in the particular case of videos, the rarely used stateful mode of recurrent neural networks significantly improves the accuracy obtained with the standard mode. The recognition accuracies obtained are 75.26% (CS) and 75.45% (CV) for the stateless model, with an average time consumption per video of 0.21 s, and 80.43% (CS) and 79.91% (CV) with 0.89 s for the stateful version.
labels: cs.CV
__index_level_0__: 181,922

id: 2409.05243
title: Mamba-Enhanced Text-Audio-Video Alignment Network for Emotion Recognition in Conversations
Emotion Recognition in Conversations (ERC) is a vital area within multimodal interaction research, dedicated to accurately identifying and classifying the emotions expressed by speakers throughout a conversation. Traditional ERC approaches predominantly rely on unimodal cues, such as text, audio, or visual data, leading to limitations in their effectiveness. These methods encounter two significant challenges: 1) Consistency in multimodal information. Before integrating various modalities, it is crucial to ensure that the data from different sources is aligned and coherent. 2) Contextual information capture. Successfully fusing multimodal features requires a keen understanding of the evolving emotional tone, especially in lengthy dialogues where emotions may shift and develop over time. To address these limitations, we propose a novel Mamba-enhanced Text-Audio-Video alignment network (MaTAV) for the ERC task. MaTAV has the advantages of aligning unimodal features to ensure consistency across different modalities and handling long input sequences to better capture contextual multimodal information. Extensive experiments on the MELD and IEMOCAP datasets demonstrate that MaTAV significantly outperforms existing state-of-the-art methods on the ERC task by a large margin.
labels: cs.CV
__index_level_0__: 486,690

id: 2404.10218
title: Autonomous Implicit Indoor Scene Reconstruction with Frontier Exploration
Implicit neural representations have demonstrated significant promise for 3D scene reconstruction. Recent works have extended their applications to autonomous implicit reconstruction through Next Best View (NBV) based methods. However, the NBV method cannot guarantee complete scene coverage and often necessitates extensive viewpoint sampling, particularly in complex scenes. In this paper, we propose to 1) incorporate frontier-based exploration tasks for global coverage with implicit surface uncertainty-based reconstruction tasks to achieve high-quality reconstruction, and 2) introduce a method to estimate implicit surface uncertainty using color uncertainty, which reduces the time needed for view selection. Building on these two tasks, we propose an adaptive strategy for switching modes in view path planning, to reduce time and maintain superior reconstruction quality. Our method exhibits the highest reconstruction quality among all planning methods and superior planning efficiency among methods involving reconstruction tasks. We deploy our method on a UAV, and the results show that our method can plan multi-task views and reconstruct a scene with high quality.
labels: cs.AI, cs.RO
__index_level_0__: 447,001

id: 2307.07893
title: Anomaly Detection in Automated Fibre Placement: Learning with Data Limitations
Conventional defect detection systems in Automated Fibre Placement (AFP) typically rely on end-to-end supervised learning, necessitating a substantial number of labelled defective samples for effective training. However, the scarcity of such labelled data poses a challenge. To overcome this limitation, we present a comprehensive framework for defect detection and localization in Automated Fibre Placement. Our approach combines unsupervised deep learning and classical computer vision algorithms, eliminating the need for labelled data or manufacturing defect samples. It efficiently detects various surface issues while requiring fewer images of composite parts for training. Our framework employs an innovative sample extraction method leveraging AFP's inherent symmetry to expand the dataset. By inputting a depth map of the fibre layup surface, we extract local samples aligned with each composite strip (tow). These samples are processed through an autoencoder, trained on normal samples for precise reconstructions, highlighting anomalies through reconstruction errors. Aggregated values form an anomaly map for insightful visualization. The framework employs blob detection on this map to locate manufacturing defects. The experimental findings reveal that despite training the autoencoder with a limited number of images, our proposed method exhibits satisfactory detection accuracy and accurately identifies defect locations. Our framework demonstrates comparable performance to existing methods, while also offering the advantage of detecting all types of anomalies without relying on an extensive labelled dataset of defects.
labels: cs.AI, cs.LG, cs.CV
__index_level_0__: 379,582

id: 2409.19606
title: Hyper-Connections
We present hyper-connections, a simple yet effective method that can serve as an alternative to residual connections. This approach specifically addresses common drawbacks observed in residual connection variants, such as the seesaw effect between gradient vanishing and representation collapse. Theoretically, hyper-connections allow the network to adjust the strength of connections between features at different depths and dynamically rearrange layers. We conduct experiments focusing on the pre-training of large language models, including dense and sparse models, where hyper-connections show significant performance improvements over residual connections. Additional experiments conducted on vision tasks also demonstrate similar improvements. We anticipate that this method will be broadly applicable and beneficial across a wide range of AI problems.
labels: cs.LG, cs.CL, cs.CV, cs.NE
__index_level_0__: 492,765

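A hedged torch sketch of the connection-strength idea: instead of a fixed identity skip, learn scalar weights that mix a block's output with features from several earlier depths. This mirrors the stated mechanism ("adjust the strength of connections between features at different depths"), not the authors' full formulation; the module shape and softmax normalization are assumptions.

```python
import torch
import torch.nn as nn

class HyperConnectedBlock(nn.Module):
    def __init__(self, dim, n_streams):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
        # One learnable strength per incoming stream, plus one for the new output.
        self.weights = nn.Parameter(torch.ones(n_streams + 1) / (n_streams + 1))

    def forward(self, streams):                 # features from earlier depths
        out = self.f(streams[-1])
        mix = torch.stack(streams + [out])      # (n_streams + 1, batch, dim)
        w = torch.softmax(self.weights, dim=0)  # learned connection strengths
        return torch.einsum('s,sbd->bd', w, mix)

x = torch.randn(4, 32)
block = HyperConnectedBlock(32, n_streams=2)
y = block([x, x.clone()])                      # two earlier-depth streams
print(y.shape)                                 # torch.Size([4, 32])
```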
id: 2203.08852
title: Learning the Dynamics of Physical Systems from Sparse Observations with Finite Element Networks
We propose a new method for spatio-temporal forecasting on arbitrarily distributed points. Assuming that the observed system follows an unknown partial differential equation, we derive a continuous-time model for the dynamics of the data via the finite element method. The resulting graph neural network estimates the instantaneous effects of the unknown dynamics on each cell in a meshing of the spatial domain. Our model can incorporate prior knowledge via assumptions on the form of the unknown PDE, which induce a structural bias towards learning specific processes. Through this mechanism, we derive a transport variant of our model from the convection equation and show that it improves the transfer performance to higher-resolution meshes on sea surface temperature and gas flow forecasting against baseline models representing a selection of spatio-temporal forecasting methods. A qualitative analysis shows that our model disentangles the data dynamics into their constituent parts, which makes it uniquely interpretable.
labels: cs.LG, Other
__index_level_0__: 285,929

id: 2307.02047
title: CAME: Confidence-guided Adaptive Memory Efficient Optimization
Adaptive gradient methods, such as Adam and LAMB, have demonstrated excellent performance in the training of large language models. Nevertheless, the need for adaptivity requires maintaining second-moment estimates of the per-parameter gradients, which entails a high cost of extra memory overheads. To solve this problem, several memory-efficient optimizers (e.g., Adafactor) have been proposed to obtain a drastic reduction in auxiliary memory usage, but with a performance penalty. In this paper, we first study a confidence-guided strategy to reduce the instability of existing memory efficient optimizers. Based on this strategy, we propose CAME to simultaneously achieve two goals: fast convergence as in traditional adaptive methods, and low memory usage as in memory-efficient methods. Extensive experiments demonstrate the training stability and superior performance of CAME across various NLP tasks such as BERT and GPT-2 training. Notably, for BERT pre-training on the large batch size of 32,768, our proposed optimizer attains faster convergence and higher accuracy compared with the Adam optimizer. The implementation of CAME is publicly available.
labels: cs.CL
__index_level_0__: 377,567

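CAME builds on the factored second moments used by Adafactor-style optimizers; the sketch below shows only that factorization, which cuts per-matrix memory from O(n*m) to O(n+m). The confidence-guided correction that distinguishes CAME is omitted, so treat this as background, not the paper's update rule.

```python
import numpy as np

def factored_second_moment(row_stat, col_stat):
    # Rank-1 reconstruction: V_ij ~ row_i * col_j / sum(row), as in Adafactor.
    return np.outer(row_stat, col_stat) / row_stat.sum()

grad = np.random.randn(4, 6)
g2 = grad ** 2
row_stat, col_stat = g2.sum(axis=1), g2.sum(axis=0)  # O(n + m) storage
v_approx = factored_second_moment(row_stat, col_stat)
update = grad / (np.sqrt(v_approx) + 1e-8)           # RMS-normalized step
print(v_approx.shape, update.shape)                  # (4, 6) (4, 6)
```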
id: 1407.7083
title: Sampled-Data H-infinity Design of Coupling Wave Cancelers in Single-Frequency Full-Duplex Relay Stations
In this article, we propose a sampled-data H-infinity design of digital filters that cancel the continuous-time effect of coupling waves in a single-frequency full-duplex relay station. In this study, we model the relay station as a continuous-time system, whereas conventional studies treat it as a discrete-time system. For the continuous-time model, we propose digital feedforward and feedback cancelers based on sampled-data control theory that cancel coupling waves while taking intersample behavior into account. Simulation results illustrate the effectiveness of the proposed method.
labels: cs.IT, cs.SY
__index_level_0__: 34,907

id: 2312.06872
title: ELSA: Partial Weight Freezing for Overhead-Free Sparse Network Deployment
We present ELSA, a practical solution for creating deep networks that can easily be deployed at different levels of sparsity. The core idea is to embed one or more sparse networks within a single dense network as a proper subset of the weights. At prediction time, any sparse model can be extracted effortlessly simply by zeroing out weights according to a predefined mask. ELSA is simple, powerful and highly flexible. It can use essentially any existing technique for network sparsification and network training. In particular, it does not restrict the loss function, architecture or the optimization technique. Our experiments show that ELSA's advantage of flexible deployment comes with no, or only a negligible, reduction in prediction quality compared to the standard approach of training and storing multiple sparse networks independently.
labels: cs.LG
__index_level_0__: 414,705

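The deployment step is easy to sketch: a sparse model is extracted from the dense weights by elementwise masking, with no storage beyond the binary mask. The top-magnitude mask below is an illustrative choice; ELSA itself is agnostic to how the mask is produced.

```python
import numpy as np

dense_w = np.random.randn(8, 8)
sparsity = 0.75
threshold = np.quantile(np.abs(dense_w), sparsity)
mask = np.abs(dense_w) >= threshold        # keep the top 25% of weights

sparse_w = dense_w * mask                  # "extraction" = zeroing per mask
print(f"kept {mask.mean():.0%} of weights")  # ~25%
```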
id: 2306.07059
title: A Distribution Optimization Framework for Confidence Bounds of Risk Measures
We present a distribution optimization framework that significantly improves confidence bounds for various risk measures compared to previous methods. Our framework encompasses popular risk measures such as the entropic risk measure, conditional value at risk (CVaR), spectral risk measure, distortion risk measure, equivalent certainty, and rank-dependent expected utility, which are well established in the risk-sensitive decision-making literature. To achieve this, we introduce two estimation schemes based on concentration bounds derived from the empirical distribution, specifically using either the Wasserstein distance or the supremum distance. Unlike traditional approaches that add or subtract a confidence radius from the empirical risk measures, our proposed schemes evaluate a specific transformation of the empirical distribution based on the distance. Consequently, our confidence bounds consistently yield tighter results compared to previous methods. We further verify the efficacy of the proposed framework by providing a tighter problem-dependent regret bound for the CVaR bandit.
labels: cs.AI, cs.LG
__index_level_0__: 372,863

id: 2406.13411
title: Composite Concept Extraction through Backdooring
Learning composite concepts, such as "red car", from individual examples, like a white car representing the concept "car" and a red strawberry representing the concept "red", is inherently challenging. This paper introduces a novel method called Composite Concept Extractor (CoCE), which leverages techniques from traditional backdoor attacks to learn these composite concepts in a zero-shot setting, requiring only examples of individual concepts. By repurposing the trigger-based model-backdooring mechanism, we create a strategic distortion in the manifold of the target object (e.g., "car") induced by example objects with the target property (e.g., "red" from "red strawberry"), ensuring the distortion selectively affects the target objects with the target property. Contrastive learning is then employed to further refine this distortion, and a method is formulated for detecting objects that are influenced by the distortion. Extensive experiments with in-depth analysis across different datasets demonstrate the utility and applicability of our proposed approach.
labels: cs.LG, cs.CV
__index_level_0__: 465,841

id: 1801.04589
title: Deep Reinforcement Fuzzing
Fuzzing is the process of finding security vulnerabilities in input-processing code by repeatedly testing the code with modified inputs. In this paper, we formalize fuzzing as a reinforcement learning problem using the concept of Markov decision processes. This in turn allows us to apply state-of-the-art deep Q-learning algorithms that optimize rewards, which we define from runtime properties of the program under test. By observing the rewards caused by mutating with a specific set of actions performed on an initial program input, the fuzzing agent learns a policy that can next generate new higher-reward inputs. We have implemented this new approach, and preliminary empirical evidence shows that reinforcement fuzzing can outperform baseline random fuzzing.
labels: cs.AI, cs.CR
__index_level_0__: 88,310

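A toy sketch of the MDP framing: mutation operators as actions and a stand-in reward. Real rewards come from runtime instrumentation (e.g., new coverage), and the paper uses deep Q-learning over input states; here the state space is collapsed to a single state and the reward is a fabricated proxy, purely for illustration.

```python
import random

ACTIONS = ["flip_byte", "insert_byte", "delete_byte"]
Q, alpha, eps = {}, 0.5, 0.1

def mutate(data, action):
    i = random.randrange(max(len(data), 1))
    if action == "flip_byte" and data:
        return data[:i] + bytes([data[i] ^ 0xFF]) + data[i + 1:]
    if action == "insert_byte":
        return data[:i] + bytes([random.randrange(256)]) + data[i:]
    return data[:i] + data[i + 1:] if data else data   # delete_byte

def reward(data):
    # Stub: pretend byte diversity correlates with new program behavior.
    return len(set(data)) / 256.0

state, data = 0, b"seed"
for step in range(200):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q.get((state, x), 0.0))
    data = mutate(data, a)
    r = reward(data)
    # Bandit-style value update (no next-state term, single state).
    Q[(state, a)] = Q.get((state, a), 0.0) + alpha * (r - Q.get((state, a), 0.0))

print(max(Q, key=Q.get), f"{max(Q.values()):.3f}")  # best mutation action so far
```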
id: 2105.10362
title: Functionals in the Clouds: An abstract architecture of serverless Cloud-Native Apps
A Cloud-Native Application (CNApp), as a distributed system, is a collection of independent components (microservices) interacting via communication protocols. This gives rise to an abstract architecture of a CNApp as a dynamically re-configurable acyclic directed multigraph where vertices are microservices and edges are protocols. Generic mechanisms for such reconfigurations evidently correspond to higher-level functions (functionals). This also implies an internal abstract architecture of a microservice as a collection of event-triggered serverless functions (including functions implementing the protocols) that are dynamically composed into event-dependent data-flow graphs. Again, generic mechanisms for such compositions correspond to a calculus of functionals and relations.
labels: cs.CL, Other
__index_level_0__: 236,372

id: 2203.08746
title: CLUE-AI: A Convolutional Three-stream Anomaly Identification Framework for Robot Manipulation
Robot safety has been a prominent research topic in recent years as robots become more involved in daily tasks. It is crucial to devise the required safety mechanisms to enable service robots to be aware of, and react to, anomalies (i.e., unexpected deviations from intended outcomes) that arise during the execution of these tasks. Detection and identification of these anomalies is an essential step towards fulfilling these requirements. Although several architectures have been proposed for anomaly detection, identification is not yet thoroughly investigated. This task is challenging since indicators may appear long before anomalies are detected. In this paper, we propose a ConvoLUtional threE-stream Anomaly Identification (CLUE-AI) framework to address this problem. The framework fuses visual, auditory and proprioceptive data streams to identify everyday object manipulation anomalies. A stream of 2D images gathered through an RGB-D camera placed on the head of the robot is processed within a self-attention enabled visual stage to capture visual anomaly indicators. The auditory modality provided by the microphone placed on the robot's lower torso is processed within a designed convolutional neural network (CNN) in the auditory stage. Last, the force applied by the gripper and the gripper state are processed within a CNN to obtain proprioceptive features. These outputs are then combined with a late fusion scheme. Our novel three-stream framework design is analyzed on everyday object manipulation tasks with a Baxter humanoid robot in a semi-structured setting. The results indicate that the framework achieves an f-score of 94%, outperforming the other baselines in classifying anomalies that arise during runtime.
labels: cs.RO
__index_level_0__: 285,898

id: 1912.02315
title: 12-in-1: Multi-Task Vision and Language Representation Learning
Much of vision-and-language research focuses on a small but diverse set of independent tasks and supporting datasets often studied in isolation; however, the visually-grounded language understanding skills required for success at these tasks overlap significantly. In this work, we investigate these relationships between vision-and-language tasks by developing a large-scale, multi-task training regime. Our approach culminates in a single model on 12 datasets from four broad categories of task including visual question answering, caption-based image retrieval, grounding referring expressions, and multi-modal verification. Compared to independently trained single-task models, this represents a reduction from approximately 3 billion parameters to 270 million while simultaneously improving performance by 2.05 points on average across tasks. We use our multi-task framework to perform in-depth analysis of the effect of joint training diverse tasks. Further, we show that finetuning task-specific models from our single multi-task model can lead to further improvements, achieving performance at or above the state-of-the-art.
labels: cs.LG, cs.CL, cs.CV
__index_level_0__: 156,314

id: 2501.18190
title: Economic Rationality under Specialization: Evidence of Decision Bias in AI Agents
In the study by Chen et al. (2023) [01], the large language model GPT demonstrated economic rationality comparable to or exceeding the average human level in tasks such as budget allocation and risk preference. Building on this finding, this paper further incorporates specialized agents, such as biotechnology experts and economists, for a horizontal comparison to explore whether specialization can enhance or maintain economic rationality equivalent to that of GPT in similar decision-making scenarios. The results indicate that when agents invest more effort in specialized fields, their decision-making behavior is more prone to 'rationality shift,' specifically manifested as increased violations of GARP (Generalized Axiom of Revealed Preference), decreased CCEI (Critical Cost Efficiency Index), and more significant decision deviations under high-risk conditions. In contrast, GPT and more generalized basic agents maintain a more stable and consistent level of rationality across multiple tasks. This study reveals the inherent conflict between specialization and economic rationality, providing new insights for constructing AI decision-making systems that balance specialization and generalization across various scenarios.
labels: cs.AI
__index_level_0__: 528,609

id: 1703.10155
title: CVAE-GAN: Fine-Grained Image Generation through Asymmetric Training
We present variational generative adversarial networks, a general learning framework that combines a variational auto-encoder with a generative adversarial network, for synthesizing images in fine-grained categories, such as faces of a specific person or objects in a category. Our approach models an image as a composition of label and latent attributes in a probabilistic model. By varying the fine-grained category label fed into the resulting generative model, we can generate images in a specific category with randomly drawn values on a latent attribute vector. Our approach has two novel aspects. First, we adopt a cross entropy loss for the discriminative and classifier network, but a mean discrepancy objective for the generative network. This kind of asymmetric loss function makes the GAN training more stable. Second, we adopt an encoder network to learn the relationship between the latent space and the real image space, and use pairwise feature matching to keep the structure of generated images. We experiment with natural images of faces, flowers, and birds, and demonstrate that the proposed models are capable of generating realistic and diverse samples with fine-grained category labels. We further show that our models can be applied to other tasks, such as image inpainting, super-resolution, and data augmentation for training better face recognition models.
labels: cs.CV
__index_level_0__: 70,872

id: 2403.08185
title: Perceive With Confidence: Statistical Safety Assurances for Navigation with Learning-Based Perception
Rapid advances in perception have enabled large pre-trained models to be used out of the box for transforming high-dimensional, noisy, and partial observations of the world into rich occupancy representations. However, the reliability of these models and consequently their safe integration onto robots remains unknown when deployed in environments unseen during training. In this work, we address this challenge by rigorously quantifying the uncertainty of pre-trained perception systems for object detection via a novel calibration technique based on conformal prediction. Crucially, this procedure guarantees robustness to distribution shifts in states when perceptual outputs are used in conjunction with a planner. As a result, the calibrated perception system can be used in combination with any safe planner to provide an end-to-end statistical assurance on safety in unseen environments. We evaluate the resulting approach, Perceive with Confidence (PwC), in simulation and on hardware where a quadruped robot navigates through previously unseen indoor, static environments. These experiments validate the safety assurances for obstacle avoidance provided by PwC and demonstrate up to $40\%$ improvements in empirical safety compared to baselines.
labels: cs.RO, cs.SY
__index_level_0__: 437,214

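The calibration step can be sketched with split conformal prediction, the family of techniques the paper builds on: hold out calibration data, compute nonconformity scores, and pick the finite-sample-corrected quantile as a threshold. The scores below are synthetic stand-ins for perception outputs, and the obstacle-flagging semantics are an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(7)
cal_scores = rng.exponential(scale=1.0, size=500)   # nonconformity on calibration set
alpha = 0.1                                         # allowed miss rate
n = len(cal_scores)
level = np.ceil((n + 1) * (1 - alpha)) / n          # finite-sample correction
qhat = np.quantile(cal_scores, min(level, 1.0), method="higher")

new_score = 1.8
# Detections within the calibrated threshold are kept (occupancy inflated
# conservatively), giving a 1 - alpha coverage guarantee on calibration-like data.
print("flag as obstacle:", new_score <= qhat)
```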
id: 2411.19305
title: LD-EnSF: Synergizing Latent Dynamics with Ensemble Score Filters for Fast Data Assimilation with Sparse Observations
Data assimilation techniques are crucial for correcting the trajectory when modeling complex physical systems. A recently developed data assimilation method, Latent Ensemble Score Filter (Latent-EnSF), has shown great promise in addressing the key limitation of EnSF for highly sparse observations in high-dimensional and nonlinear data assimilation problems. It performs data assimilation in a latent space for encoded states and observations in every assimilation step, and requires costly full dynamics to be evolved in the original space. In this paper, we introduce Latent Dynamics EnSF (LD-EnSF), a novel methodology that completely avoids the full dynamics evolution and significantly accelerates the data assimilation process, which is especially valuable for complex dynamical problems that require fast data assimilation in real time. To accomplish this, we introduce a novel variant of Latent Dynamics Networks (LDNets) to effectively capture and preserve the system's dynamics within a very low-dimensional latent space. Additionally, we propose a new method for encoding sparse observations into the latent space using Long Short-Term Memory (LSTM) networks, which leverage not only the current step's observations, as in Latent-EnSF, but also all previous steps, thereby improving the accuracy and robustness of the observation encoding. We demonstrate the robustness, accuracy, and efficiency of the proposed method for two challenging dynamical systems with highly sparse (in both space and time) and noisy observations.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
512,197
2404.11180
Causal Deconfounding via Confounder Disentanglement for Dual-Target Cross-Domain Recommendation
In recent years, dual-target Cross-Domain Recommendation (CDR) has been proposed to capture comprehensive user preferences in order to ultimately enhance the recommendation accuracy in both data-richer and data-sparser domains simultaneously. However, in addition to users' true preferences, the user-item interactions might also be affected by confounders (e.g., free shipping, sales promotion). As a result, dual-target CDR has to meet two challenges: (1) how to effectively decouple observed confounders, including single-domain confounders and cross-domain confounders, and (2) how to preserve the positive effects of observed confounders on predicted interactions, while eliminating their negative effects on capturing comprehensive user preferences. To address the above two challenges, we propose a Causal Deconfounding framework via Confounder Disentanglement for dual-target Cross-Domain Recommendation, called CD2CDR. In CD2CDR, we first propose a confounder disentanglement module to effectively decouple observed single-domain and cross-domain confounders. We then propose a causal deconfounding module to preserve the positive effects of such observed confounders and eliminate their negative effects via backdoor adjustment, thereby enhancing the recommendation accuracy in each domain. Extensive experiments conducted on five real-world datasets demonstrate that CD2CDR significantly outperforms the state-of-the-art methods.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
447,413
2411.19888
FlowCLAS: Enhancing Normalizing Flow Via Contrastive Learning For Anomaly Segmentation
Anomaly segmentation is a valuable computer vision task for safety-critical applications that need to be aware of unexpected events. Current state-of-the-art (SOTA) scene-level anomaly segmentation approaches rely on diverse inlier class labels during training, limiting their ability to leverage vast unlabeled datasets and pre-trained vision encoders. These methods may underperform in domains with reduced color diversity and limited object classes. Conversely, existing unsupervised methods struggle with anomaly segmentation with the diverse scenes of less restricted domains. To address these challenges, we introduce FlowCLAS, a novel self-supervised framework that utilizes vision foundation models to extract rich features and employs a normalizing flow network to learn their density distribution. We enhance the model's discriminative power by incorporating Outlier Exposure and contrastive learning in the latent space. FlowCLAS significantly outperforms all existing methods on the ALLO anomaly segmentation benchmark for space robotics and demonstrates competitive results on multiple road anomaly segmentation benchmarks for autonomous driving, including Fishyscapes Lost&Found and Road Anomaly. These results highlight FlowCLAS's effectiveness in addressing the unique challenges of space anomaly segmentation while retaining SOTA performance in the autonomous driving domain without reliance on inlier segmentation labels.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
512,419
1811.10452
Context-Aware Crowd Counting
State-of-the-art methods for counting people in crowded scenes rely on deep networks to estimate crowd density. They typically use the same filters over the whole image or over large image patches. Only then do they estimate local scale to compensate for perspective distortion. This is typically achieved by training an auxiliary classifier to select, for predefined image patches, the best kernel size among a limited set of choices. As such, these methods are not end-to-end trainable and restricted in the scope of context they can leverage. In this paper, we introduce an end-to-end trainable deep architecture that combines features obtained using multiple receptive field sizes and learns the importance of each such feature at each image location. In other words, our approach adaptively encodes the scale of the contextual information required to accurately predict crowd density. This yields an algorithm that outperforms state-of-the-art crowd counting methods, especially when perspective effects are strong.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
114,495
2111.01768
Nearly Optimal Algorithms for Level Set Estimation
The level set estimation problem seeks to find all points in a domain ${\cal X}$ where the value of an unknown function $f:{\cal X}\rightarrow \mathbb{R}$ exceeds a threshold $\alpha$. The estimation is based on noisy function evaluations that may be acquired at sequentially and adaptively chosen locations in ${\cal X}$. The threshold value $\alpha$ can either be \emph{explicit} and provided a priori, or \emph{implicit} and defined relative to the optimal function value, i.e. $\alpha = (1-\epsilon)f(x_\ast)$ for a given $\epsilon > 0$ where $f(x_\ast)$ is the maximal function value and is unknown. In this work we provide a new approach to the level set estimation problem by relating it to recent adaptive experimental design methods for linear bandits in the Reproducing Kernel Hilbert Space (RKHS) setting. We assume that $f$ can be approximated by a function in the RKHS up to an unknown misspecification and provide novel algorithms for both the implicit and explicit cases in this setting with strong theoretical guarantees. Moreover, in the linear (kernel) setting, we show that our bounds are nearly optimal, namely, our upper bounds match existing lower bounds for threshold linear bandits. To our knowledge this work provides the first instance-dependent, non-asymptotic upper bounds on sample complexity of level-set estimation that match information theoretic lower bounds.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
264,661
0811.0952
Raptor Codes and Cryptographic Issues
In this paper, two cryptographic methods are introduced. In the first method, the presence of a subgroup of persons of a certain size can be checked before an action takes place. For this, we use fragments of Raptor codes delivered to the group members. In the other method, the selection of a subset of objects can be made secret; it can also be proven afterwards what the original selection was.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
2,640
1907.03809
Competing Models
Different agents need to make a prediction. They observe identical data, but have different models: they predict using different explanatory variables. We study which agent believes they have the best predictive ability -- as measured by the smallest subjective posterior mean squared prediction error -- and show how it depends on the sample size. With small samples, we present results suggesting it is an agent using a low-dimensional model. With large samples, it is generally an agent with a high-dimensional model, possibly including irrelevant variables, but never excluding relevant ones. We apply our results to characterize the winning model in an auction of productive assets, to argue that entrepreneurs and investors with simple models will be over-represented in new sectors, and to understand the proliferation of "factors" that explain the cross-sectional variation of expected stock returns in the asset-pricing literature.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
137,938
2210.11846
Redefining Counterfactual Explanations for Reinforcement Learning: Overview, Challenges and Opportunities
While AI algorithms have shown remarkable success in various fields, their lack of transparency hinders their application to real-life tasks. Although explanations targeted at non-experts are necessary for user trust and human-AI collaboration, the majority of explanation methods for AI are focused on developers and expert users. Counterfactual explanations are local explanations that offer users advice on what can be changed in the input for the output of the black-box model to change. Counterfactuals are user-friendly and provide actionable advice for achieving the desired output from the AI system. While extensively researched in supervised learning, there are few methods applying them to reinforcement learning (RL). In this work, we explore the reasons for the underrepresentation of this powerful explanation method in RL. We start by reviewing the current work on counterfactual explanations in supervised learning. Additionally, we explore the differences between counterfactual explanations in supervised learning and RL and identify the main challenges that prevent the adoption of methods from supervised learning in reinforcement learning. Finally, we redefine counterfactuals for RL and propose research directions for implementing counterfactuals in RL.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
325,483
2308.08930
Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection
By integrating complementary information from an RGB image and a depth map, the ability of salient object detection (SOD) to handle complex and challenging scenes can be improved. In recent years, the important role of Convolutional Neural Networks (CNNs) in feature extraction and cross-modality interaction has been fully explored, but they remain insufficient for modeling global long-range dependencies of self-modality and cross-modality. To this end, we introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement (PICR-Net). On the one hand, considering the prior correlation between the RGB modality and the depth modality, an attention-triggered cross-modality point-aware interaction (CmPI) module is designed to explore the feature interaction of different modalities with positional constraints. On the other hand, to alleviate the block effect and detail destruction problems naturally brought by the Transformer, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation. Extensive experiments on five RGB-D SOD datasets show that the proposed network achieves competitive results in both quantitative and qualitative comparisons.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
386,090
1108.4297
Why is language well-designed for communication? (Commentary on Christiansen and Chater: 'Language as shaped by the brain')
Selection through iterated learning explains why language is so well-designed for communicative efficiency no better than other non-functional accounts, such as universal grammar. It does not predict several distinctive features of language, such as central embedding, large lexicons, or the lack of iconicity, that seem to serve communication purposes at the expense of learnability.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
11,762
2201.10297
RadiOrchestra: Proactive Management of Millimeter-wave Self-backhauled Small Cells via Joint Optimization of Beamforming, User Association, Rate Selection, and Admission Control
Millimeter-wave self-backhauled small cells are a key component of next-generation wireless networks. Their dense deployment will increase data rates, reduce latency, and enable efficient data transport between the access and backhaul networks, providing greater flexibility not previously possible with optical fiber. Despite their high potential, operating dense self-backhauled networks optimally is an open challenge, particularly for radio resource management (RRM). This paper presents RadiOrchestra, a holistic RRM framework that models and optimizes beamforming, rate selection, as well as user association and admission control for self-backhauled networks. The framework is designed to account for practical challenges such as hardware limitations of base stations (e.g., computational capacity, discrete rates), the need for adaptability of backhaul links, and the presence of interference. Our framework is formulated as a nonconvex mixed-integer nonlinear program, which is challenging to solve. To approach this problem, we propose three algorithms that provide a trade-off between complexity and optimality. Furthermore, we derive upper and lower bounds to characterize the performance limits of the system. We evaluate the developed strategies in various scenarios, showing the feasibility of deploying practical self-backhauling in future networks.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
276,944
2412.16291
Benchmarking LLMs and SLMs for patient reported outcomes
LLMs have transformed the execution of numerous tasks, including those in the medical domain. Among these, summarizing patient-reported outcomes (PROs) into concise natural language reports is of particular interest to clinicians, as it enables them to focus on critical patient concerns and spend more time in meaningful discussions. While existing work with LLMs like GPT-4 has shown impressive results, real breakthroughs could arise from leveraging SLMs as they offer the advantage of being deployable locally, ensuring patient data privacy and compliance with healthcare regulations. This study benchmarks several SLMs against LLMs for summarizing patient-reported Q\&A forms in the context of radiotherapy. Using various metrics, we evaluate their precision and reliability. The findings highlight both the promise and limitations of SLMs for high-stakes medical tasks, fostering more efficient and privacy-preserving AI-driven healthcare solutions.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
519,459
2308.11148
LLaMA-Reviewer: Advancing Code Review Automation with Large Language Models through Parameter-Efficient Fine-Tuning
The automation of code review activities, a long-standing pursuit in software engineering, has been primarily addressed by numerous domain-specific pre-trained models. Despite their success, these models frequently demand extensive resources for pre-training from scratch. In contrast, Large Language Models (LLMs) provide an intriguing alternative, given their remarkable capabilities when supplemented with domain-specific knowledge. However, their potential for automating code review tasks remains largely unexplored. In response to this research gap, we present LLaMA-Reviewer, an innovative framework that leverages the capabilities of LLaMA, a popular LLM, in the realm of code review. Mindful of resource constraints, this framework employs parameter-efficient fine-tuning (PEFT) methods, delivering high performance while using less than 1% of trainable parameters. An extensive evaluation of LLaMA-Reviewer is conducted on two diverse, publicly available datasets. Notably, even with the smallest LLaMA base model consisting of 6.7B parameters and a limited number of tuning epochs, LLaMA-Reviewer equals the performance of existing code-review-focused models. The ablation experiments provide insights into the influence of various fine-tuning process components, including input representation, instruction tuning, and different PEFT methods. To foster continuous progress in this field, the code and all PEFT-weight plugins have been made open-source.
false
false
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
true
387,023
2104.02019
From Classical to Quantum: Uniform Continuity Bounds on Entropies in Infinite Dimensions
We prove a variety of new and refined uniform continuity bounds for entropies of both classical random variables on an infinite state space and of quantum states of infinite-dimensional systems. We obtain the first tight continuity estimate on the Shannon entropy of random variables with a countably infinite alphabet. The proof relies on a new mean-constrained Fano-type inequality and the notion of maximal coupling of random variables. We then employ this classical result to derive the first tight energy-constrained continuity bound for the von Neumann entropy of states of infinite-dimensional quantum systems, when the Hamiltonian is the number operator, which is arguably the most relevant Hamiltonian in the study of infinite-dimensional quantum systems in the context of quantum information theory. The above scheme works only for Shannon- and von Neumann entropies. Hence, to deal with more general entropies, e.g. $\alpha$-R\'enyi and $\alpha$-Tsallis entropies, with $\alpha \in (0,1)$, for which continuity bounds are known only for finite-dimensional systems, we develop a novel approximation scheme which relies on recent results on operator H\"older continuous functions and the equivalence of all Schatten norms in special spectral subspaces of the Hamiltonian. This approach is, as we show, motivated by continuity bounds for $\alpha$-R\'enyi and $\alpha$-Tsallis entropies of random variables that follow from the H\"older continuity of the entropy functionals. Bounds for $\alpha>1$ are provided, too. Finally, we settle an open problem on related approximation questions posed in the recent works by Shirokov on the so-called Finite-dimensional Approximation (FA) property.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
228,561
1801.05243
Rank Selection of CP-decomposed Convolutional Layers with Variational Bayesian Matrix Factorization
Convolutional Neural Networks (CNNs) are among the most successful methods in many areas, such as image classification. However, the amount of memory and computation needed for CNN inference prevents them from running efficiently on mobile devices, owing to their limited memory and computational ability. One method to compress CNNs is to compress the layers iteratively, i.e., by layer-by-layer compression and fine-tuning, with CP-decomposition in convolutional layers. To compress with CP-decomposition, rank selection is important. In the previous approach, where rank selection is based on the sensitivity of each layer, the average rank of the network was still selected arbitrarily. Additionally, the ranks of all layers were decided before the whole process of iterative compression, while the rank of a layer can change after fine-tuning. Therefore, this paper proposes selecting the rank of each layer using Variational Bayesian Matrix Factorization (VBMF), which is more systematic than an arbitrary approach. Furthermore, to account for the change in each layer's rank after fine-tuning in the previous iteration, the method is applied just before compressing the target layer, i.e., after the fine-tuning of the previous iteration. The results show better accuracy along with a higher compression rate when compressing AlexNet's convolutional layers.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
88,425
2008.06952
A Functional Perspective on Learning Symmetric Functions with Neural Networks
Symmetric functions, which take as input an unordered, fixed-size set, are known to be universally representable by neural networks that enforce permutation invariance. These architectures only give guarantees for fixed input sizes, yet in many practical applications, including point clouds and particle physics, a relevant notion of generalization should include varying the input size. In this work we treat symmetric functions (of any size) as functions over probability measures, and study the learning and representation of neural networks defined on measures. By focusing on shallow architectures, we establish approximation and generalization bounds under different choices of regularization (such as RKHS and variation norms), that capture a hierarchy of functional spaces with increasing degree of non-linear learning. The resulting models can be learned efficiently and enjoy generalization guarantees that extend across input sizes, as we verify empirically.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
191,944
2407.10590
Deep-Learning-Based Markerless Pose Estimation Systems in Gait Analysis: DeepLabCut Custom Training and the Refinement Function
The current gold standard for the study of human movement is the marker-based motion capture system, which offers high precision but is constrained by cost and controlled environments. Markerless pose estimation systems emerge as ecological alternatives, allowing unobtrusive data acquisition in natural settings. This study compares the performance of two popular markerless systems, OpenPose (OP) and DeepLabCut (DLC), in assessing locomotion. Forty healthy subjects walked along a 5-meter walkway equipped with four force platforms and a camera. Gait parameters were obtained using the OP BODY 25 Pre-Trained model (OPPT), the DLC Model Zoo full human Pre-Trained model (DLCPT), and the DLC Custom-Trained model (DLCCT), then compared with those acquired from the force platforms as the reference system. Our results demonstrate that DLCCT outperformed DLCPT and OPPT, highlighting the importance of leveraging DeepLabCut transfer learning to enhance pose estimation performance with custom-trained neural networks. Moreover, DLCCT, with the implementation of the DLC refinement function, offers the most promising markerless pose estimation solution for evaluating locomotion. Therefore, our data provide insights into the DLC training and refinement processes required to achieve optimal performance. This study offers perspectives for clinicians and practitioners seeking accurate low-cost methods for movement assessment beyond laboratory settings.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
473,043
2407.12211
On the Calibration of Epistemic Uncertainty: Principles, Paradoxes and Conflictual Loss
The calibration of predictive distributions has been widely studied in deep learning, but the same cannot be said about the more specific epistemic uncertainty as produced by Deep Ensembles, Bayesian Deep Networks, or Evidential Deep Networks. Although measurable, this form of uncertainty is difficult to calibrate on an objective basis as it depends on the prior, for which a variety of choices exist. Nevertheless, epistemic uncertainty must in all cases satisfy two formal requirements: first, it must decrease when the training dataset gets larger and, second, it must increase when the model expressiveness grows. Despite these expectations, our experimental study shows that on several reference datasets and models, measures of epistemic uncertainty violate these requirements, sometimes presenting trends completely opposite to those expected. These paradoxes between expectation and reality raise the question of the true utility of epistemic uncertainty as estimated by these models. A formal argument suggests that this disagreement is due to a poor approximation of the posterior distribution rather than to a flaw in the measure itself. Based on this observation, we propose a regularization function for deep ensembles, called the conflictual loss, in line with the above requirements. We emphasize its strengths by showing experimentally that it restores both requirements of epistemic uncertainty, without sacrificing either the performance or the calibration of the deep ensembles.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
473,812
2501.01924
Transformer-Driven Inverse Problem Transform for Fast Blind Hyperspectral Image Dehazing
Hyperspectral dehazing (HyDHZ) has become a crucial signal processing technology to facilitate the subsequent identification and classification tasks, as the airborne visible/infrared imaging spectrometer (AVIRIS) data portal reports a massive portion of haze-corrupted areas in typical hyperspectral remote sensing images. The idea of inverse problem transform (IPT) has been proposed in recent remote sensing literature in order to reformulate a hardly tractable inverse problem (e.g., HyDHZ) into a relatively simple one. Considering the emerging spectral super-resolution (SSR) technique, which spectrally upsamples multispectral data to hyperspectral data, we aim to solve the challenging HyDHZ problem by reformulating it as an SSR problem. Roughly speaking, the proposed algorithm first automatically selects some uncorrupted/informative spectral bands, from which SSR is applied to spectrally upsample the selected bands in the feature space, thereby obtaining a clean hyperspectral image (HSI). The clean HSI is then further refined by a deep transformer network to obtain the final dehazed HSI, where a global attention mechanism is designed to capture nonlocal information. There are very few HyDHZ works in existing literature, and this article introduces the powerful spatial-spectral transformer into HyDHZ for the first time. Remarkably, the proposed transformer-driven IPT-based HyDHZ (T2HyDHZ) is a blind algorithm without requiring the user to manually select the corrupted region. Extensive experiments demonstrate the superiority of T2HyDHZ with less color distortion.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
522,262
2303.05754
Decomposed Diffusion Sampler for Accelerating Large-Scale Inverse Problems
Krylov subspace, which is generated by multiplying a given vector by the matrix of a linear transformation and its successive powers, has been extensively studied in classical optimization literature to design algorithms that converge quickly for large linear inverse problems. For example, the conjugate gradient method (CG), one of the most popular Krylov subspace methods, is based on the idea of minimizing the residual error in the Krylov subspace. However, with the recent advancement of high-performance diffusion solvers for inverse problems, it is not clear how classical wisdom can be synergistically combined with modern diffusion models. In this study, we propose a novel and efficient diffusion sampling strategy that synergistically combines the diffusion sampling and Krylov subspace methods. Specifically, we prove that if the tangent space at a denoised sample by Tweedie's formula forms a Krylov subspace, then the CG initialized with the denoised data ensures the data consistency update to remain in the tangent space. This negates the need to compute the manifold-constrained gradient (MCG), leading to a more efficient diffusion sampling method. Our method is applicable regardless of the parametrization and setting (i.e., VE, VP). Notably, we achieve state-of-the-art reconstruction quality on challenging real-world medical inverse imaging problems, including multi-coil MRI reconstruction and 3D CT reconstruction. Moreover, our proposed method achieves more than 80 times faster inference time than the previous state-of-the-art method. Code is available at https://github.com/HJ-harry/DDS
false
false
false
false
true
false
true
false
false
false
false
true
false
false
false
false
false
false
350,587
2310.07786
Non-Stationary Contextual Bandit Learning via Neural Predictive Ensemble Sampling
Real-world applications of contextual bandits often exhibit non-stationarity due to seasonality, serendipity, and evolving social trends. While a number of non-stationary contextual bandit learning algorithms have been proposed in the literature, they either explore excessively due to a lack of prioritization of information with enduring value, or are designed in ways that do not scale to modern applications with high-dimensional user-specific features and large action sets, or both. In this paper, we introduce a novel non-stationary contextual bandit algorithm that addresses these concerns. It combines a scalable, deep-neural-network-based architecture with a carefully designed exploration mechanism that strategically prioritizes collecting information with the most lasting value in a non-stationary environment. Through empirical evaluations on two real-world recommendation datasets, which exhibit pronounced non-stationarity, we demonstrate that our approach significantly outperforms the state-of-the-art baselines.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
399,119
1906.11052
Further advantages of data augmentation on convolutional neural networks
Data augmentation is a popular technique largely used to enhance the training of convolutional neural networks. Although many of its benefits are well known by deep learning researchers and practitioners, its implicit regularization effects, as compared to popular explicit regularization techniques such as weight decay and dropout, remain largely unstudied. As a matter of fact, convolutional neural networks for image object classification are typically trained with both data augmentation and explicit regularization, assuming the benefits of all techniques are complementary. In this paper, we systematically analyze these techniques through ablation studies of different network architectures trained with different amounts of training data. Our results unveil a largely ignored advantage of data augmentation: networks trained with just data augmentation more easily adapt to different architectures and amounts of training data, as opposed to weight decay and dropout, which require specific fine-tuning of their hyperparameters.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
136,574
1910.08336
KerCNNs: biologically inspired lateral connections for classification of corrupted images
The state of the art in many computer vision tasks is represented by Convolutional Neural Networks (CNNs). Although their hierarchical organization and local feature extraction are inspired by the structure of primate visual systems, the lack of lateral connections in such architectures critically distinguishes their analysis from biological object processing. The idea of enriching CNNs with recurrent lateral connections of convolutional type has been put into practice in recent years, in the form of learned recurrent kernels with no geometrical constraints. In the present work, we introduce biologically plausible lateral kernels encoding a notion of correlation between the feedforward filters of a CNN: at each layer, the associated kernel acts as a transition kernel on the space of activations. The lateral kernels are defined in terms of the filters, thus providing a parameter-free approach to assess the geometry of horizontal connections based on the feedforward structure. We then test this new architecture, which we call KerCNN, on a generalization task related to global shape analysis and pattern completion: once trained for performing basic image classification, the network is evaluated on corrupted testing images. The image perturbations examined are designed to undermine the recognition of the images via local features, thus requiring an integration of context information - which in biological vision is critically linked to lateral connectivity. Our KerCNNs turn out to be far more stable than CNNs and recurrent CNNs to such degradations, thus validating this biologically inspired approach to reinforce object recognition under challenging conditions.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
149,848
2502.03877
Advanced Object Detection and Pose Estimation with Hybrid Task Cascade and High-Resolution Networks
In the field of computer vision, 6D object detection and pose estimation are critical for applications such as robotics, augmented reality, and autonomous driving. Traditional methods often struggle with achieving high accuracy in both object detection and precise pose estimation simultaneously. This study proposes an improved 6D object detection and pose estimation pipeline based on the existing 6D-VNet framework, enhanced by integrating a Hybrid Task Cascade (HTC) and a High-Resolution Network (HRNet) backbone. By leveraging the strengths of HTC's multi-stage refinement process and HRNet's ability to maintain high-resolution representations, our approach significantly improves detection accuracy and pose estimation precision. Furthermore, we introduce advanced post-processing techniques and a novel model integration strategy that collectively contribute to superior performance on public and private benchmarks. Our method demonstrates substantial improvements over state-of-the-art models, making it a valuable contribution to the domain of 6D object detection and pose estimation.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
530,897
2009.06408
A block-coupled Finite Volume methodology for problems of large strain and large displacement
A nonlinear block-coupled Finite Volume methodology is developed for the large displacement and large strain regime. The new methodology uses the same normal and tangential face derivative discretisations found in the original fully coupled cell-centred Finite Volume solution methodology for linear elasticity, meaning that existing block-coupled implementations may easily be extended to include finite strains. Details are given of the novel approach, including the use of the Newton-Raphson procedure on a residual functional defined using the linear momentum equation. A number of 2-D benchmark cases show that, compared with a segregated procedure, the new approach exhibits errors many orders of magnitude smaller and a much higher convergence rate.
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
false
195,635
2411.10081
Influence of Depth Camera Noise Models on Respiration Estimation
Depth cameras are an interesting modality for capturing vital signs such as respiratory rate. Plenty of approaches exist to extract vital signs in a controlled setting, but in order to apply them more flexibly, for example in multi-camera settings, a simulated environment is needed to generate enough data for training and testing of new algorithms. We show first results of a 3D-rendering simulation pipeline that focuses on different noise models in order to generate realistic, depth-camera-based respiratory signals, using both synthetic and real respiratory signals as a baseline. While most noise can be accurately modelled as Gaussian in this context, we show that as soon as the available image resolution is too low, the differences between the noise models surface.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
508,494
1408.5294
Distributed Time-Varying Stochastic Optimization and Utility-Based Communication
We devise a distributed asynchronous stochastic epsilon-gradient-based algorithm to enable a network of computing and communicating nodes to solve a constrained discrete-time time-varying stochastic convex optimization problem. Each node updates its own decision variable only once every discrete time step. Under some assumptions (among which, strong convexity, Lipschitz continuity of the gradient, persistent excitation), we prove the algorithm's asymptotic convergence in expectation to an error bound whose size is related to the constant stepsize choice alpha, the variability in time of the optimization problem, and to the accuracy epsilon. Moreover, the convergence rate is linear. Then, we show how to compute locally stochastic epsilon-gradients that depend also on the time-varying noise probability density function (PDF) of the neighboring nodes, without requiring the neighbors to send such PDFs at each time step. We devise utility-based policies to allow each node to decide whether to send or not the most up-to-date PDF, which guarantee a given user-specified error level epsilon in the computation of the stochastic epsilon-gradient. Numerical simulations display the added value of the proposed approach and its relevance for estimation and control of time-varying processes and networked systems.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
35,528
1506.01153
Distance estimation with efference copies and optical flow maneuvers: a stability-based strategy
The visual cue of optical flow plays a major role in the navigation of flying insects, and is increasingly studied for use by small flying robots as well. A major problem is that successful optical flow control seems to require distance estimates, while optical flow is known to provide only the ratio of velocity to distance. In this article, a novel, stability-based strategy is proposed to estimate distances with monocular optical flow and knowledge of the control inputs (efference copies). It is shown analytically that given a fixed control gain, the stability of a constant divergence control loop only depends on the distance to the approached surface. At close distances, the control loop first starts to exhibit self-induced oscillations, eventually leading to instability. The proposed stability-based strategy for estimating distances has two major attractive characteristics. First, self-induced oscillations are easy for the robot to detect and are hardly influenced by wind. Second, the distance can be estimated during a zero divergence maneuver, i.e., around hover. The stability-based strategy is implemented and tested both in simulation and with a Parrot AR drone 2.0. It is shown that it can be used to: (1) trigger a final approach response during a constant divergence landing with fixed gain, (2) estimate the distance in hover, and (3) estimate distances during an entire landing if the robot uses adaptive gain control to continuously stay on the 'edge of oscillation'.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
43,769
2306.14875
A Fully Unsupervised Instance Segmentation Technique for White Blood Cell Images
White blood cells, also known as leukocytes, are a group of heterogeneously nucleated cells which act as salient immune system cells. They originate in the bone marrow and are found in blood, plasma, and lymph tissues. Leukocytes kill bacteria, viruses, and other kinds of pathogens that invade the human body through phagocytosis, which in turn confers immunity. A white blood cell count can reveal camouflaged infections and warn doctors about chronic medical conditions such as autoimmune diseases, immune deficiencies, and blood disorders. Segmentation plays an important role in the identification of white blood cells (WBCs) in microscopic image analysis. The goal of segmentation in a microscopic image is to divide the image into distinct regions. In this paper, we propose a novel instance segmentation method for segmenting WBCs, containing both the nucleus and the cytoplasm, from bone marrow images.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
375,843
2305.15956
Anomaly Detection with Conditioned Denoising Diffusion Models
Traditional reconstruction-based methods have struggled to achieve competitive performance in anomaly detection. In this paper, we introduce Denoising Diffusion Anomaly Detection (DDAD), a novel denoising process for image reconstruction conditioned on a target image. This ensures a coherent restoration that closely resembles the target image. Our anomaly detection framework employs the conditioning mechanism, where the target image is set as the input image to guide the denoising process, leading to a defectless reconstruction while maintaining nominal patterns. Anomalies are then localised via a pixel-wise and feature-wise comparison of the input and reconstructed image. Finally, to enhance the effectiveness of the feature-wise comparison, we introduce a domain adaptation method that utilises nearly identical generated examples from our conditioned denoising process to fine-tune the pretrained feature extractor. The veracity of DDAD is demonstrated on various datasets including MVTec and VisA benchmarks, achieving state-of-the-art results of \(99.8 \%\) and \(98.9 \%\) image-level AUROC respectively.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
367,868
2310.06275
High-Fidelity 3D Head Avatars Reconstruction through Spatially-Varying Expression Conditioned Neural Radiance Field
One crucial aspect of 3D head avatar reconstruction lies in the details of facial expressions. Although recent NeRF-based photo-realistic 3D head avatar methods achieve high-quality avatar rendering, they still encounter challenges retaining intricate facial expression details because they overlook the potential of specific expression variations at different spatial positions when conditioning the radiance field. Motivated by this observation, we introduce a novel Spatially-Varying Expression (SVE) conditioning. The SVE can be obtained by a simple MLP-based generation network, encompassing both spatial positional features and global expression information. Benefiting from rich and diverse information of the SVE at different positions, the proposed SVE-conditioned neural radiance field can deal with intricate facial expressions and achieve realistic rendering and geometry details of high-fidelity 3D head avatars. Additionally, to further elevate the geometric and rendering quality, we introduce a new coarse-to-fine training strategy, including a geometry initialization strategy at the coarse stage and an adaptive importance sampling strategy at the fine stage. Extensive experiments indicate that our method outperforms other state-of-the-art (SOTA) methods in rendering and geometry quality on mobile phone-collected and public datasets.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
398,504
2206.07798
Gaussian Blue Noise
Among the various approaches for producing point distributions with blue noise spectrum, we argue for an optimization framework using Gaussian kernels. We show that with a wise selection of optimization parameters, this approach attains unprecedented quality, provably surpassing the current state of the art attained by the optimal transport (BNOT) approach. Further, we show that our algorithm scales smoothly and feasibly to high dimensions while maintaining the same quality, realizing unprecedented high-quality high-dimensional blue noise sets. Finally, we show an extension to adaptive sampling.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
true
302,888
1711.02123
Consistency of Maximum Likelihood for Continuous-Space Network Models I
A very popular class of models for networks posits that each node is represented by a point in a continuous latent space, and that the probability of an edge between nodes is a decreasing function of the distance between them in this latent space. We study the embedding problem for these models, of recovering the latent positions from the observed graph. Assuming certain natural symmetry and smoothness properties, we establish the uniform convergence of the log-likelihood of latent positions as the number of nodes grows. A consequence is that the maximum likelihood embedding converges on the true positions in a certain information-theoretic sense. Extensions of these results, to recovering distributions in the latent space, and so distributions over arbitrarily large graphs, will be treated in the sequel.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
84,005
1712.00376
Polar Coding for the Large Hadron Collider: Challenges in Code Concatenation
In this work, we present a concatenated repetition-polar coding scheme that is aimed at applications requiring highly unbalanced unequal bit-error protection, such as the Beam Interlock System of the Large Hadron Collider at CERN. Even though this concatenation scheme is simple, it reveals significant challenges that may be encountered when designing a concatenated scheme that uses a polar code as an inner code, such as error correlation and unusual decision log-likelihood ratio distributions. We explain and analyze these challenges and we propose two ways to overcome them.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
85,886
1606.02718
Learning Thermodynamics with Boltzmann Machines
A Boltzmann machine is a stochastic neural network that has been extensively used in the layers of deep architectures for modern machine learning applications. In this paper, we develop a Boltzmann machine that is capable of modelling thermodynamic observables for physical systems in thermal equilibrium. Through unsupervised learning, we train the Boltzmann machine on data sets constructed with spin configurations importance-sampled from the partition function of an Ising Hamiltonian at different temperatures using Monte Carlo (MC) methods. The trained Boltzmann machine is then used to generate spin states, for which we compare thermodynamic observables to those computed by direct MC sampling. We demonstrate that the Boltzmann machine can faithfully reproduce the observables of the physical system. Further, we observe that the number of neurons required to obtain accurate results increases as the system is brought close to criticality.
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
57,006
2206.10727
Toward Ethical Robotic Behavior in Human-Robot Interaction Scenarios
This paper describes current progress on developing an ethical architecture for robots that are designed to follow human ethical decision-making processes. We surveyed both regular adults (folks) and ethics experts (experts) on what they consider to be ethical behavior in two specific scenarios: pill-sorting with an older adult and game playing with a child. A key goal of the surveys is to better understand human ethical decision-making. In the first survey, folk responses were based on the subject's ethical choices ("folk morality"); in the second survey, expert responses were based on the expert's application of different formal ethical frameworks to each scenario. We observed that most of the formal ethical frameworks we included in the survey (Utilitarianism, Kantian Ethics, Ethics of Care and Virtue Ethics) and "folk morality" were conservative toward deception in the high-risk task with an older adult when both the adult and the child had significant performance deficiencies.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
304,012
2205.11602
Seeded Hierarchical Clustering for Expert-Crafted Taxonomies
Practitioners from many disciplines (e.g., political science) use expert-crafted taxonomies to make sense of large, unlabeled corpora. In this work, we study Seeded Hierarchical Clustering (SHC): the task of automatically fitting unlabeled data to such taxonomies using only a small set of labeled examples. We propose HierSeed, a novel weakly supervised algorithm for this task that uses only a small set of labeled seed examples. It is both data and computationally efficient. HierSeed assigns documents to topics by weighing document density against topic hierarchical structure. It outperforms both unsupervised and supervised baselines for the SHC task on three real-world datasets.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
298,200
2411.05900
Enhancing Cardiovascular Disease Prediction through Multi-Modal Self-Supervised Learning
Accurate prediction of cardiovascular diseases remains imperative for early diagnosis and intervention, necessitating robust and precise predictive models. Recently, there has been a growing interest in multi-modal learning for uncovering novel insights not available through uni-modal datasets alone. By combining cardiac magnetic resonance images, electrocardiogram signals, and available medical information, our approach captures a holistic picture of an individual's cardiovascular health by leveraging shared information across modalities. Integrating information from multiple modalities and benefiting from self-supervised learning techniques, our model provides a comprehensive framework for enhancing cardiovascular disease prediction with limited annotated datasets. We employ a masked autoencoder to pre-train the electrocardiogram (ECG) encoder, enabling it to extract relevant features from raw electrocardiogram data, and an image encoder to extract relevant features from cardiac magnetic resonance images. Subsequently, we utilize a multi-modal contrastive learning objective to transfer knowledge from an expensive and complex modality, cardiac magnetic resonance imaging, to cheap and simple modalities such as electrocardiograms and medical information. Finally, we fine-tune the pre-trained encoders on specific predictive tasks, such as myocardial infarction. Our proposed method enhances the image information by leveraging the different available modalities and outperforms the supervised approach by 7.6% in balanced accuracy.
false
false
false
false
false
false
true
false
false
false
false
true
false
false
false
false
false
false
506,884
cs/9311102
Software Agents: Completing Patterns and Constructing User Interfaces
To support the goal of allowing users to record and retrieve information, this paper describes an interactive note-taking system for pen-based computers with two distinctive features. First, it actively predicts what the user is going to write. Second, it automatically constructs a custom, button-box user interface on request. The system is an example of a learning-apprentice software agent. A machine learning component characterizes the syntax and semantics of the user's information. A performance system uses this learned information to generate completion strings and construct a user interface. Description of Online Appendix: People like to record information. Doing this on paper is initially efficient, but lacks flexibility. Recording information on a computer is less efficient but more powerful. In our new note-taking software, the user records information directly on a computer. Behind the interface, an agent acts for the user. To help, it provides defaults and constructs a custom user interface. The demonstration is a QuickTime movie of the note-taking agent in action. The file is a binhexed self-extracting archive. Macintosh utilities for binhex are available from mac.archive.umich.edu. QuickTime is available from ftp.apple.com in the dts/mac/sys.soft/quicktime.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
540,284
1703.01053
Skin Lesion Classification using Class Activation Map
We propose a two-stage framework with only one network to analyze skin lesion images. We first train a convolutional network to classify these images, and crop the important regions where the network has its maximum activation value. In the second stage, we retrain this CNN with the image regions extracted from stage one and output the final probabilities. The two-stage framework achieved a mean AUC of 0.857 on the ISIC-2017 skin lesion validation set, which is 0.04 higher than that of the original inputs (0.821).
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
69,287
2412.08548
Bilevel Joint Unsupervised and Supervised Training for Automatic Speech Recognition
In this paper, we propose a bilevel joint unsupervised and supervised training (BL-JUST) framework for automatic speech recognition. Compared to the conventional pre-training and fine-tuning strategy which is a disconnected two-stage process, BL-JUST tries to optimize an acoustic model such that it simultaneously minimizes both the unsupervised and supervised loss functions. Because BL-JUST seeks matched local optima of both loss functions, acoustic representations learned by the acoustic model strike a good balance between being generic and task-specific. We solve the BL-JUST problem using penalty-based bilevel gradient descent and evaluate the trained deep neural network acoustic models on various datasets with a variety of architectures and loss functions. We show that BL-JUST can outperform the widely-used pre-training and fine-tuning strategy and some other popular semi-supervised techniques.
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
516,136
1506.05574
Information Diffusion issues
This report discusses information diffusion: what it is, its key characteristics, and several other aspects of these kinds of networks. The focus is on peer-to-peer models of information diffusion, with discussions of the epidemic model, OSNs, and other details related to information diffusion.
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
false
44,316
1907.11917
Triangulation: Why Optimize?
For decades, it has been widely accepted that the gold standard for two-view triangulation is to minimize the cost based on reprojection errors. In this work, we challenge this idea. We propose a novel alternative to the classic midpoint method that leads to significantly lower 2D errors and parallax errors. It provides a numerically stable closed-form solution based solely on a pair of backprojected rays. Since our solution is rotationally invariant, it can also be applied for fisheye and omnidirectional cameras. We show that for small parallax angles, our method outperforms the state-of-the-art in terms of combined 2D, 3D and parallax accuracy, while achieving comparable speed.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
139,986
1507.00921
Coherent 100G Nonlinear Compensation with Single-Step Digital Backpropagation
Enhanced-SSFM digital backpropagation (DBP) is experimentally demonstrated and compared to conventional DBP. A 112 Gb/s PM-QPSK signal is transmitted over a 3200 km dispersion-unmanaged link. The intradyne coherent receiver includes single-step digital backpropagation based on the enhanced-SSFM algorithm. In comparison, conventional DBP requires twenty steps to achieve the same performance. An analysis of the computational complexity and structure of the two algorithms reveals that the overall complexity and power consumption of DBP are reduced by a factor of 16 with respect to a conventional implementation, while the computation time is reduced by a factor of 20. As a result, the proposed algorithm enables a practical and effective implementation of DBP in real-time optical receivers, with only a moderate increase of the computational complexity, power consumption, and latency with respect to a simple feed-forward equalizer for dispersion compensation.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
true
44,799
2106.02250
A General Method for Event Detection on Social Media
Event detection on social media has attracted a considerable amount of research, given the recent availability of large volumes of social media discussions. Previous works on social media event detection either assume a specific type of event, or assume certain behavior of observed variables. In this paper, we propose a general method for event detection on social media that makes few assumptions. The main assumption we make is that when an event occurs, affected semantic aspects will behave differently from their usual behavior. We generalize the representation of time units based on word embeddings of social media text, and propose an algorithm to detect events in time series in a general sense. In the experimental evaluation, we use a novel setting to test whether our method and baseline methods can exhaustively catch all real-world news in the test period. The evaluation results show that when an event is quite unusual with regard to the base social media discussion, it can be captured more effectively with our method. Our method can be easily implemented and can be treated as a starting point for more specific applications.
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
238,791
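The detection idea in 2106.02250 — time units represented by text embeddings, with events flagged as departures from usual behavior — can be illustrated with a generic drift detector. This sketch is not the paper's algorithm; the window size, threshold rule, and synthetic data are all assumptions made for illustration.

```python
import numpy as np

def detect_events(unit_vecs, window=24, k=3.0):
    """Flag time units whose embedding deviates from recent behavior.
    unit_vecs: (T, d) array, one aggregate text embedding per time unit.
    A unit is flagged when its distance to the trailing-window mean
    exceeds k trailing standard deviations of that distance."""
    flags = []
    for t in range(window, len(unit_vecs)):
        hist = unit_vecs[t - window:t]
        mu = hist.mean(axis=0)
        dists = np.linalg.norm(hist - mu, axis=1)
        d_t = np.linalg.norm(unit_vecs[t] - mu)
        if d_t > dists.mean() + k * dists.std():
            flags.append(t)
    return flags

# Usage with synthetic data: inject an unusual semantic shift at t=80
rng = np.random.default_rng(0)
vecs = rng.normal(0.0, 1.0, size=(120, 50))
vecs[80] += 5.0
print(detect_events(vecs))  # typically flags [80]
```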
2305.07709
Using Language Models to Detect Alarming Student Responses
This article details the advances made to a system that uses artificial intelligence to identify alarming student responses. This system is built into our assessment platform to assess whether a student's response indicates they are a threat to themselves or others. Such responses may include details concerning threats of violence, severe depression, suicide risks, and descriptions of abuse. Driven by advances in natural language processing, the latest model is a fine-tuned language model trained on a large corpus consisting of student responses and supplementary texts. We demonstrate that the use of a language model delivers a substantial improvement in accuracy over the previous iterations of this system.
false
false
false
false
false
true
true
false
true
false
false
false
false
false
false
false
false
false
363,994
1911.08148
Denial of Service Attacks on Control Systems with Packet Loss
The performance of control systems with packet loss resulting from an attack over the actuation communication channel is analysed. The operator is assumed to monitor the state of the channel by measuring the average number of packet losses, and an attack detection criterion is established based on this statistic. The performance of the attacker is measured in terms of the increase of the linear quadratic cost function of the operator subject to a given detection constraint. Within that setting, the optimal denial of service (DoS) attack strategy is formulated for UDP-like and TCP-like communication protocols. For both communication protocols, DoS attack constructions that are independent and identically distributed (IID) are compared to those that are non-stationary. The main contributions of this paper are (i) explicit characterisation of the expected cost increase of the optimal attack constructions and the associated packet loss parameter for the IID case, and (ii) proof, by example, that non-stationary random attacks outperform IID attacks in the presence of detection constraints.
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
154,110
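A quick way to see the effect studied in 1911.08148 is to simulate an IID packet-loss attack on a scalar control loop and watch the LQ cost grow with the drop probability. This is an illustrative Monte-Carlo sketch, not the paper's optimal attack construction; the plant, gain, and noise levels are assumptions.

```python
import numpy as np

def lq_cost_under_drops(p_drop, a=1.1, b=1.0, q=1.0, r=0.1,
                        horizon=10_000, seed=0):
    """Monte-Carlo LQ cost of the scalar loop x+ = a*x + b*u*nu + w,
    where nu ~ Bernoulli(1 - p_drop) models DoS-induced packet loss on
    the actuation channel (UDP-like: the controller is unaware of drops)."""
    rng = np.random.default_rng(seed)
    gain = a / b          # deadbeat gain designed for the loss-free channel
    x, cost = 0.0, 0.0
    for _ in range(horizon):
        u = -gain * x
        nu = float(rng.random() > p_drop)   # 1.0 = packet delivered
        cost += q * x**2 + r * u**2
        x = a * x + b * u * nu + rng.normal(0.0, 0.1)
    return cost / horizon

for p in (0.0, 0.2, 0.4):
    print(p, lq_cost_under_drops(p))  # average cost grows with drop rate
```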
2502.10230
ProReco: A Process Discovery Recommender System
Process discovery aims to automatically derive process models from historical execution data (event logs). While various process discovery algorithms have been proposed in the last 25 years, there is no consensus on a dominating discovery algorithm. Selecting the most suitable discovery algorithm remains a challenge due to competing quality measures and diverse user requirements. Manually selecting the most suitable process discovery algorithm from a range of options for a given event log is a time-consuming and error-prone task. This paper introduces ProReco, a Process discovery Recommender system designed to recommend the most appropriate algorithm based on user preferences and event log characteristics. ProReco incorporates state-of-the-art discovery algorithms, extends the feature pools from previous work, and utilizes eXplainable AI (XAI) techniques to provide explanations for its recommendations.
false
false
false
false
false
true
true
false
false
false
false
false
false
false
false
false
false
false
533,788
2501.08925
Disentangling Exploration of Large Language Models by Optimal Exploitation
Exploration is a crucial skill for self-improvement and open-ended problem-solving. However, it remains unclear whether large language models can effectively explore the state-space of an unknown environment. This work isolates exploration as the sole objective, tasking the agent with delivering information that enhances future returns. Within this framework, we argue that measuring agent returns is not sufficient for a fair evaluation, and we decompose missing rewards into exploration and exploitation components based on the optimal achievable return. Comprehensive experiments with various models reveal that most struggle to sufficiently explore the state-space, and that weak exploration is insufficient. We observe a positive correlation between parameter count and exploration performance, with larger models demonstrating superior capabilities. Furthermore, we show that our decomposition provides insights into differences in behaviors driven by prompt engineering, offering a valuable tool for refining performance in exploratory tasks.
false
false
false
false
true
false
true
false
true
false
false
false
false
false
false
false
false
false
524,946
2106.14391
Joint Channel Estimation and Signal Recovery for RIS-Empowered Multi-User Communications
Reconfigurable intelligent surfaces (RISs) have recently been considered a promising candidate for energy-efficient solutions in future wireless networks. Their dynamic and low-power configuration enables coverage extension, massive connectivity, and low-latency communications. Due to the large number of unknown variables associated with the RIS unit elements and the transmitted signals, channel estimation and signal recovery in RIS-based systems are among the most critical technical challenges. To address this problem, this paper focuses on the RIS-assisted wireless communication system and presents two joint channel estimation and signal recovery schemes based on message passing algorithms. Specifically, the proposed bidirectional scheme applies the Taylor series expansion and Gaussian approximation to simplify the sum-product procedure in the formulated problem. In addition, an inner iteration that adopts two variants of approximate message passing algorithms is incorporated to ensure robustness and convergence. Two ambiguity-removal methods are also discussed. Our simulation results show that the proposed schemes outperform the state-of-the-art benchmark method. We also provide insights into the impact of different RIS parameter settings on the proposed schemes.
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
243,397
cs/0207045
Compilation of Propositional Weighted Bases
In this paper, we investigate the extent to which knowledge compilation can be used to improve inference from propositional weighted bases. We present a general notion of compilation of a weighted base that is parametrized by any equivalence-preserving compilation function. Both negative and positive results are presented. On the one hand, complexity results are identified, showing that the inference problem from a compiled weighted base is as difficult as in the general case, when the prime implicates, Horn cover or renamable Horn cover classes are targeted. On the other hand, we show that the inference problem becomes tractable whenever DNNF-compilations are used and clausal queries are considered. Moreover, we show that the set of all preferred models of a DNNF-compilation of a weighted base can be computed in time polynomial in the output size. Finally, we sketch how our results can be used in model-based diagnosis in order to compute the most probable diagnoses of a system.
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
false
false
false
537,651
1803.02122
Smartphone-based Home Robotics
Humanoid robotics is a promising field because of the strong human preference for interacting with anthropomorphic interfaces. Despite this, humanoid robots are far from reaching mainstream adoption, and the features available in such robots seem to lag those of the latest smartphones. A fragmented robot ecosystem and low incentives for developers do not help to foster the creation of Robot-Apps either. In contrast, smartphones enjoy high adoption rates and a vibrant app ecosystem (4M apps published). Given this, it seems logical to apply the mobile SW and HW development model to humanoid robots. One way is to use a smartphone to power the robot. Smartphones have been embedded in toys and drones before. However, they have never been used as the main compute unit in a humanoid embodiment. Here, we introduce a novel robot architecture based on smartphones that demonstrates a 3x cost reduction and that is compatible with iOS/Android.
true
false
false
false
false
false
false
true
false
false
false
false
false
true
false
false
false
false
91,999
2009.14259
Visually-Grounded Planning without Vision: Language Models Infer Detailed Plans from High-level Instructions
The recently proposed ALFRED challenge task aims for a virtual robotic agent to complete complex multi-step everyday tasks in a virtual home environment from high-level natural language directives, such as "put a hot piece of bread on a plate". Currently, the best-performing models are able to complete less than 5% of these tasks successfully. In this work we focus on modeling the translation problem of converting natural language directives into detailed multi-step sequences of actions that accomplish those goals in the virtual environment. We empirically demonstrate that it is possible to generate gold multi-step plans from language directives alone without any visual input in 26% of unseen cases. When a small amount of visual information is incorporated, namely the starting location in the virtual environment, our best-performing GPT-2 model successfully generates gold command sequences in 58% of cases. Our results suggest that contextualized language models may provide strong visual semantic planning modules for grounded virtual agents.
false
false
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
197,975
1803.04464
False Discovery Rate Control via Debiased Lasso
We consider the problem of variable selection in high-dimensional statistical models where the goal is to report a set of variables, out of many predictors $X_1, \dotsc, X_p$, that are relevant to a response of interest. For the linear high-dimensional model, where the number of parameters exceeds the number of samples $(p>n)$, we propose a procedure for variable selection and prove that it controls the "directional" false discovery rate (FDR) below a pre-assigned significance level $q\in [0,1]$. We further analyze the statistical power of our framework and show that for designs with subgaussian rows and a common precision matrix $\Omega\in\mathbb{R}^{p\times p}$, if the minimum nonzero parameter $\theta_{\min}$ satisfies $$\sqrt{n} \theta_{\min} - \sigma \sqrt{2(\max_{i\in [p]}\Omega_{ii})\log\left(\frac{2p}{qs_0}\right)} \to \infty\,,$$ then this procedure achieves asymptotic power one. Our framework is built upon the debiasing approach and assumes the standard condition $s_0 = o(\sqrt{n}/(\log p)^2)$, where $s_0$ indicates the number of true positives among the $p$ features. Notably, this framework achieves exact directional FDR control without any assumption on the amplitude of the unknown regression parameters, and does not require any knowledge of the distribution of covariates or the noise level. We test our method in synthetic and real data experiments to assess its performance and to corroborate our theoretical results.
false
false
false
false
false
false
true
false
false
true
false
false
false
false
false
false
false
false
92,455
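The selection-with-signs idea in 1803.04464 can be approximated, for intuition, by running Benjamini-Hochberg on two-sided p-values computed from debiased z-scores and reporting each selected variable with the sign of its statistic. This is a generic stand-in, not the paper's exact procedure; the function names and the level q are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def directional_bh(z, q=0.1):
    """BH step-up on two-sided p-values from (debiased) z-scores;
    each selected variable is reported with the sign of its z-score."""
    p = 2.0 * norm.sf(np.abs(z))
    order = np.argsort(p)
    m = len(p)
    passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
    sel = np.zeros(m, dtype=int)
    if passed.size:
        keep = order[: passed[-1] + 1]
        sel[keep] = np.sign(z[keep])
    return sel  # entries in {-1, 0, +1}

def directional_fdp(sel, true_sign):
    """Directional false discovery proportion: fraction of selections
    that are nulls or carry the wrong sign."""
    n_sel = np.count_nonzero(sel)
    errors = np.count_nonzero((sel != 0) & (sel != true_sign))
    return errors / max(n_sel, 1)
```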
2411.10585
Autonomous Sensor Exchange and Calibration for Cornstalk Nitrate Monitoring Robot
Interactive sensors are an important component of robotic systems but often require manual replacement due to wear and tear. Automating this process can enhance system autonomy and facilitate long-term deployment. We developed an autonomous sensor exchange and calibration system for an agriculture crop monitoring robot that inserts a nitrate sensor into cornstalks. A novel gripper and replacement mechanism, featuring a reliable funneling design, were developed to enable efficient and reliable sensor exchanges. To maintain consistent nitrate sensor measurement, an on-board sensor calibration station was integrated to provide in-field sensor cleaning and calibration. The system was deployed at the Ames Curtis Farm in June 2024, where it successfully inserted nitrate sensors with high accuracy into 30 cornstalks with a 77% success rate.
false
false
false
false
false
false
false
true
false
false
false
false
false
false
false
false
false
false
508,703
2202.06538
QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation
Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage. Most existing work on MQG has focused on exploring graph-based networks to equip the traditional sequence-to-sequence framework with reasoning ability. However, these models do not take full advantage of the constraint between questions and answers. Furthermore, studies on multi-hop question answering (QA) suggest that Transformers can replace the graph structure for multi-hop reasoning. Therefore, in this work, we propose a novel framework, QA4QG, a QA-augmented BART-based framework for MQG. It augments the standard BART model with an additional multi-hop QA module to further constrain the generated question. Our results on the HotpotQA dataset show that QA4QG outperforms all state-of-the-art models, with an increase of 8 BLEU-4 and 8 ROUGE points compared to the best results previously reported. Our work suggests the advantage of introducing pre-trained language models and a QA module for the MQG task.
false
false
false
false
true
false
false
false
true
false
false
false
false
false
false
false
false
false
280,262
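For readers unfamiliar with the BART backbone used by QA4QG (2202.06538), the snippet below shows the bare seq2seq plumbing for answer-aware question generation with Hugging Face Transformers. It is not QA4QG itself — the QA module and multi-hop constraints are absent — and without fine-tuning on HotpotQA-style pairs the raw bart-base output will not be a meaningful question; the "answer: ... context: ..." input format is an assumption.

```python
# pip install transformers torch
from transformers import BartForConditionalGeneration, BartTokenizer

tok = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Condition generation on the answer plus supporting passage, a common
# input format for answer-aware question generation.
answer = "Paris"
passage = "Paris has been the capital of France since the 10th century."
inputs = tok(f"answer: {answer} context: {passage}",
             return_tensors="pt", truncation=True, max_length=512)

# Beam search decoding; meaningful only after fine-tuning on QG pairs.
out = model.generate(**inputs, num_beams=4, max_length=32)
print(tok.decode(out[0], skip_special_tokens=True))
```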
2502.04139
Beyond the Final Layer: Hierarchical Query Fusion Transformer with Agent-Interpolation Initialization for 3D Instance Segmentation
3D instance segmentation aims to predict a set of object instances in a scene and represent them as binary foreground masks with corresponding semantic labels. Currently, transformer-based methods are gaining increasing attention due to their elegant pipelines, reduced manual selection of geometric properties, and superior performance. However, transformer-based methods fail to simultaneously maintain strong position and content information during query initialization. Additionally, because supervision is applied at each decoder layer, objects progressively disappear as the layers deepen. To overcome these hurdles, we introduce Beyond the Final Layer: Hierarchical Query Fusion Transformer with Agent-Interpolation Initialization for 3D Instance Segmentation (BFL). Specifically, an Agent-Interpolation Initialization Module is designed to generate resilient queries capable of achieving a balance between foreground coverage and content learning. Additionally, a Hierarchical Query Fusion Decoder is designed to retain low-overlap queries, mitigating the drop in recall as the layers deepen. Extensive experiments on the ScanNetV2, ScanNet200, ScanNet++ and S3DIS datasets demonstrate the superior performance of BFL.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
530,997
2302.04476
Towards Geospatial Foundation Models via Continual Pretraining
Geospatial technologies are becoming increasingly essential in our world for a wide range of applications, including agriculture, urban planning, and disaster response. To help improve the applicability and performance of deep learning models on these geospatial tasks, various works have begun investigating foundation models for this domain. Researchers have explored two prominent approaches for introducing such models in geospatial applications, but both have drawbacks in terms of limited performance benefit or prohibitive training cost. Therefore, in this work, we propose a novel paradigm for building highly effective geospatial foundation models with minimal resource cost and carbon impact. We first construct a compact yet diverse dataset from multiple sources to promote feature diversity, which we term GeoPile. Then, we investigate the potential of continual pretraining from large-scale ImageNet-22k models and propose a multi-objective continual pretraining paradigm, which leverages the strong representations of ImageNet while simultaneously providing the freedom to learn valuable in-domain features. Our approach outperforms previous state-of-the-art geospatial pretraining methods in an extensive evaluation on seven downstream datasets covering various tasks such as change detection, classification, multi-label classification, semantic segmentation, and super-resolution.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
344,720
2108.01785
Weakly Supervised Foreground Learning for Weakly Supervised Localization and Detection
Modern deep learning models require large amounts of accurately annotated data, a requirement that is often difficult to satisfy. Hence, weakly supervised tasks, including weakly supervised object localization (WSOL) and detection (WSOD), have recently received attention in the computer vision community. In this paper, we motivate and propose the weakly supervised foreground learning (WSFL) task by showing that both WSOL and WSOD can be greatly improved if groundtruth foreground masks are available. More importantly, we propose a complete WSFL pipeline with low computational cost, which generates pseudo boxes, learns foreground masks, and does not need any localization annotations. With the help of foreground masks predicted by our WSFL model, we achieve 72.97% correct localization accuracy on CUB for WSOL, and 55.7% mean average precision on VOC07 for WSOD, thereby establishing a new state of the art for both tasks. Our WSFL model also shows excellent transfer ability.
false
false
false
false
false
false
false
false
false
false
false
true
false
false
false
false
false
false
249,123
1711.07478
Implementing the Deep Q-Network
The Deep Q-Network proposed by Mnih et al. [2015] has become a benchmark and building block for much deep reinforcement learning research. However, replicating results for complex systems is often challenging, since original scientific publications are not always able to describe in detail every important parameter setting and software engineering solution. In this paper, we present results from our work reproducing the results of the DQN paper. We highlight key areas of the implementation that were not covered in great detail in the original paper, in order to make it easier for researchers to replicate these results, including termination conditions and gradient descent algorithms. Finally, we discuss methods for improving the computational performance and provide our own implementation, which is designed to work with a range of domains, and not just the original Arcade Learning Environment [Bellemare et al., 2013].
false
false
false
false
true
false
true
false
false
false
false
false
false
false
false
false
false
false
85,005
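Since 1711.07478 is explicitly about reproducing DQN, a minimal PyTorch sketch of the core update it discusses may help: regress Q(s, a) onto the bootstrapped target computed with a separate target network, using a Huber-style loss. Replay buffer, epsilon-greedy acting, and frame preprocessing are omitted; the toy network and batch shapes are assumptions.

```python
import torch
import torch.nn as nn

def dqn_update(online, target, batch, optimizer, gamma=0.99):
    """One DQN gradient step: regress Q(s,a) onto the bootstrapped
    target r + gamma * max_a' Q_target(s', a') for non-terminal s'."""
    s, a, r, s_next, done = batch
    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = target(s_next).max(dim=1).values
        y = r + gamma * (1.0 - done) * q_next   # terminal states get no bootstrap
    loss = nn.functional.smooth_l1_loss(q, y)   # Huber-style loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Usage with a toy 4-dim state, 2-action MLP and a random batch of 32
online = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 2))
target.load_state_dict(online.state_dict())     # periodic hard sync
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
batch = (torch.randn(32, 4), torch.randint(0, 2, (32,)),
         torch.randn(32), torch.randn(32, 4), torch.zeros(32))
print(dqn_update(online, target, batch, opt))
```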