id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1910.13257 | Deep Learning for Plasma Tomography and Disruption Prediction from Bolometer Data | The use of deep learning is facilitating a wide range of data processing tasks in many areas. The analysis of fusion data is no exception, since there is a need to process large amounts of data collected from the diagnostic systems attached to a fusion device. Fusion data involve images and time series, and are natural candidates for the use of convolutional and recurrent neural networks. In this work, we describe how CNNs can be used to reconstruct the plasma radiation profile, and we discuss the potential of using RNNs for disruption prediction based on the same input data. Both approaches have been applied at JET using data from a multi-channel diagnostic system. Similar approaches can be applied to other fusion devices and diagnostics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 151,343 |
2408.03692 | Asynchronous Credit Assignment Framework for Multi-Agent Reinforcement Learning | Credit assignment is a core problem that distinguishes agents' marginal contributions for optimizing cooperative strategies in multi-agent reinforcement learning (MARL). Current credit assignment methods usually assume synchronous decision-making among agents. However, a prerequisite for many realistic cooperative tasks is asynchronous decision-making by agents, without waiting for others to avoid disastrous consequences. To address this issue, we propose an asynchronous credit assignment framework with a problem model called ADEX-POMDP and a multiplicative value decomposition (MVD) algorithm. ADEX-POMDP is an asynchronous problem model with extra virtual agents for a decentralized partially observable Markov decision process. We prove that ADEX-POMDP preserves both the task equilibrium and the algorithm convergence. MVD utilizes multiplicative interaction to efficiently capture the interactions of asynchronous decisions, and we theoretically demonstrate its advantages in handling asynchronous tasks. Experimental results show that on two asynchronous decision-making benchmarks, Overcooked and POAC, MVD not only consistently outperforms state-of-the-art MARL methods but also provides interpretability for asynchronous cooperation. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 479,120 |
1907.06066 | The Use of Gaussian Processes in System Identification | Gaussian processes are used in machine learning to learn input-output mappings from observed data. Gaussian process regression is based on imposing a Gaussian process prior on the unknown regressor function and statistically conditioning it on the observed data. In system identification, Gaussian processes are used to form time series prediction models such as non-linear finite-impulse response (NFIR) models as well as non-linear autoregressive (NARX) models. Gaussian process state-space models (GPSS) can be used to learn the dynamic and measurement models for a state-space representation of the input-output data. Temporal and spatio-temporal Gaussian processes can be directly used to form regressors on the data in the time domain. The aim of this article is to briefly outline the main directions in system identification methods using Gaussian processes. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 138,519 |
2103.10427 | The Low-Rank Simplicity Bias in Deep Networks | Modern deep neural networks are highly over-parameterized compared to the data on which they are trained, yet they often generalize remarkably well. A flurry of recent work has asked: why do deep networks not overfit to their training data? In this work, we make a series of empirical observations that investigate and extend the hypothesis that deeper networks are inductively biased to find solutions with lower effective rank embeddings. We conjecture that this bias exists because the volume of functions that map to low effective rank embeddings increases with depth. We show empirically that our claim holds true on finite width linear and non-linear models on practical learning paradigms and show that on natural data, these are often the solutions that generalize well. We then show that the simplicity bias exists at both initialization and after training and is resilient to hyper-parameters and learning methods. We further demonstrate how linear over-parameterization of deep non-linear models can be used to induce low-rank bias, improving generalization performance on CIFAR and ImageNet without changing the modeling capacity. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 225,449 |
2410.10508 | Everyday Speech in the Indian Subcontinent | India has 1369 languages of which 22 are official. About 13 different scripts are used to represent these languages. A Common Label Set (CLS) was developed based on phonetics to address the issue of the large vocabulary of units required in the End to End (E2E) framework for multilingual synthesis. This reduced the footprint of the synthesizer and also enabled fast adaptation to new languages which had similar phonotactics, provided language scripts belonged to the same family. In this paper, we provide new insights into speech synthesis, where the script belongs to one family, while the phonotactics comes from another. Indian language text is first converted to CLS, and then a synthesizer that matches the phonotactics of the language is used. Quality akin to that of a native speaker is obtained for Sanskrit and Konkani with zero adaptation data, using Kannada and Marathi synthesizers respectively. Further, this approach lends itself to seamless code switching across 13 Indian languages and English in a given native speaker's voice. | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 498,109 |
2305.03796 | Transformer Working Memory Enables Regular Language Reasoning and Natural Language Length Extrapolation | Unlike recurrent models, conventional wisdom has it that Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on the task of natural language length extrapolation and surprisingly find that it rediscovers the local windowed attention effect deemed necessary in prior work for length extrapolation. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 362,512 |
2211.09710 | Style Classification of Rabbinic Literature for Detection of Lost Midrash Tanhuma Material | Midrash collections are complex rabbinic works that consist of text in multiple languages, which evolved through long processes of unstable oral and written transmission. Determining the origin of a given passage in such a compilation is not always straightforward and is often a matter of dispute among scholars, yet it is essential for scholars' understanding of the passage and its relationship to other texts in the rabbinic corpus. To help solve this problem, we propose a system for classification of rabbinic literature based on its style, leveraging recent advances in natural language processing for Hebrew texts. Additionally, we demonstrate how this method can be applied to uncover lost material from a specific midrash genre, Tan\d{h}uma-Yelammedenu, that has been preserved in later anthologies. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 331,061 |
1508.00639 | Wiretapped Signal Leakage Minimization for Secure Multiuser MIMO Systems via Interference Alignment | In this paper, interference alignment (IA) is designed for secure multiuser multiple-input multiple-output systems in the presence of an eavesdropper. The proposed IA technique designs the transmit precoding and receiving subspace matrices to minimize both the total inter-main-link interference and the wiretapped signals. With perfect channel state information known at the transmitters (Txs), the cost function of the optimization problem is alternatively minimized over the precoding matrices at the Txs and the receiving subspace matrices at the receivers (Rxs) and the eavesdropper. The feasibility condition and the proof of convergence of the proposed IA approach are provided. Numerical results reveal a significant improvement in terms of average secrecy sum rate of our IA algorithm as compared to the conventional IA design. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 45,693 |
2401.07122 | Decentralized Federated Learning with Asynchronous Parameter Sharing for Large-scale IoT Networks | Federated learning (FL) enables wireless terminals to collaboratively learn a shared parameter model while keeping all the training data on the devices themselves. Parameter sharing can be synchronous or asynchronous: the former transmits parameters as blocks or frames and waits until all transmissions finish, whereas the latter provides messages about the status of pending and failed parameter transmission requests. Whether synchronous or asynchronous parameter sharing is applied, the learning model must adapt to distinct network architectures, as an improper learning model will deteriorate learning performance and, even worse, lead to model divergence for the asynchronous transmission in resource-limited large-scale Internet-of-Things (IoT) networks. This paper proposes a decentralized learning model and develops an asynchronous parameter-sharing algorithm for resource-limited distributed IoT networks. This decentralized learning model approaches a convex function as the number of nodes increases, and its learning process converges to a global stationary point with a higher probability than the centralized FL model. Moreover, by jointly accounting for the convergence bound of federated learning and the transmission delay of wireless communications, we develop a node scheduling and bandwidth allocation algorithm to minimize the transmission delay. Extensive simulation results corroborate the effectiveness of the distributed algorithm in terms of fast learning model convergence and low transmission delay. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 421,426 |
1412.7964 | Knowledge Propagation in Contextualized Knowledge Repositories: an Experimental Evaluation | As the interest in the representation of context-dependent knowledge in the Semantic Web has been recognized, a number of logic-based solutions have been proposed in this regard. In our recent works, in response to this need, we presented the description logic-based Contextualized Knowledge Repository (CKR) framework. CKR is not only a theoretical framework, but it has been effectively implemented over state-of-the-art tools for the management of Semantic Web data: inference inside and across contexts has been realized in the form of forward SPARQL-based rules over different RDF named graphs. In this paper we present the first evaluation results for this CKR implementation. In particular, in the first experiment we study its scalability with respect to different reasoning regimes. In the second experiment we analyze the effects of knowledge propagation on the computation of inferences. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 38,872 |
2207.09228 | Image Super-Resolution with Deep Dictionary | Since the first success of Dong et al., the deep-learning-based approach has become dominant in the field of single-image super-resolution. This replaces all the handcrafted image processing steps of traditional sparse-coding-based methods with a deep neural network. In contrast to sparse-coding-based methods, which explicitly create high/low-resolution dictionaries, the dictionaries in deep-learning-based methods are implicitly acquired as a nonlinear combination of multiple convolutions. One disadvantage of deep-learning-based methods is that their performance is degraded for images created differently from the training dataset (out-of-domain images). We propose an end-to-end super-resolution network with a deep dictionary (SRDD), where a high-resolution dictionary is explicitly learned without sacrificing the advantages of deep learning. Extensive experiments show that explicit learning of the high-resolution dictionary makes the network more robust to out-of-domain test images while maintaining performance on in-domain test images. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 308,840 |
2104.02775 | Looking into Your Speech: Learning Cross-modal Affinity for Audio-visual Speech Separation | In this paper, we address the problem of separating individual speech signals from videos using audio-visual neural processing. Most conventional approaches utilize frame-wise matching criteria to extract shared information between co-occurring audio and video. Thus, their performance heavily depends on the accuracy of audio-visual synchronization and the effectiveness of their representations. To overcome the frame discontinuity problem between two modalities due to transmission delay mismatch or jitter, we propose a cross-modal affinity network (CaffNet) that learns global correspondence as well as locally-varying affinities between audio and visual streams. Given that the global term provides stability over a temporal sequence at the utterance level, this resolves the label permutation problem characterized by inconsistent assignments. By extending the proposed cross-modal affinity on the complex network, we further improve the separation performance in the complex spectral domain. Experimental results verify that the proposed methods outperform conventional ones on various datasets, demonstrating their advantages in real-world scenarios. | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,845 |
2308.14555 | Kernel Limit of Recurrent Neural Networks Trained on Ergodic Data Sequences | Mathematical methods are developed to characterize the asymptotics of recurrent neural networks (RNN) as the number of hidden units, data samples in the sequence, hidden state updates, and training steps simultaneously grow to infinity. In the case of an RNN with a simplified weight matrix, we prove the convergence of the RNN to the solution of an infinite-dimensional ODE coupled with the fixed point of a random algebraic equation. The analysis requires addressing several challenges which are unique to RNNs. In typical mean-field applications (e.g., feedforward neural networks), discrete updates are of magnitude $\mathcal{O}(\frac{1}{N})$ and the number of updates is $\mathcal{O}(N)$. Therefore, the system can be represented as an Euler approximation of an appropriate ODE/PDE, which it will converge to as $N \rightarrow \infty$. However, the RNN hidden layer updates are $\mathcal{O}(1)$. Therefore, RNNs cannot be represented as a discretization of an ODE/PDE and standard mean-field techniques cannot be applied. Instead, we develop a fixed point analysis for the evolution of the RNN memory states, with convergence estimates in terms of the number of update steps and the number of hidden units. The RNN hidden layer is studied as a function in a Sobolev space, whose evolution is governed by the data sequence (a Markov chain), the parameter updates, and its dependence on the RNN hidden layer at the previous time step. Due to the strong correlation between updates, a Poisson equation must be used to bound the fluctuations of the RNN around its limit equation. These mathematical methods give rise to the neural tangent kernel (NTK) limits for RNNs trained on data sequences as the number of data samples and size of the neural network grow to infinity. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 388,370 |
2311.03561 | Sea You Later: Metadata-Guided Long-Term Re-Identification for UAV-Based Multi-Object Tracking | Re-identification (ReID) in multi-object tracking (MOT) for UAVs in maritime computer vision has been challenging for several reasons. More specifically, short-term ReID is difficult due to the nature of the characteristics of small targets and the sudden movement of the drone's gimbal. Long-term ReID suffers from the lack of useful appearance diversity. In response to these challenges, we present an adaptable motion-based MOT algorithm, called Metadata Guided MOT (MG-MOT). This algorithm effectively merges short-term tracking data into coherent long-term tracks, harnessing crucial metadata from UAVs, including GPS position, drone altitude, and camera orientations. Extensive experiments are conducted to validate the efficacy of our MOT algorithm. Utilizing the challenging SeaDroneSee tracking dataset, which encompasses the aforementioned scenarios, we achieve a much-improved performance in the latest edition of the UAV-based Maritime Object Tracking Challenge with a state-of-the-art HOTA of 69.5% and an IDF1 of 85.9% on the testing split. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 405,895 |
2311.07981 | Benchmarking Individual Tree Mapping with Sub-meter Imagery | There is a rising interest in mapping trees using satellite or aerial imagery, but there is no standardized evaluation protocol for comparing and enhancing methods. In dense canopy areas, the high variability of tree sizes and their spatial proximity makes it arduous to define the quality of the predictions. Concurrently, object-centric approaches such as bounding box detection usually perform poorly on small and dense objects. It thus remains unclear what the ideal framework for individual tree mapping is, with regard to detection and segmentation approaches, convolutional neural networks, and transformers. In this paper, we introduce an evaluation framework suited for individual tree mapping in any physical environment, with annotation costs and applicative goals in mind. We review and compare different approaches and deep architectures, and introduce a new method that we experimentally prove to be a good compromise between segmentation and detection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 407,544 |
2104.00905 | Background-Aware Pooling and Noise-Aware Loss for Weakly-Supervised Semantic Segmentation | We address the problem of weakly-supervised semantic segmentation (WSSS) using bounding box annotations. Although object bounding boxes are good indicators to segment corresponding objects, they do not specify object boundaries, making it hard to train convolutional neural networks (CNNs) for semantic segmentation. We find that background regions are perceptually consistent in part within an image, and this can be leveraged to discriminate foreground and background regions inside object bounding boxes. To implement this idea, we propose a novel pooling method, dubbed background-aware pooling (BAP), that focuses more on aggregating foreground features inside the bounding boxes using attention maps. This allows us to extract high-quality pseudo segmentation labels to train CNNs for semantic segmentation, but the labels still contain noise, especially at object boundaries. To address this problem, we also introduce a noise-aware loss (NAL) that makes the networks less susceptible to incorrect labels. Experimental results demonstrate that learning with our pseudo labels already outperforms state-of-the-art weakly- and semi-supervised methods on the PASCAL VOC 2012 dataset, and the NAL further boosts the performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 228,170 |
1903.11341 | Diversity with Cooperation: Ensemble Methods for Few-Shot Classification | Few-shot classification consists of learning a predictive model that is able to effectively adapt to a new class, given only a few annotated samples. To solve this challenging problem, meta-learning has become a popular paradigm that advocates the ability to "learn to adapt". Recent works have shown, however, that simple learning strategies without meta-learning could be competitive. In this paper, we go a step further and show that by addressing the fundamental high-variance issue of few-shot learning classifiers, it is possible to significantly outperform current meta-learning techniques. Our approach consists of designing an ensemble of deep networks to leverage the variance of the classifiers, and introducing new strategies to encourage the networks to cooperate, while encouraging prediction diversity. Evaluation is conducted on the mini-ImageNet and CUB datasets, where we show that even a single network obtained by distillation yields state-of-the-art results. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 125,496 |
2305.04853 | The Current State of Summarization | With the explosive growth of textual information, summarization systems have become increasingly important. This work aims to concisely indicate the current state of the art in abstractive text summarization. As part of this, we outline the current paradigm shifts towards pre-trained encoder-decoder models and large autoregressive language models. Additionally, we delve further into the challenges of evaluating summarization systems and the potential of instruction-tuned models for zero-shot summarization. Finally, we provide a brief overview of how summarization systems are currently being integrated into commercial applications. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 362,928 |
2210.05252 | Graph Neural Network Policies and Imitation Learning for Multi-Domain Task-Oriented Dialogues | Task-oriented dialogue systems are designed to achieve specific goals while conversing with humans. In practice, they may have to handle simultaneously several domains and tasks. The dialogue manager must therefore be able to take into account domain changes and plan over different domains/tasks in order to deal with multi-domain dialogues. However, learning with reinforcement in such a context becomes difficult because the state-action dimension is larger while the reward signal remains scarce. Our experimental results suggest that structured policies based on graph neural networks combined with different degrees of imitation learning can effectively handle multi-domain dialogues. The reported experiments underline the benefit of structured policies over standard policies. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 322,775 |
1908.06075 | An Exploratory Analysis of the Latent Structure of Process Data via Action Sequence Autoencoder | Computer simulations have become a popular tool for assessing complex skills such as problem-solving skills. Log files of computer-based items record the entire human-computer interactive processes for each respondent. The response processes are very diverse, noisy, and of nonstandard formats. Few generic methods have been developed for exploiting the information contained in process data. In this article, we propose a method to extract latent variables from process data. The method utilizes a sequence-to-sequence autoencoder to compress response processes into standard numerical vectors. It does not require prior knowledge of the specific items and human-computer interaction patterns. The proposed method is applied to both simulated and real process data to demonstrate that the resulting latent variables extract useful information from the response processes. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 141,902 |
2305.04324 | A generalized network level disruption strategy selection model for urban public transport systems | A fast recovery from disruptions is of vital importance for the reliability of transit systems. This study presents a new attempt to tackle the transit disruption mitigation problem in a comprehensive and hierarchical way. A network level strategy selection optimization model is formulated as a joint routing and resource allocation (nJRRA) problem. By constraining the problem further into an epsilon-constrained nJRRA problem, classic solution algorithms can be applied to solve the quadratically constrained quadratic program (QCQP). On top of this "basic model", we propose adding a decision to delay the resource allocation decisions up to a maximum initiation time when the incident duration is stochastic. To test the models, a quasi-dynamic evaluation program with a given incident duration distribution is constructed using discretized time steps and discrete distributions. Five different demand patterns and four different disruption duration distributions (20 combinations) are tested on a toy transit network. The results show that the two models outperform benchmark strategies such as using only line level adjustment or only bus bridging. They also highlight conditions when delaying the decision is preferred. | false | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | 362,722 |
2112.11254 | Design And Analysis Of Three-Output Open Differential with 3-DOF | This paper presents a novel passive three-output differential with three degrees of freedom (3DOF), that translates motion and torque from a single input to three outputs. The proposed Three-Output Open Differential is designed such that its functioning is analogous to the functioning of a traditional two-output open differential. That is, the differential translates equal motion and torque to all its three outputs when the outputs are unconstrained or are subjected to equivalent load conditions. The introduced design is the first differential with three outputs to realise this outcome. The differential action between the three outputs is realised passively by a symmetric arrangement of three two-output open differentials and three two-input open differentials. The resulting differential mechanism achieves the novel result of equivalent input to output angular velocity and torque relations for all its three outputs. Furthermore, Three-Output Open Differential achieves the novel result for differentials with more than two outputs where each of its outputs shares equivalent angular velocity and torque relations with all the other outputs. The kinematics and dynamics of the Three-Output Open Differential are derived using the bond graph method. In addition, the merits of the differential mechanism along with its current and potential applications are presented. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 272,659 |
2403.11802 | Counting-Stars: A Multi-evidence, Position-aware, and Scalable Benchmark for Evaluating Long-Context Large Language Models | Despite recent efforts to develop large language models with robust long-context capabilities, the lack of long-context benchmarks means that relatively little is known about their performance. To alleviate this gap, in this paper, we propose \textbf{Counting-Stars}, a multi-evidence, position-aware, and scalable benchmark designed to evaluate the multi-evidence retrieval capabilities of long-context LLMs. \textbf{Counting-Stars} comprises two counting-based multiple pieces of evidence retrieval sub-tasks: searching and reasoning. Using Counting-Stars, we conduct experiments to evaluate several long-context LLMs, including GPT-4 Turbo, Gemini 1.5 Pro, Claude3 Opus, GLM-4, and Moonshot-v1. Extensive experimental results demonstrate that Gemini 1.5 Pro achieves the best overall results, while GPT-4 Turbo exhibits the most stable performance across various tasks. Furthermore, our analysis of these LLMs, which have been extended to handle long-context scenarios, indicates that significant room for improvement remains as the length of the input context and the complexity of the tasks increase. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 438,868 |
2012.04846 | SnapMix: Semantically Proportional Mixing for Augmenting Fine-grained Data | Data mixing augmentation has proved effective in training deep models. Recent methods mix labels mainly based on the mixture proportion of image pixels. As the main discriminative information of a fine-grained image usually resides in subtle regions, methods along this line are prone to heavy label noise in fine-grained recognition. We propose in this paper a novel scheme, termed Semantically Proportional Mixing (SnapMix), which exploits class activation maps (CAM) to lessen the label noise in augmenting fine-grained data. SnapMix generates the target label for a mixed image by estimating its intrinsic semantic composition; it allows for asymmetric mixing operations and ensures semantic correspondence between synthetic images and target labels. Experiments show that our method consistently outperforms existing mixing-based approaches on various datasets and under different network depths. Furthermore, by incorporating mid-level features, the proposed SnapMix achieves top-level performance, demonstrating its potential to serve as a solid baseline for fine-grained recognition. Our code is available at https://github.com/Shaoli-Huang/SnapMix.git. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 210,590 |
1407.1166 | On the Capacity of CSI Based Transmission Link Selection in Decode-and-Forward Cooperative System | In this paper, we study the problem of best transmission link selection in a decode-and-forward (DF) cooperative system from a capacity point of view. The transmission link can be a cooperative (via a relay) or direct link between the source and destination nodes. In a two-hop DF system with multiple relays and a direct link between the source and destination, the transmission link selection can be performed based on full or partial channel state information (CSI) of all links involved in cooperation. We derive the analytical ergodic capacity of full and partial CSI based path selection schemes in the DF cooperative system. Further, the full and partial CSI based link selection schemes are compared with the help of these expressions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 34,407 |
2301.00888 | SAFEMYRIDES: Application of Decentralized Control Edge-Computing to
Ridesharing Monitoring Services | Edge computing is changing the face of many industries and services. Common edge computing models offload computation, which is prone to security risks and privacy violations. However, advances in deep learning have enabled Internet of Things (IoT) devices to make decisions and run cognitive tasks locally. This research introduces a decentralized-control edge model where most computation and decisions are moved to the IoT level. The model aims at decreasing communication to the edge, which in return enhances efficiency and decreases latency. The model also avoids data transfer, which raises security and privacy risks. To examine the model, we developed SAFEMYRIDES, a scene-aware ridesharing monitoring system where smartphones detect violations at runtime. Current real-time monitoring systems are costly and require continuous network connectivity. The system uses an optimized deep learning model that runs locally on IoT devices to detect violations in ridesharing and records violation incidences. The system would enhance safety and security in ridesharing without violating privacy. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 339,054
2206.13500 | Neural Neural Textures Make Sim2Real Consistent | Unpaired image translation algorithms can be used for sim2real tasks, but many fail to generate temporally consistent results. We present a new approach that combines differentiable rendering with image translation to achieve temporal consistency over indefinite timescales, using surface consistency losses and \emph{neural neural textures}. We call this algorithm TRITON (Texture Recovering Image Translation Network): an unsupervised, end-to-end, stateless sim2real algorithm that leverages the underlying 3D geometry of input scenes by generating realistic-looking learnable neural textures. By settling on a particular texture for the objects in a scene, we ensure consistency between frames statelessly. Unlike previous algorithms, TRITON is not limited to camera movements -- it can handle the movement of objects as well, making it useful for downstream tasks such as robotic manipulation. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | true | 304,995 |
1611.00962 | Multitask Protein Function Prediction Through Task Dissimilarity | Automated protein function prediction is a challenging problem with distinctive features, such as the hierarchical organization of protein functions and the scarcity of annotated proteins for most biological functions. We propose a multitask learning algorithm addressing both issues. Unlike standard multitask algorithms, which use task (protein functions) similarity information as a bias to speed up learning, we show that dissimilarity information enforces separation of rare class labels from frequent class labels, and for this reason is better suited for solving unbalanced protein function prediction problems. We support our claim by showing that a multitask extension of the label propagation algorithm empirically works best when the task relatedness information is represented using a dissimilarity matrix as opposed to a similarity matrix. Moreover, the experimental comparison carried out on three model organisms shows that our method has a more stable performance in both "protein-centric" and "function-centric" evaluation settings. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 63,305
2109.06356 | An Iterative Approach to Improving Solution Quality for AC Optimal Power
Flow Problems | The existence of multiple solutions to AC optimal power flow (ACOPF) problems has been noted for decades. Existing solvers are generally successful in finding local solutions, which satisfy first and second order optimality conditions, but may not be globally optimal. In this paper, we propose a simple iterative approach to improve the quality of solutions to ACOPF problems. First, we call an existing solver for the ACOPF problem. From the solution and the associated dual variables, we form a partial Lagrangian. Then we optimize this partial Lagrangian and use its solution as a warm start to call the solver again for the ACOPF problem. By repeating this process, we can iteratively improve the solution quality, moving from local solutions to global ones. We show the effectiveness of our algorithm on standard IEEE networks. The simulation results show that our algorithm can escape from local solutions to achieve global optima within a few iterations. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 255,117
2407.00909 | Heterogeneous Graph-based Framework with Disentangled Representations
Learning for Multi-target Cross Domain Recommendation | CDR (Cross-Domain Recommendation), i.e., leveraging information from multiple domains, is a critical solution to the data sparsity problem in recommendation systems. The majority of previous research either focused on single-target CDR (STCDR), utilizing data from the source domains to improve the model's performance on the target domain, or applied dual-target CDR (DTCDR) by integrating data from the source and target domains. In addition, multi-target CDR (MTCDR) is a generalization of DTCDR, which is able to capture the links among different domains. In this paper we present HGDR (Heterogeneous Graph-based Framework with Disentangled Representations Learning), an end-to-end heterogeneous network architecture where graph convolutional layers are applied to model relations among different domains, while utilizing the idea of disentangled representations for domain-shared and domain-specific information. First, a shared heterogeneous graph is generated by gathering users and items from several domains without any further side information. Second, we use HGDR to compute disentangled representations for users and items in all domains. Experiments on real-world datasets and online A/B tests prove that our proposed model can transmit information among domains effectively and reach SOTA performance. The code can be found here: https://github.com/NetEase-Media/HGCDR. | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | false | 469,053
2011.12636 | Improving Augmentation and Evaluation Schemes for Semantic Image
Synthesis | Despite data augmentation being a de facto technique for boosting the performance of deep neural networks, little attention has been paid to developing augmentation strategies for generative adversarial networks (GANs). To this end, we introduce a novel augmentation scheme designed specifically for GAN-based semantic image synthesis models. We propose to randomly warp object shapes in the semantic label maps used as an input to the generator. The local shape discrepancies between the warped and non-warped label maps and images enable the GAN to learn better the structural and geometric details of the scene and thus to improve the quality of generated images. While benchmarking the augmented GAN models against their vanilla counterparts, we discover that the quantification metrics reported in the previous semantic image synthesis studies are strongly biased towards specific semantic classes as they are derived via an external pre-trained segmentation network. We therefore propose to improve the established semantic image synthesis evaluation scheme by analyzing separately the performance of generated images on the biased and unbiased classes for the given segmentation network. Finally, we show strong quantitative and qualitative improvements obtained with our augmentation scheme, on both class splits, using state-of-the-art semantic image synthesis models across three different datasets. On average across COCO-Stuff, ADE20K and Cityscapes datasets, the augmented models outperform their vanilla counterparts by ~3 mIoU and ~10 FID points. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | true | 208,222 |
2103.06569 | ANN-aided incremental multiscale-remodelling-based finite strain
poroelasticity | Mechanical modelling of poroelastic media under finite strain is usually carried out via phenomenological models neglecting complex micro-macro scales interdependency. One reason is that the mathematical two-scale analysis is only straightforward assuming infinitesimal strain theory. Exploiting the potential of ANNs for fast and reliable upscaling and localisation procedures, we propose an incremental numerical approach that considers rearrangement of the cell properties based on its current deformation, which leads to the remodelling of the macroscopic model after each time increment. This computational framework is valid for finite strain and large deformation problems while it ensures infinitesimal strain increments within time steps. The full effects of the interdependency between the properties and response of macro and micro scales are considered for the first time providing more accurate predictive analysis of fluid-saturated porous media which is studied via a numerical consolidation example. Furthermore, the (nonlinear) deviation from Darcy's law is captured in fluid filtration numerical analyses. Finally, the brain tissue mechanical response under uniaxial cyclic test is simulated and studied. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 224,350 |
2106.10947 | Leveraging Conditional Generative Models in a General Explanation
Framework of Classifier Decisions | Providing a human-understandable explanation of classifiers' decisions has become imperative to generate trust in their use for day-to-day tasks. Although many works have addressed this problem by generating visual explanation maps, they often provide noisy and inaccurate results forcing the use of heuristic regularization unrelated to the classifier in question. In this paper, we propose a new general perspective of the visual explanation problem overcoming these limitations. We show that visual explanation can be produced as the difference between two generated images obtained via two specific conditional generative models. Both generative models are trained using the classifier to explain and a database to enforce the following properties: (i) All images generated by the first generator are classified similarly to the input image, whereas the second generator's outputs are classified oppositely. (ii) Generated images belong to the distribution of real images. (iii) The distances between the input image and the corresponding generated images are minimal so that the difference between the generated elements only reveals relevant information for the studied classifier. Using symmetrical and cyclic constraints, we present two different approximations and implementations of the general formulation. Experimentally, we demonstrate significant improvements w.r.t the state-of-the-art on three different public data sets. In particular, the localization of regions influencing the classifier is consistent with human annotations. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 242,225 |
1009.1498 | A Statistical Measure of Complexity | In this chapter, a statistical measure of complexity is introduced and some of its properties are discussed. Also, some straightforward applications are shown. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 7,509 |
2309.08642 | A Stochastic Online Forecast-and-Optimize Framework for Real-Time Energy
Dispatch in Virtual Power Plants under Uncertainty | Aggregating distributed energy resources in power systems significantly increases uncertainties, in particular caused by the fluctuation of renewable energy generation. This issue has driven the necessity of widely exploiting advanced predictive control techniques under uncertainty to ensure long-term economics and decarbonization. In this paper, we propose a real-time uncertainty-aware energy dispatch framework, which is composed of two key elements: (i) A hybrid forecast-and-optimize sequential task, integrating deep learning-based forecasting and stochastic optimization, where these two stages are connected by uncertainty estimation at multiple temporal resolutions; (ii) An efficient online data augmentation scheme, jointly involving model pre-training and online fine-tuning stages. In this way, the proposed framework is capable of rapidly adapting to the real-time data distribution, targeting uncertainties caused by data drift, model discrepancy and environment perturbations in the control process, and finally realizing an optimal and robust dispatch solution. The proposed framework won the championship in the CityLearn Challenge 2022, which provided an influential opportunity to investigate the potential of AI applications in the energy domain. In addition, comprehensive experiments are conducted to interpret its effectiveness in the real-life scenario of smart building energy management. | false | false | false | false | true | false | true | false | false | false | true | false | false | false | false | false | false | false | 392,270
2210.01834 | Invariant Aggregator for Defending against Federated Backdoor Attacks | Federated learning enables training high-utility models across several clients without directly sharing their private data. As a downside, the federated setting makes the model vulnerable to various adversarial attacks in the presence of malicious clients. Despite the theoretical and empirical success in defending against attacks that aim to degrade models' utility, defense against backdoor attacks that increase model accuracy on backdoor samples exclusively without hurting the utility on other samples remains challenging. To this end, we first analyze the failure modes of existing defenses over a flat loss landscape, which is common for well-designed neural networks such as Resnet (He et al., 2015) but is often overlooked by previous works. Then, we propose an invariant aggregator that redirects the aggregated update to invariant directions that are generally useful via selectively masking out the update elements that favor few and possibly malicious clients. Theoretical results suggest that our approach provably mitigates backdoor attacks and remains effective over flat loss landscapes. Empirical results on three datasets with different modalities and varying numbers of clients further demonstrate that our approach mitigates a broad class of backdoor attacks with a negligible cost on the model utility. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 321,417 |
2501.15223 | Efficient and Interpretable Neural Networks Using Complex Lehmer
Transform | We propose an efficient and interpretable neural network with a novel activation function called the weighted Lehmer transform. This new activation function enables adaptive feature selection and extends to the complex domain, capturing phase-sensitive and hierarchical relationships within data. Notably, it provides greater interpretability and transparency compared to existing machine learning models, facilitating a deeper understanding of its functionality and decision-making processes. We analyze the mathematical properties of both real-valued and complex-valued Lehmer activation units and demonstrate their applications in modeling nonlinear interactions. Empirical evaluations demonstrate that our proposed neural network achieves competitive accuracy on benchmark datasets with significantly improved computational efficiency. A single layer of real-valued or complex-valued Lehmer activation units is shown to deliver state-of-the-art performance, balancing efficiency with interpretability. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 527,456 |
2306.15590 | A Mixed-Integer Approach for Motion Planning of Nonholonomic Robots
under Visible Light Communication Constraints | This work addresses the problem of motion planning for a group of nonholonomic robots under Visible Light Communication (VLC) connectivity requirements. In particular, we consider an inspection task performed by a Robot Chain Control System (RCCS), where a leader must visit relevant regions of an environment while the remaining robots operate as relays, maintaining the connectivity between the leader and a base station. We leverage Mixed-Integer Linear Programming (MILP) to design a trajectory planner that can coordinate the RCCS, minimizing time and control effort while also handling the issues of directed Line-Of-Sight (LOS), connectivity over directed networks, and the nonlinearity of the robots' dynamics. The efficacy of the proposal is demonstrated with realistic simulations in the Gazebo environment using the Turtlebot3 robot platform. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | true | false | false | false | 376,071 |
2006.12166 | Open Source Software for Efficient and Transparent Reviews | To help researchers conduct a systematic review or meta-analysis as efficiently and transparently as possible, we designed a tool (ASReview) to accelerate the step of screening titles and abstracts. For many tasks - including but not limited to systematic reviews and meta-analyses - the scientific literature needs to be checked systematically. Currently, scholars and practitioners screen thousands of studies by hand to determine which studies to include in their review or meta-analysis. This is error prone and inefficient because of extremely imbalanced data: only a fraction of the screened studies is relevant. The future of systematic reviewing will be an interaction with machine learning algorithms to deal with the enormous increase of available text. We therefore developed an open source machine learning-aided pipeline applying active learning: ASReview. We demonstrate by means of simulation studies that ASReview can yield far more efficient reviewing than manual reviewing, while providing high quality. Furthermore, we describe the options of the free and open source research software and present the results from user experience tests. We invite the community to contribute to open source projects such as our own that provide measurable and reproducible improvements over current practice. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 183,501 |
2410.01844 | Urban Anomalies: A Simulated Human Mobility Dataset with Injected
Anomalies | Human mobility anomaly detection based on location is essential in areas such as public health, safety, welfare, and urban planning. Developing models and approaches for location-based anomaly detection requires a comprehensive dataset. However, privacy concerns and the absence of ground truth hinder the availability of publicly available datasets. With this paper, we provide extensive simulated human mobility datasets featuring various anomaly types created using an existing Urban Patterns of Life Simulation. To create these datasets, we inject changes in the logic of individual agents to change their behavior. Specifically, we create four types of anomalous agent behavior by (1) changing the agents' appetite (causing agents to have meals more frequently), (2) changing their group of interest (causing agents to interact with different agents from another group), (3) changing their social place selection (causing agents to visit different recreational places), and (4) changing their work schedule (causing agents to skip work). For each type of anomaly, we use three degrees of behavioral change to tune the difficulty of detecting the anomalous agents. To select agents to inject anomalous behavior into, we employ three methods: (1) random selection using a centralized manipulation mechanism, (2) spread-based selection using an infectious disease model, and (3) exposure of agents to a specific location. All datasets are split into normal and anomalous phases. The normal phase, which can be used for training models of normalcy, exhibits no anomalous behavior. The anomalous phase, which can be used for testing anomaly detection algorithms, includes ground truth labels that indicate, for each five-minute simulation step, which agents are anomalous at that time. Datasets are generated using the maps (roads and buildings) of Atlanta and Berlin, with 1k agents in each simulation. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 493,985
2208.03117 | A reformulation of collision avoidance algorithm based on artificial
potential fields for fixed-wing UAVs in a dynamic environment | As mini UAVs become increasingly useful in the civilian work domain, the need for a method for them to operate safely in a cluttered environment is growing, especially for fixed-wing UAVs as they are incapable of following the stop-decide-execute methodology. This paper presents preliminary research to design a reactive collision avoidance algorithm based on the improved definition of the repulsive forces used in the Artificial potential field algorithms to allow feasible and safe navigation of fixed-wing UAVs in cluttered, dynamic environments. We present simulation results of the improved definition in multiple scenarios, and we have also discussed possible future studies to improve upon these results. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 311,689 |
2107.13078 | A Payload Optimization Method for Federated Recommender Systems | We introduce the payload optimization method for federated recommender systems (FRS). In federated learning (FL), the global model payload that is moved between the server and users depends on the number of items to recommend. The model payload grows when there is an increasing number of items. This becomes challenging for an FRS if it is running in production mode. To tackle the payload challenge, we formulated a multi-arm bandit solution that selected part of the global model and transmitted it to all users. The selection process was guided by a novel reward function suitable for FL systems. So far as we are aware, this is the first optimization method that seeks to address item-dependent payloads. The method was evaluated using three benchmark recommendation datasets. The empirical validation confirmed that the proposed method outperforms the simpler methods that do not benefit from the bandits for the purpose of item selection. In addition, we have demonstrated the usefulness of our proposed method by rigorously evaluating the effects of a payload reduction on the recommendation performance degradation. Our method achieved up to a 90\% reduction in model payload, yielding only a $\sim$4\% - 8\% loss in the recommendation performance for highly sparse datasets. | false | false | false | false | true | true | true | false | false | false | false | false | false | false | false | false | false | false | 248,089
2205.05123 | Deep fusion of gray level co-occurrence matrices for lung nodule
classification | Lung cancer is a severe menace to human health, due to which millions of people die because of late diagnoses of cancer; thus, it is vital to detect the disease as early as possible. Computerized tomography (CT) scan analysis of the chest is assumed to be one of the efficient solutions for detecting and classifying lung nodules. The necessity of high accuracy in analyzing CT scan images of the lung is considered one of the crucial challenges in detecting and classifying lung cancer. A new long short-term memory (LSTM) based deep fusion structure is introduced, where texture features computed from lung nodules through new volumetric grey-level co-occurrence matrix (GLCM) computations are applied to classify the nodules into benign, malignant and ambiguous. An improved Otsu segmentation method combined with the water strider optimization algorithm (WSA) is proposed to detect the lung nodules. Otsu-WSA thresholding can overcome the restrictions present in previous thresholding methods. Extended experiments are run to assess this fusion structure by considering 2D-GLCM computations-based 2D-slices fusion, and an approximation of the 3D-GLCM with volumetric 2.5D-GLCM computations-based LSTM fusion structure. The proposed methods are trained and assessed on the LIDC-IDRI dataset, where accuracy, sensitivity, and specificity of 94.4%, 91.6%, and 95.8% are obtained, respectively, for 2D-GLCM fusion, and 97.33%, 96%, and 98%, respectively, for 2.5D-GLCM fusion. The corresponding results for the 3D-GLCM fusion are 98.7%, 98%, and 99%. The obtained results and analysis indicate that the WSA-Otsu method requires less execution time and yields a more accurate thresholding process. It is found that the 3D-GLCM based LSTM outperforms its counterparts. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 295,843
2301.08801 | Information Retrieval: Recent Advances and Beyond | In this paper, we provide a detailed overview of the models used for information retrieval in the first and second stages of the typical processing chain. We discuss the current state-of-the-art models, including term-based methods, semantic retrieval, and neural approaches. Additionally, we delve into the key topics related to the learning process of these models. In this way, the survey offers a comprehensive understanding of the field and is of interest for researchers and practitioners entering or working in the information retrieval domain. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 341,293
2404.09532 | TMPQ-DM: Joint Timestep Reduction and Quantization Precision Selection
for Efficient Diffusion Models | Diffusion models have emerged as preeminent contenders in the realm of generative models. Distinguished by their distinctive sequential generative processes, characterized by hundreds or even thousands of timesteps, diffusion models progressively reconstruct images from pure Gaussian noise, with each timestep necessitating full inference of the entire model. However, the substantial computational demands inherent to these models present challenges for deployment; quantization is thus widely used to lower the bit-width, reducing the storage and computing overheads. Current quantization methodologies primarily focus on model-side optimization, disregarding the temporal dimension, such as the length of the timestep sequence, thereby allowing redundant timesteps to continue consuming computational resources and leaving substantial scope for accelerating the generative process. In this paper, we introduce TMPQ-DM, which jointly optimizes timestep reduction and quantization to achieve a superior performance-efficiency trade-off, addressing both temporal and model optimization aspects. For timestep reduction, we devise a non-uniform grouping scheme tailored to the non-uniform nature of the denoising process, thereby mitigating the explosive combinations of timesteps. In terms of quantization, we adopt a fine-grained layer-wise approach to allocate varying bit-widths to different layers based on their respective contributions to the final generative performance, thus rectifying performance degradation observed in prior studies. To expedite the evaluation of fine-grained quantization, we further devise a super-network to serve as a precision solver by leveraging shared quantization results. These two design components are seamlessly integrated within our framework, enabling rapid joint exploration of the exponentially large decision space via a gradient-free evolutionary search algorithm. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 446,722
2309.00135 | Construction Grammar and Artificial Intelligence | In this chapter, we argue that it is highly beneficial for the contemporary construction grammarian to have a thorough understanding of the strong relationship between the research fields of construction grammar and artificial intelligence. We start by unravelling the historical links between the two fields, showing that their relationship is rooted in a common attitude towards human communication and language. We then discuss the first direction of influence, focussing in particular on how insights and techniques from the field of artificial intelligence play an important role in operationalising, validating and scaling constructionist approaches to language. We then proceed to the second direction of influence, highlighting the relevance of construction grammar insights and analyses to the artificial intelligence endeavour of building truly intelligent agents. We support our case with a variety of illustrative examples and conclude that the further elaboration of this relationship will play a key role in shaping the future of the field of construction grammar. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 389,203 |
2211.11350 | Rooms with Text: A Dataset for Overlaying Text Detection | In this paper, we introduce a new dataset of room interior pictures with overlaying and scene text, totalling 4836 annotated images in 25 product categories. We provide details on the collection and annotation process of our dataset, and analyze its statistics. Furthermore, we propose a baseline method for overlaying text detection that leverages the character region-aware text detection framework to guide the classification model. We validate our approach and show its efficiency in terms of binary classification metrics, reaching a final performance of 0.95 F1 score, with false positive and false negative rates of 0.02 and 0.06, respectively. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 331,694
1002.0097 | On the Construction of Prefix-Free and Fix-Free Codes with Specified
Codeword Compositions | We investigate the construction of prefix-free and fix-free codes with specified codeword compositions. We present a polynomial time algorithm which constructs a fix-free code with the same codeword compositions as a given code for a special class of codes called distinct codes. We consider the construction of optimal fix-free codes which minimize the average codeword cost for general letter costs with uniform distribution of the codewords, and present an approximation algorithm to find a near-optimal fix-free code with a given constant cost. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 5,566
2406.01145 | Explore then Determine: A GNN-LLM Synergy Framework for Reasoning over
Knowledge Graph | The task of reasoning over Knowledge Graphs (KGs) poses a significant challenge for Large Language Models (LLMs) due to the complex structure and large amounts of irrelevant information. Existing LLM reasoning methods overlook the importance of compositional learning on KGs to supply precise knowledge. Besides, the fine-tuning and frequent interaction with LLMs incur substantial time and resource costs. This paper focuses on the Question Answering over Knowledge Graph (KGQA) task and proposes an Explore-then-Determine (EtD) framework that synergizes LLMs with graph neural networks (GNNs) for reasoning over KGs. The Explore stage employs a lightweight GNN to explore promising candidates and fine-grained knowledge relevant to the questions, while the Determine stage utilizes the explored information to construct a knowledge-enhanced multiple-choice prompt, guiding a frozen LLM to determine the final answer. Extensive experiments on three benchmark KGQA datasets demonstrate that EtD achieves state-of-the-art performance and generates faithful reasoning results. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 460,189 |
1107.3765 | Using Variational Inference and MapReduce to Scale Topic Modeling | Latent Dirichlet Allocation (LDA) is a popular topic modeling technique for exploring document collections. Because of the increasing prevalence of large datasets, there is a need to improve the scalability of inference of LDA. In this paper, we propose a technique called \emph{MapReduce LDA} (Mr. LDA) to accommodate very large corpus collections in the MapReduce framework. In contrast to other techniques to scale inference for LDA, which use Gibbs sampling, we use variational inference. Our solution efficiently distributes computation and is relatively simple to implement. More importantly, this variational implementation, unlike highly tuned and specialized implementations, is easily extensible. We demonstrate two extensions of the model possible with this scalable framework: informed priors to guide topic discovery and modeling topics from a multilingual corpus. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 11,357 |
2008.01681 | SoloGAN: Multi-domain Multimodal Unpaired Image-to-Image Translation via
a Single Generative Adversarial Network | Despite significant advances in image-to-image (I2I) translation with generative adversarial networks (GANs), it remains challenging to effectively translate an image to a set of diverse images in multiple target domains using a single pair of generator and discriminator. Existing I2I translation methods adopt multiple domain-specific content encoders for different domains, where each domain-specific content encoder is trained with images from the same domain only. Nevertheless, we argue that the content (domain-invariance) features should be learned from images among all of the domains. Consequently, each domain-specific content encoder of existing schemes fails to extract the domain-invariant features efficiently. To address this issue, we present a flexible and general SoloGAN model for efficient multimodal I2I translation among multiple domains with unpaired data. In contrast to existing methods, the SoloGAN algorithm uses a single projection discriminator with an additional auxiliary classifier and shares the encoder and generator for all domains. Consequently, the SoloGAN can be trained effectively with images from all domains such that the domain-invariant content representation can be efficiently extracted. Qualitative and quantitative results over a wide range of datasets against several counterparts and variants of the SoloGAN demonstrate the merits of the method, especially for challenging I2I translation datasets, i.e., datasets involving extreme shape variations or a need to keep the complex backgrounds unchanged after translations. Furthermore, we demonstrate the contribution of each component in SoloGAN by ablation studies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 190,409 |
2411.11681 | PSPO*: An Effective Process-supervised Policy Optimization for Reasoning
Alignment | Process supervision enhances the performance of large language models in reasoning tasks by providing feedback at each step of chain-of-thought reasoning. However, due to the lack of effective process supervision methods, even advanced large language models are prone to logical errors and redundant reasoning. We claim that the effectiveness of process supervision significantly depends on both the accuracy and the length of reasoning chains. Moreover, we identify that these factors exhibit a nonlinear relationship with the overall reward score of the reasoning process. Inspired by these insights, we propose a novel process supervision paradigm, PSPO*, which systematically outlines the workflow from reward model training to policy optimization, and highlights the importance of nonlinear rewards in process supervision. Based on PSPO*, we develop the PSPO-WRS, which considers the number of reasoning steps in determining reward scores and utilizes an adjusted Weibull distribution for nonlinear reward shaping. Experimental results on six mathematical reasoning datasets demonstrate that PSPO-WRS consistently outperforms current mainstream models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 509,135 |
1302.4957 | Learning Bayesian Networks: A Unification for Discrete and Gaussian
Domains | We examine Bayesian methods for learning Bayesian networks from a combination of prior knowledge and statistical data. In particular, we unify the approaches we presented at last year's conference for discrete and Gaussian domains. We derive a general Bayesian scoring metric, appropriate for both domains. We then use this metric in combination with well-known statistical facts about the Dirichlet and normal--Wishart distributions to derive our metrics for discrete and Gaussian domains. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 22,231 |
2202.10944 | Convex Surrogate Loss Functions for Contextual Pricing with Transaction
Data | We study an off-policy contextual pricing problem where the seller has access to samples of prices that customers were previously offered, whether they purchased at that price, and auxiliary features describing the customer and/or item being sold. This is in contrast to the well-studied setting in which samples of the customer's valuation (willingness to pay) are observed. In our setting, the observed data is influenced by the previous pricing policy, and we do not know how customers would have responded to alternative prices. We introduce suitable loss functions for this setting that can be directly optimized to find an effective pricing policy with expected revenue guarantees, without the need for estimation of an intermediate demand function. We focus on convex loss functions. This is particularly relevant when linear pricing policies are desired for interpretability reasons, resulting in a tractable convex revenue optimization problem. We propose generalized hinge and quantile pricing loss functions that price at a multiplicative factor of the conditional expected valuation or a particular quantile of the prices that sold, despite the valuation data not being observed. We prove expected revenue bounds for these pricing policies respectively when the valuation distribution is log-concave, and we provide generalization bounds for the finite sample case. Finally, we conduct simulations on both synthetic and real-world data to demonstrate that this approach is competitive with, and in some settings outperforms, state-of-the-art methods in contextual pricing. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 281,719 |
2501.19077 | Temperature-Annealed Boltzmann Generators | Efficient sampling of unnormalized probability densities such as the Boltzmann distribution of molecular systems is a longstanding challenge. Next to conventional approaches like molecular dynamics or Markov chain Monte Carlo, variational approaches, such as training normalizing flows with the reverse Kullback-Leibler divergence, have been introduced. However, such methods are prone to mode collapse and often do not learn to sample the full configurational space. Here, we present temperature-annealed Boltzmann generators (TA-BG) to address this challenge. First, we demonstrate that training a normalizing flow with the reverse Kullback-Leibler divergence at high temperatures is possible without mode collapse. Furthermore, we introduce a reweighting-based training objective to anneal the distribution to lower target temperatures. We apply this methodology to three molecular systems of increasing complexity and, compared to the baseline, achieve better results in almost all metrics while requiring up to three times fewer target energy evaluations. For the largest system, our approach is the only method that accurately resolves the metastable states of the system. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 528,998 |
1706.04473 | Idea density for predicting Alzheimer's disease from transcribed speech | Idea Density (ID) measures the rate at which ideas or elementary predications are expressed in an utterance or in a text. Lower ID is found to be associated with an increased risk of developing Alzheimer's disease (AD) (Snowdon et al., 1996; Engelman et al., 2010). ID has been used in two different versions: propositional idea density (PID) counts the expressed ideas and can be applied to any text while semantic idea density (SID) counts pre-defined information content units and is naturally more applicable to normative domains, such as picture description tasks. In this paper, we develop DEPID, a novel dependency-based method for computing PID, and its version DEPID-R that enables excluding repeating ideas---a feature characteristic of AD speech. We conduct the first comparison of automatically extracted PID and SID in the diagnostic classification task on two different AD datasets covering both closed-topic and free-recall domains. While SID performs better on the normative dataset, adding PID leads to a small but significant improvement (+1.7 F-score). On the free-topic dataset, PID performs better than SID as expected (77.6 vs 72.3 in F-score) but adding the features derived from the word embedding clustering underlying the automatic SID increases the results considerably, leading to an F-score of 84.8. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 75,348 |
2410.19932 | Tracking and triangulating firefly flashes in field recordings | Identifying firefly flashes from other bright features in nature images is complicated. I provide a training dataset and trained neural networks for reliable flash classification. The training set consists of thousands of cropped images (patches) extracted by manual labeling from video recordings of fireflies in their natural habitat. The trained network appears considerably more reliable at differentiating flashes from other sources of light compared to traditional methods relying solely on intensity thresholding. This robust tracking enables a new calibration-free method for the 3D reconstruction of flash occurrences from stereoscopic 360-degree videos, which I also present here. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 502,574 |
2011.03883 | Swarm Formation Morphing for Congestion Aware Collision Avoidance | The focus of this work is to present a novel methodology for optimal distribution of a swarm formation on either side of an obstacle, when evading the obstacle, to avoid overpopulation on the sides to reduce the agents' waiting delays, resulting in a reduced overall mission time and lower energy consumption. To handle this, the problem is divided into two main parts: 1) the disturbance phase: how to morph the formation optimally to avoid the obstacle in the least possible time in the situation at hand, and 2) the convergence phase: how to optimally resume the intended formation shape once the threat of potential collision has been eliminated. For the first problem, we develop a methodology which tests different formation morphing combinations and finds the optimal one, by utilizing trajectory, velocity, and coordinate information, to bypass the obstacle. For the second problem, we utilize a thin-plate splines (TPS) inspired temperature function minimization method to bring the agents back from the distorted formation into the desired formation in an optimal manner, after collision avoidance has been successfully performed. Experimental results show that, in the considered test scenario, the traditional method based on the shortest path results in 14.7% higher energy consumption as compared to our proposed approach. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 205,384 |
2006.05697 | Meta Transition Adaptation for Robust Deep Learning with Noisy Labels | To discover intrinsic inter-class transition probabilities underlying data, learning with noise transition has become an important approach for robust deep learning on corrupted labels. Prior methods attempt to achieve such transition knowledge by pre-assuming strongly confident anchor points with 1-probability belonging to a specific class, generally infeasible in practice, or directly jointly estimating the transition matrix and learning the classifier from the noisy samples, always leading to inaccurate estimation misguided by wrong annotation information especially in large noise cases. To alleviate these issues, this study proposes a new meta-transition-learning strategy for the task. Specifically, through the sound guidance of a small set of meta data with clean labels, the noise transition matrix and the classifier parameters can be mutually ameliorated to avoid being trapped by noisy training samples, and without need of any anchor point assumptions. Besides, we prove our method is with statistical consistency guarantee on correctly estimating the desired transition matrix. Extensive synthetic and real experiments validate that our method can more accurately extract the transition matrix, naturally following its more robust performance than prior arts. Its essential relationship with label distribution learning is also discussed, which explains its fine performance even under no-noise scenarios. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,171 |
1205.6961 | Tighter Worst-Case Bounds on Algebraic Gossip | Gossip and in particular network coded algebraic gossip have recently attracted attention as a fast, bandwidth-efficient, reliable and distributed way to broadcast or multicast multiple messages. While the algorithms are simple, involved queuing approaches are used to study their performance. The most recent result in this direction shows that uniform algebraic gossip disseminates k messages in O({\Delta}(D + k + log n)) rounds where D is the diameter, n the size of the network and {\Delta} the maximum degree. In this paper we give a simpler, short and self-contained proof for this worst-case guarantee. Our approach also allows to reduce the quadratic {\Delta}D term to min{3n, {\Delta}D}. We furthermore show that a simple round robin routing scheme also achieves min{3n, {\Delta}D} + {\Delta}k rounds, eliminating both randomization and coding. Lastly, we combine a recent non-uniform gossip algorithm with a simple routing scheme to get a O(D + k + log^{O(1)}) gossip information dissemination algorithm. This is order optimal as long as D and k are not both polylogarithmically small. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 16,263 |
2206.02032 | A Neural Network Approach for Homogenization of Multiscale Problems | We propose a neural network-based approach to the homogenization of multiscale problems. The proposed method uses a derivative-free formulation of a training loss, which incorporates Brownian walkers to find the macroscopic description of a multiscale PDE solution. Compared with other network-based approaches for multiscale problems, the proposed method is free from the design of hand-crafted neural network architecture and the cell problem to calculate the homogenization coefficient. The exploration neighborhood of the Brownian walkers affects the overall learning trajectory. We determine the bounds of micro- and macro-time steps that capture the local heterogeneous and global homogeneous solution behaviors, respectively, through a neural network. The bounds imply that the computational cost of the proposed method is independent of the microscale periodic structure for the standard periodic problems. We validate the efficiency and robustness of the proposed method through a suite of linear and nonlinear multiscale problems with periodic and random field coefficients. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 300,714 |
2203.04530 | Design of Detectors at the Electron Ion Collider with Artificial
Intelligence | Artificial Intelligence (AI) for design is a relatively new but active area of research across many disciplines. Surprisingly, when it comes to designing detectors with AI, this is an area in its infancy. The Electron Ion Collider is the ultimate machine to study the strong force. The EIC is a large-scale experiment with an integrated detector that extends for about $\pm$35 meters to include the central, far-forward, and far-backward regions. The design of the central detector is made up of multiple sub-detectors, each in principle characterized by a multidimensional design space and multiple design criteria also called objectives. Simulations with Geant4 are typically compute intensive, and the optimization of the detector design may include non-differentiable terms as well as noisy objectives. In this context, AI can offer state of the art solutions to solve complex combinatorial problems in an efficient way. In particular, one of the proto-collaborations, ECCE, has explored during the detector proposal the possibility of using multi-objective optimization to design the tracking system of the EIC detector. This document provides an overview of these techniques and recent progress made during the EIC detector proposal. Future high energy nuclear physics experiments can leverage AI-based strategies to design more efficient detectors by optimizing their performance driven by physics criteria and minimizing costs for their realization. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 284,501 |
2405.11537 | VR-GPT: Visual Language Model for Intelligent Virtual Reality
Applications | The advent of immersive Virtual Reality applications has transformed various domains, yet their integration with advanced artificial intelligence technologies like Visual Language Models remains underexplored. This study introduces a pioneering approach utilizing VLMs within VR environments to enhance user interaction and task efficiency. Leveraging the Unity engine and a custom-developed VLM, our system facilitates real-time, intuitive user interactions through natural language processing, without relying on visual text instructions. The incorporation of speech-to-text and text-to-speech technologies allows for seamless communication between the user and the VLM, enabling the system to guide users through complex tasks effectively. Preliminary experimental results indicate that utilizing VLMs not only reduces task completion times but also improves user comfort and task engagement compared to traditional VR interaction methods. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | true | 455,186 |
2407.12293 | Multi evolutional deep neural networks (Multi-EDNN) | Evolutional deep neural networks (EDNN) solve partial differential equations (PDEs) by marching the network representation of the solution fields, using the governing equations. Use of a single network to solve coupled PDEs on large domains requires a large number of network parameters and incurs a significant computational cost. We introduce coupled EDNN (C-EDNN) to solve systems of PDEs by using independent networks for each state variable, which are only coupled through the governing equations. We also introduce distributed EDNN (D-EDNN) by spatially partitioning the global domain into several elements and assigning individual EDNNs to each element to solve the local evolution of the PDE. The networks then exchange the solution and fluxes at their interfaces, similar to flux-reconstruction methods, and ensure that the PDE dynamics are accurately preserved between neighboring elements. Together C-EDNN and D-EDNN form the general class of Multi-EDNN methods. We demonstrate these methods with aid of canonical problems including linear advection, the heat equation, and the compressible Navier-Stokes equations in Couette and Taylor-Green flows. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 473,858 |
2411.10032 | VMID: A Multimodal Fusion LLM Framework for Detecting and Identifying
Misinformation of Short Videos | Short video platforms have become important channels for news dissemination, offering a highly engaging and immediate way for users to access current events and share information. However, these platforms have also emerged as significant conduits for the rapid spread of misinformation, as fake news and rumors can leverage the visual appeal and wide reach of short videos to circulate extensively among audiences. Existing fake news detection methods mainly rely on single-modal information, such as text or images, or apply only basic fusion techniques, limiting their ability to handle the complex, multi-layered information inherent in short videos. To address these limitations, this paper presents a novel fake news detection method based on multimodal information, designed to identify misinformation through a multi-level analysis of video content. This approach effectively utilizes different modal representations to generate a unified textual description, which is then fed into a large language model for comprehensive evaluation. The proposed framework successfully integrates multimodal features within videos, significantly enhancing the accuracy and reliability of fake news detection. Experimental results demonstrate that the proposed approach outperforms existing models in terms of accuracy, robustness, and utilization of multimodal information, achieving an accuracy of 90.93%, which is significantly higher than the best baseline model (SV-FEND) at 81.05%. Furthermore, case studies provide additional evidence of the effectiveness of the approach in accurately distinguishing between fake news, debunking content, and real incidents, highlighting its reliability and robustness in real-world applications. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 508,469 |
1612.05688 | A User Simulator for Task-Completion Dialogues | Despite widespread interests in reinforcement-learning for task-oriented dialogue systems, several obstacles can frustrate research and development progress. First, reinforcement learners typically require interaction with the environment, so conventional dialogue corpora cannot be used directly. Second, each task presents specific challenges, requiring separate corpus of task-specific annotated data. Third, collecting and annotating human-machine or human-human conversations for task-oriented dialogues requires extensive domain knowledge. Because building an appropriate dataset can be both financially costly and time-consuming, one popular approach is to build a user simulator based upon a corpus of example dialogues. Then, one can train reinforcement learning agents in an online fashion as they interact with the simulator. Dialogue agents trained on these simulators can serve as an effective starting point. Once agents master the simulator, they may be deployed in a real environment to interact with humans, and continue to be trained online. To ease empirical algorithmic comparisons in dialogues, this paper introduces a new, publicly available simulation framework, where our simulator, designed for the movie-booking domain, leverages both rules and collected data. The simulator supports two tasks: movie ticket booking and movie seeking. Finally, we demonstrate several agents and detail the procedure to add and test your own agent in the proposed framework. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 65,711 |
2212.14246 | Robust representations of oil wells' intervals via sparse attention
mechanism | Transformer-based neural network architectures achieve state-of-the-art results in different domains, from natural language processing (NLP) to computer vision (CV). The key idea of Transformers, the attention mechanism, has already led to significant breakthroughs in many areas. The attention mechanism has also been applied to time series data. However, due to the quadratic complexity of the attention calculation regarding input sequence length, the application of Transformers is limited by high resource demands. Moreover, their modifications for industrial time series need to be robust to missing or noisy values, which complicates the expansion of the horizon of their application. To cope with these issues, we introduce the class of efficient Transformers named Regularized Transformers (Reguformers). We implement the regularization technique inspired by the dropout ideas to improve robustness and reduce computational expenses. The focus in our experiments is on oil&gas data, namely, well logs, a prominent example of multivariate time series. The goal is to solve the problems of similarity and representation learning for them. To evaluate our models for such problems, we work with an industry-scale open dataset consisting of well logs of more than 20 wells. The experiments show that all variations of Reguformers outperform the previously developed RNNs, the classical Transformer model, and robust modifications of it like Informer and Performer in terms of well-intervals' classification and the quality of the obtained well-intervals' representations. Moreover, the robustness to missing and incorrect data in our models exceeds that of others by a significant margin. The best result that the Reguformer achieves on the well-interval similarity task is a mean PR AUC score of 0.983, which is comparable to the classical Transformer and outperforms the previous models. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 338,551 |
2404.12487 | Advancing Applications of Satellite Photogrammetry: Novel Approaches for
Built-up Area Modeling and Natural Environment Monitoring using
Stereo/Multi-view Satellite Image-derived 3D Data | With the development of remote sensing technology in recent decades, spaceborne sensors with sub-meter and meter spatial resolution (Worldview and PlanetScope) have achieved a considerable image quality to generate 3D geospatial data via a stereo matching pipeline. These achievements have significantly increased the data accessibility in 3D, necessitating adapting these 3D geospatial data to analyze human and natural environments. This dissertation explores several novel approaches based on stereo and multi-view satellite image-derived 3D geospatial data, to deal with remote sensing application issues for built-up area modeling and natural environment monitoring, including building model 3D reconstruction, glacier dynamics tracking, and lake algae monitoring. Specifically, the dissertation introduces four parts of novel approaches that deal with the spatial and temporal challenges with satellite-derived 3D data. The first study advances LoD-2 building modeling from satellite-derived Orthophoto and DSMs with a novel approach employing a model-driven workflow that generates building rectangular 3D geometry models. Secondly, we further enhanced our building reconstruction framework for dense urban areas and non-rectangular purposes: we implemented deep learning for unit-level segmentation and introduced a gradient-based circle reconstruction for circular buildings to develop a polygon composition technique for advanced building LoD2 reconstruction. Our third study utilizes high-spatiotemporal resolution PlanetScope satellite imagery for glacier tracking at 3D level in mid-latitude regions. Finally, we proposed the term "Algal Behavior Function" to refine the quantification of chlorophyll-a concentrations from satellite imagery in water quality monitoring, addressing algae fluctuations and timing discrepancies between satellite observations and field measurements, thus enhancing the precision of underwater algae volume estimates. Overall, this dissertation demonstrates the extensive potential of satellite photogrammetry applications in addressing urban and environmental challenges. It further showcases innovative analytical methodologies that enhance the applicability of adapting stereo and multi-view very high-resolution satellite-derived 3D data. (See full abstract in the document) | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 447,905 |
1602.04052 | Image Restoration and Reconstruction using Variable Splitting and
Class-adapted Image Priors | This paper proposes using a Gaussian mixture model as a prior for solving two image inverse problems, namely image deblurring and compressive imaging. We capitalize on the fact that variable splitting algorithms, like ADMM, are able to decouple the handling of the observation operator from that of the regularizer, and plug a state-of-the-art algorithm into the pure denoising step. Furthermore, we show that, when applied to a specific type of image, a Gaussian mixture model trained on a database of images of the same type is able to outperform current state-of-the-art methods. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 52,082 |
2402.09390 | HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context
Learning in Factuality Evaluation | With the widespread adoption of large language models (LLMs) in numerous applications, the challenge of factuality and the propensity for hallucinations has emerged as a significant concern. To address this issue, particularly in retrieval-augmented in-context learning, we introduce the hierarchical graph of thoughts (HGOT), a structured, multi-layered graph approach designed to enhance the retrieval of pertinent passages during in-context learning. The framework utilizes the emergent planning capabilities of LLMs, employing the divide-and-conquer strategy to break down complex queries into manageable sub-queries. It refines self-consistency majority voting for answer selection, which incorporates the recently proposed citation recall and precision metrics to assess the quality of thoughts, linking an answer's credibility intrinsically to the thought's quality. This methodology introduces a weighted system in majority voting, prioritizing answers based on the citation quality of their thoughts. Additionally, we propose a scoring mechanism for evaluating retrieved passages, considering factors such as citation frequency and quality, self-consistency confidence, and the retrieval module's ranking. Experiments indicate that HGOT excels as a versatile approach, outperforming competing models in FEVER by up to $7\%$ and matching leading models such as Retrieve-then-Read in Open-SQuAD, and DSP in HotPotQA, demonstrating its efficacy in enhancing LLMs' factuality. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 429,498 |
2107.02378 | Learning an Explicit Hyperparameter Prediction Function Conditioned on
Tasks | Meta learning has recently attracted much attention in the machine learning community. In contrast to conventional machine learning, which aims to learn inherent prediction rules for labeling new query data, meta learning aims to learn the learning methodology itself from observed tasks, so as to generalize to new query tasks by leveraging the meta-learned methodology. In this study, we interpret this learning methodology as an explicit hyper-parameter prediction function shared by all training tasks. Specifically, this function is represented as a parameterized mapping, called the meta-learner, from a training/test task to its suitable hyper-parameter setting, drawn from a pre-specified function set called the meta learning machine. This setting guarantees that the meta-learned methodology can flexibly fit diverse query tasks, in contrast to many current meta learning methods that only obtain fixed hyper-parameters and thus adapt poorly to variations across query tasks. This understanding of meta learning also allows it to inherit tools from traditional learning theory for analyzing its generalization bounds under general losses/tasks/models. The theory naturally leads to feasible controlling strategies for improving the quality of the extracted meta-learner, which are verified to improve its generalization capability in several typical meta learning applications, including few-shot regression, few-shot classification, and domain generalization. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 244,790
2012.06110 | I-GCN: Robust Graph Convolutional Network via Influence Mechanism | Deep learning models for graphs, especially Graph Convolutional Networks (GCNs), have achieved remarkable performance in the task of semi-supervised node classification. However, recent studies show that GCNs suffer from adversarial perturbations. Such vulnerability to adversarial attacks significantly decreases the stability of GCNs when applied to security-critical applications. Defense methods such as preprocessing, attention mechanisms, and adversarial training have been discussed by various studies. While able to achieve desirable performance when the perturbation rates are low, such methods are still vulnerable to high perturbation rates. Meanwhile, some defense algorithms perform poorly when the node features are not visible. Therefore, in this paper, we propose a novel mechanism, called the influence mechanism, which is able to enhance the robustness of GCNs significantly. The influence mechanism divides the effect of each node into two parts: introverted influence, which tries to maintain the node's own features, and extroverted influence, which exerts influence on other nodes. Utilizing the influence mechanism, we propose the Influence GCN (I-GCN) model. Extensive experiments show that our proposed model is able to achieve higher accuracy rates than state-of-the-art methods when defending against non-targeted attacks. | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 210,997
1709.08226 | A novel recommendation system to match college events and groups to
students | With the recent increase in data online, discovering meaningful opportunities can be time-consuming and complicated for many individuals. To overcome this data overload challenge, we present a novel text-content-based recommender system as a valuable tool to predict user interests. To that end, we develop a specific procedure to create user models and item feature-vectors, where items are described in free text. The user model is generated by soliciting from a user a few keywords and expanding those keywords into a list of weighted near-synonyms. The item feature-vectors are generated from the textual descriptions of the items, using modified tf-idf values of the users' keywords and their near-synonyms. Once the users are modeled and the items are abstracted into feature vectors, the system returns the maximum-similarity items as recommendations to that user. Our experimental evaluation shows that our method of creating the user models and item feature-vectors resulted in higher precision and accuracy in comparison to well-known feature-vector-generating methods like GloVe and Word2Vec. It also shows that stemming and the use of a modified version of tf-idf increase the accuracy and precision by 2% and 3%, respectively, compared to non-stemming and the standard tf-idf definition. Moreover, the evaluation results show that updating the user model from usage histories improves the precision and accuracy of the system. This recommender system has been developed as part of the Agnes application, which runs on iOS and Android platforms and is accessible through the Agnes website. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 81,434
1605.02065 | Concentrated Differential Privacy: Simplifications, Extensions, and
Lower Bounds | "Concentrated differential privacy" was recently introduced by Dwork and Rothblum as a relaxation of differential privacy, which permits sharper analyses of many privacy-preserving computations. We present an alternative formulation of the concept of concentrated differential privacy in terms of the Renyi divergence between the distributions obtained by running an algorithm on neighboring inputs. With this reformulation in hand, we prove sharper quantitative results, establish lower bounds, and raise a few new questions. We also unify this approach with approximate differential privacy by giving an appropriate definition of "approximate concentrated differential privacy." | false | false | false | false | false | false | true | false | false | true | false | false | true | false | false | false | false | true | 55,569 |
2105.12713 | Spatio-Contextual Deep Network Based Multimodal Pedestrian Detection For
Autonomous Driving | Pedestrian detection is the most critical module of an autonomous driving system. Although a camera is commonly used for this purpose, its quality degrades severely in low-light night-time driving scenarios. On the other hand, the quality of a thermal camera image remains unaffected in similar conditions. This paper proposes an end-to-end multimodal fusion model for pedestrian detection using RGB and thermal images. Its novel spatio-contextual deep network architecture is capable of exploiting the multimodal input efficiently. It consists of two distinct deformable ResNeXt-50 encoders for feature extraction from the two modalities. Fusion of these two encoded features takes place inside a multimodal feature embedding module (MuFEm) consisting of several groups, each pairing a Graph Attention Network with a feature fusion unit. The output of the last feature fusion unit of MuFEm is subsequently passed to two CRFs for spatial refinement. Further enhancement of the features is achieved by applying channel-wise attention and extraction of contextual information with the help of four RNNs traversing in four different directions. Finally, these feature maps are used by a single-stage decoder to generate the bounding box of each pedestrian and the score map. We have performed extensive experiments with the proposed framework on three publicly available multimodal pedestrian detection benchmark datasets, namely KAIST, CVC-14, and UTokyo. The results improved the respective state-of-the-art performance on each of them. A short video giving an overview of this work along with its qualitative results can be seen at https://youtu.be/FDJdSifuuCs. Our source code will be released upon publication of the paper. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 237,079
2101.10266 | COVID-19 Outbreak Prediction and Analysis using Self Reported Symptoms | It is crucial for policymakers to understand the community prevalence of COVID-19 so combative resources can be effectively allocated and prioritized during the COVID-19 pandemic. Traditionally, community prevalence has been assessed through diagnostic and antibody testing data. However, despite the increasing availability of COVID-19 testing, the required level has not been met in most parts of the globe, introducing a need for an alternative method for communities to determine disease prevalence. This is further complicated by the observation that COVID-19 prevalence and spread vary across different spatial, temporal, and demographic dimensions. In this study, we examine trends in the spread of COVID-19 by utilizing the results of self-reported COVID-19 symptom surveys as an alternative to COVID-19 testing reports. This allows us to assess community disease prevalence, even in areas with low COVID-19 testing ability. Using individually reported symptom data from various populations, our method predicts the likely percentage of the population that tested positive for COVID-19. We do so with a Mean Absolute Error (MAE) of 1.14 and Mean Relative Error (MRE) of 60.40\% with a 95\% confidence interval of (60.12, 60.67). This implies that our model's predictions are within +/- 1140 cases of the reported count in a population of 1 million. In addition, we forecast the location-wise percentage of the population testing positive for the next 30 days using self-reported symptoms data from previous days. The MAE for this method is as low as 0.15 (MRE of 23.61\% with a 95\% confidence interval of (23.6, 13.7)) for New York. We present an analysis of these results, exposing various clinical attributes of interest across different demographics. Lastly, we qualitatively analyze how various policy enactments (testing, curfew) affect the prevalence of COVID-19 in a community. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 216,879
2406.14197 | On the Representational Capacity of Neural Language Models with
Chain-of-Thought Reasoning | The performance of modern language models (LMs) has been improved by chain-of-thought (CoT) reasoning, i.e., the process of generating intermediate results that guide the model towards a final answer. A possible explanation for this improvement is that CoT reasoning extends an LM's computational power, as RNNs and transformers with additional scratch space are known to be Turing complete. Comparing LMs to Turing machines, however, introduces a category error - Turing machines decide language membership, whereas LMs define distributions over strings. To bridge this gap, we formalize CoT reasoning in a probabilistic setting. We present several results on the representational capacity of recurrent and transformer LMs with CoT reasoning, showing that they can represent the same family of distributions over strings as probabilistic Turing machines. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 466,202 |
2303.17361 | Invertible Convolution with Symmetric Paddings | We show that symmetrically padded convolution can be analytically inverted via DFT. We comprehensively analyze several different symmetric and anti-symmetric padding modes and show that multiple cases exist where the inversion can be achieved. The implementation is available at \url{https://github.com/prclibo/iconv_dft}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 355,183 |
2406.03662 | The Missing Curve Detectors of InceptionV1: Applying Sparse Autoencoders
to InceptionV1 Early Vision | Recent work on sparse autoencoders (SAEs) has shown promise in extracting interpretable features from neural networks and addressing challenges with polysemantic neurons caused by superposition. In this paper, we apply SAEs to the early vision layers of InceptionV1, a well-studied convolutional neural network, with a focus on curve detectors. Our results demonstrate that SAEs can uncover new interpretable features not apparent from examining individual neurons, including additional curve detectors that fill in previous gaps. We also find that SAEs can decompose some polysemantic neurons into more monosemantic constituent features. These findings suggest SAEs are a valuable tool for understanding InceptionV1, and convolutional neural networks more generally. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 461,322 |
2402.12913 | OPDAI at SemEval-2024 Task 6: Small LLMs can Accelerate Hallucination
Detection with Weakly Supervised Data | This paper mainly describes a unified system for hallucination detection with LLMs, which wins the second prize in the model-agnostic track of the SemEval-2024 Task 6, and also achieves considerable results in the model-aware track. This task aims to detect hallucination with LLMs for three different text-generation tasks without labeled training data. We utilize prompt engineering and few-shot learning to verify the performance of different LLMs on the validation data. Then we select the LLMs with better performance to generate high-quality weakly supervised training data, which not only satisfies the consistency of different LLMs, but also satisfies the consistency of the optimal LLM with different sampling parameters. Furthermore, we finetune different LLMs by using the constructed training data, and find that a relatively small LLM can achieve a competitive level of performance in hallucination detection, when compared to the large LLMs and the prompt-based approaches using GPT-4. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 431,042
2102.09249 | Composable Generative Models | Generative modeling has recently seen many exciting developments with the advent of deep generative architectures such as Variational Auto-Encoders (VAE) or Generative Adversarial Networks (GAN). The ability to draw synthetic i.i.d. observations with the same joint probability distribution as a given dataset has a wide range of applications including representation learning, compression or imputation. It appears that it also has many applications in privacy preserving data analysis, especially when used in conjunction with differential privacy techniques. This paper focuses on synthetic data generation models with privacy preserving applications in mind. It introduces a novel architecture, the Composable Generative Model (CGM) that is state-of-the-art in tabular data generation. Any conditional generative model can be used as a sub-component of the CGM, including CGMs themselves, allowing the generation of numerical, categorical data as well as images, text, or time series. The CGM has been evaluated on 13 datasets (6 standard datasets and 7 simulated) and compared to 14 recent generative models. It beats the state of the art in tabular data generation by a significant margin. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 220,712 |
2207.03674 | Learning High-quality Proposals for Acne Detection | Acne detection is crucial for interpretative diagnosis and precise treatment of skin disease. The arbitrary boundary and small size of acne lesions lead to a significant number of poor-quality proposals in two-stage detection. In this paper, we propose a novel head structure for the Region Proposal Network to improve the proposals' quality in two ways. First, a Spatial Aware Double Head (SADH) structure is proposed to disentangle the representation learning for classification and localization from two different spatial perspectives. The proposed SADH ensures a steeper classification confidence gradient and suppresses the proposals having low intersection-over-union (IoU) with the matched ground truth. Then, we propose a Normalized Wasserstein Distance prediction branch to improve the correlation between the proposals' classification scores and IoUs. In addition, to facilitate further research on acne detection, we construct a new dataset named AcneSCU, with high-resolution imagery, precise annotations, and fine-grained lesion categories. Extensive experiments are conducted on both AcneSCU and the public dataset ACNE04, and the results demonstrate the proposed method could improve the proposals' quality, consistently outperforming state-of-the-art approaches. Code and the collected dataset are available in https://github.com/pingguokiller/acnedetection. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 306,935
2311.11287 | Tactile Active Inference Reinforcement Learning for Efficient Robotic
Manipulation Skill Acquisition | Robotic manipulation holds the potential to replace humans in the execution of tedious or dangerous tasks. However, control-based approaches are not suitable due to the difficulty of formally describing open-world manipulation in reality, and the inefficiency of existing learning methods. Thus, applying manipulation in a wide range of scenarios presents significant challenges. In this study, we propose a novel method for skill learning in robotic manipulation called Tactile Active Inference Reinforcement Learning (Tactile-AIRL), aimed at achieving efficient training. To enhance the performance of reinforcement learning (RL), we introduce active inference, which integrates model-based techniques and intrinsic curiosity into the RL process. This integration improves the algorithm's training efficiency and adaptability to sparse rewards. Additionally, we utilize a vision-based tactile sensor to provide detailed perception for manipulation tasks. Finally, we employ a model-based approach to imagine and plan appropriate actions through free energy minimization. Simulation results demonstrate that our method achieves significantly higher training efficiency in non-prehensile object pushing tasks. It enables agents to excel in both dense and sparse reward tasks with just a few interaction episodes, surpassing the SAC baseline. Furthermore, we conduct physical experiments on a gripper screwing task using our method, which showcases the algorithm's rapid learning capability and its potential for practical applications. | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | 408,890
2305.02404 | Equation-Free Computations as DDDAS Protocols for Bifurcation Studies: A
Granular Chain Example | This chapter discusses the development and implementation of algorithms based on Equation-Free/Dynamic Data Driven Applications Systems (EF/DDDAS) protocols for the computer-assisted study of the bifurcation structure of complex dynamical systems, such as those that arise in biology (neuronal networks, cell populations), multiscale systems in physics, chemistry and engineering, and system modeling in the social sciences. An illustrative example demonstrates the experimental realization of a chain of granular particles (a so-called engineered granular chain). In particular, the focus is on the detection/stability analysis of time-periodic, spatially localized structures referred to as "dark breathers". Results in this chapter highlight, both experimentally and numerically, that the number of breathers can be controlled by varying the frequency as well as the amplitude of an "out of phase" actuation, and that a "snaking" structure in the bifurcation diagram (computed through standard, model-based numerical methods for dynamical systems) is also recovered through the EF/DDDAS methods operating on a black-box simulator. The EF/DDDAS protocols presented here are, therefore, a step towards general purpose protocols for performing detailed bifurcation analyses directly on laboratory experiments, not only on their mathematical models, but also on measured data. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | 362,022 |
2312.11571 | Model Stealing Attack against Recommender System | Recent studies have demonstrated the vulnerability of recommender systems to data privacy attacks. However, research on the threat to model privacy in recommender systems, such as model stealing attacks, is still in its infancy. Some adversarial attacks have achieved model stealing attacks against recommender systems, to some extent, by collecting abundant training data of the target model (target data) or making a mass of queries. In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks. Although the target model treats target and auxiliary data differently, their similar behavior patterns allow them to be fused using an attention mechanism to assist attacks. Besides, we design stealing functions to effectively extract the recommendation list obtained by querying the target model. Experimental results show that the proposed methods are applicable to most recommender systems and various scenarios and exhibit excellent attack performance on multiple datasets. | false | false | false | false | true | false | true | false | false | false | false | false | true | false | false | false | false | false | 416,650 |
2409.18961 | ProMerge: Prompt and Merge for Unsupervised Instance Segmentation | Unsupervised instance segmentation aims to segment distinct object instances in an image without relying on human-labeled data. This field has recently seen significant advancements, partly due to the strong local correspondences afforded by rich visual feature representations from self-supervised models (e.g., DINO). Recent state-of-the-art approaches use self-supervised features to represent images as graphs and solve a generalized eigenvalue system (i.e., normalized-cut) to generate foreground masks. While effective, this strategy is limited by its attendant computational demands, leading to slow inference speeds. In this paper, we propose Prompt and Merge (ProMerge), which leverages self-supervised visual features to obtain initial groupings of patches and applies a strategic merging to these segments, aided by a sophisticated background-based mask pruning technique. ProMerge not only yields competitive results but also offers a significant reduction in inference time compared to state-of-the-art normalized-cut-based approaches. Furthermore, when training an object detector using our mask predictions as pseudo-labels, the resulting detector surpasses the current leading unsupervised model on various challenging instance segmentation benchmarks. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 492,474 |
2303.01859 | POPGym: Benchmarking Partially Observable Reinforcement Learning | Real world applications of Reinforcement Learning (RL) are often partially observable, thus requiring memory. Despite this, partial observability is still largely ignored by contemporary RL benchmarks and libraries. We introduce Partially Observable Process Gym (POPGym), a two-part library containing (1) a diverse collection of 15 partially observable environments, each with multiple difficulties and (2) implementations of 13 memory model baselines -- the most in a single RL library. Existing partially observable benchmarks tend to fixate on 3D visual navigation, which is computationally expensive and only one type of POMDP. In contrast, POPGym environments are diverse, produce smaller observations, use less memory, and often converge within two hours of training on a consumer-grade GPU. We implement our high-level memory API and memory baselines on top of the popular RLlib framework, providing plug-and-play compatibility with various training algorithms, exploration strategies, and distributed training paradigms. Using POPGym, we execute the largest comparison across RL memory models to date. POPGym is available at https://github.com/proroklab/popgym. | false | false | false | false | true | false | true | true | false | false | false | false | false | false | false | false | false | false | 349,135 |
2406.10139 | YOLOv1 to YOLOv10: A comprehensive review of YOLO variants and their
application in the agricultural domain | This survey investigates the transformative potential of various YOLO variants, from YOLOv1 to the state-of-the-art YOLOv10, in the context of agricultural advancements. The primary objective is to elucidate how these cutting-edge object detection models can re-energise and optimize diverse aspects of agriculture, ranging from crop monitoring to livestock management. It aims to achieve key objectives, including the identification of contemporary challenges in agriculture, a detailed assessment of YOLO's incremental advancements, and an exploration of its specific applications in agriculture. This is one of the first surveys to include the latest YOLOv10, offering a fresh perspective on its implications for precision farming and sustainable agricultural practices in the era of Artificial Intelligence and automation. Further, the survey undertakes a critical analysis of YOLO's performance, synthesizes existing research, and projects future trends. By scrutinizing the unique capabilities packed in YOLO variants and their real-world applications, this survey provides valuable insights into the evolving relationship between YOLO variants and agriculture. The findings contribute towards a nuanced understanding of the potential for precision farming and sustainable agricultural practices, marking a significant step forward in the integration of advanced object detection technologies within the agricultural sector. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 464,244 |
1712.06203 | Attenuation correction for brain PET imaging using deep neural network
based on dixon and ZTE MR images | Positron Emission Tomography (PET) is a functional imaging modality widely used in neuroscience studies. To obtain meaningful quantitative results from PET images, attenuation correction is necessary during image reconstruction. For PET/MR hybrid systems, PET attenuation is challenging as Magnetic Resonance (MR) images do not reflect attenuation coefficients directly. To address this issue, we present deep neural network methods to derive the continuous attenuation coefficients for brain PET imaging from MR images. With only Dixon MR images as the network input, the existing U-net structure was adopted, and analysis using forty patient data sets shows it is superior to other Dixon-based methods. When both Dixon and zero echo time (ZTE) images are available, we have proposed a modified U-net structure, named GroupU-net, to efficiently make use of both Dixon and ZTE information through group convolution modules when the network goes deeper. Quantitative analysis based on fourteen real patient data sets demonstrates that both network approaches can perform better than the standard methods, and the proposed network structure can further reduce the PET quantification error compared to the U-net structure. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 86,849
1902.05903 | Sparse Hypergraphs with Applications to Coding Theory | For fixed integers $r\ge 3,e\ge 3,v\ge r+1$, an $r$-uniform hypergraph is called $\mathscr{G}_r(v,e)$-free if the union of any $e$ distinct edges contains at least $v+1$ vertices. Brown, Erd\H{o}s and S\'{o}s showed that the maximum number of edges of such a hypergraph on $n$ vertices, denoted as $f_r(n,v,e)$, satisfies $$\Omega(n^{\frac{er-v}{e-1}})=f_r(n,v,e)=\mathcal{O}(n^{\lceil\frac{er-v}{e-1}\rceil}).$$ For $e-1\mid er-v$, the lower bound matches the upper bound up to a constant factor; whereas for $e-1\nmid er-v$, in general it is a notoriously hard problem to determine the correct exponent of $n$. Among other results, we improve the above lower bound by showing that $$f_r(n,v,e)=\Omega(n^{\frac{er-v}{e-1}}(\log n)^{\frac{1}{e-1}})$$ for any $r,e,v$ satisfying $\gcd(e-1,er-v)=1$. The hypergraph we constructed is in fact $\mathscr{G}_r(ir-\lceil\frac{(i-1)(er-v)}{e-1}\rceil,i)$-free for every $2\le i\le e$, and it has several interesting applications in Coding Theory. The proof of the new lower bound is based on a novel application of the lower bound on the hypergraph independence number due to Duke, Lefmann, and R{\"o}dl. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 121,638 |
2401.13722 | Proactive Emotion Tracker: AI-Driven Continuous Mood and Emotion
Monitoring | This research project aims to tackle the growing mental health challenges in today's digital age. It employs a modified pre-trained BERT model to detect depressive text within social media and users' web browsing data, achieving an impressive 93% test accuracy. Simultaneously, the project aims to incorporate physiological signals from wearable devices, such as smartwatches and EEG sensors, to provide long-term tracking and prognosis of mood disorders and emotional states. This comprehensive approach holds promise for enhancing early detection of depression and advancing overall mental health outcomes. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 423,831 |
cmp-lg/9808012 | Separating Surface Order and Syntactic Relations in a Dependency Grammar | This paper proposes decoupling the dependency tree from word order, such that surface ordering is not determined by traversing the dependency tree. We develop the notion of a \emph{word order domain structure}, which is linked but structurally dissimilar to the syntactic dependency tree. The proposal results in a lexicalized, declarative, and formally precise description of word order; features which previous proposals for dependency grammars lack. Contrary to other lexicalized approaches to word order, our proposal does not require lexical ambiguities for ordering alternatives. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 536,919
1607.06833 | Explicit Polyhedral Bounds on Network Coding Rate Regions via Entropy
Function Region: Algorithms, Symmetry, and Computation | Automating the solutions of multiple network information theory problems, stretching from fundamental concerns such as determining all information inequalities and the limitations of linear codes, to applied ones such as designing coded networks, distributed storage systems, and caching systems, can be posed as polyhedral projections. These problems are demonstrated to exhibit multiple types of polyhedral symmetries. It is shown how these symmetries can be exploited to reduce the complexity of solving these problems through polyhedral projection. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 58,933 |
1909.04331 | Visual Area Coverage with Attitude-Dependent Camera Footprints by Particle Harvesting | In aerial visual area coverage missions, the camera footprint changes over time based on the camera position and orientation -- a fact that complicates the whole process of coverage and path planning. This article proposes a solution to the problem of visual coverage by filling the target area with a set of randomly distributed particles and harvesting them by camera footprints. This way, high coverage is obtained at a low computational cost. In this approach, the path planner considers six degrees of freedom (DoF) for the camera movement and commands thrust and attitude references to a lower layer controller, while maximizing the covered area and coverage quality. The proposed method requires a priori information of the boundaries of the target area and can handle areas of very complex and highly non-convex geometry. The effectiveness of the approach is demonstrated in multiple simulations in terms of computational efficiency and coverage. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 144,776
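The harvesting step in the abstract above reduces, in its simplest form, to removing the particles covered by the current camera footprint. A minimal sketch, assuming an axis-aligned rectangular footprint (in the paper the footprint is the attitude-dependent projection of the camera's field of view, so this rectangle and the `harvest` helper are illustrative stand-ins):

```python
import numpy as np

def harvest(particles, footprint):
    """Remove the particles covered by a camera footprint.

    `particles` is an (N, 2) array of ground positions; `footprint` is a
    simplifying axis-aligned rectangle (x0, y0, x1, y1) standing in for
    the full 6-DoF attitude-dependent footprint.
    Returns the surviving particles and the number harvested.
    """
    x0, y0, x1, y1 = footprint
    inside = ((particles[:, 0] >= x0) & (particles[:, 0] <= x1) &
              (particles[:, 1] >= y0) & (particles[:, 1] <= y1))
    return particles[~inside], int(inside.sum())
```

Coverage progress is then simply the fraction of the initially distributed particles that has been harvested so far.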
2011.09094 | UP-DETR: Unsupervised Pre-training for Object Detection with Transformers | DEtection TRansformer (DETR) for object detection reaches competitive performance compared with Faster R-CNN via a transformer encoder-decoder architecture. However, with transformers trained from scratch, DETR needs large-scale training data and an extremely long training schedule, even on the COCO dataset. Inspired by the great success of pre-training transformers in natural language processing, we propose a novel pretext task named random query patch detection in Unsupervised Pre-training DETR (UP-DETR). Specifically, we randomly crop patches from the given image and then feed them as queries to the decoder. The model is pre-trained to detect these query patches from the input image. During the pre-training, we address two critical issues: multi-task learning and multi-query localization. (1) To trade off classification and localization preferences in the pretext task, we find that freezing the CNN backbone is the prerequisite for the success of pre-training transformers. (2) To perform multi-query localization, we develop UP-DETR with multi-query patch detection with attention masks. Besides, UP-DETR also provides a unified perspective for fine-tuning object detection and one-shot detection tasks. In our experiments, UP-DETR significantly boosts the performance of DETR with faster convergence and higher average precision on object detection, one-shot detection and panoptic segmentation. Code and pre-training models: https://github.com/dddzg/up-detr. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 207,091
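The random query patch detection pretext task described above can be sketched in a few lines: crop random patches, remember where they came from, and use the crop locations as localization targets for the decoder queries. The function below is a hypothetical illustration, not UP-DETR's actual API; patch size and query count are arbitrary:

```python
import numpy as np

def random_query_patches(image, num_queries, patch_size, rng=None):
    """Crop random square patches from an image to serve as decoder
    queries; their source boxes become the pre-training targets."""
    rng = rng or np.random.default_rng(0)
    h, w = image.shape[:2]
    patches, boxes = [], []
    for _ in range(num_queries):
        y = int(rng.integers(0, h - patch_size + 1))
        x = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[y:y + patch_size, x:x + patch_size])
        boxes.append((x, y, patch_size, patch_size))  # localization target
    return patches, boxes
```

During pre-training the model receives the patches as queries and is trained to predict the corresponding boxes in the input image.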
2208.05830 | Speech Enhancement and Dereverberation with Diffusion-based Generative Models | In this work, we build upon our previous publication and use diffusion-based generative models for speech enhancement. We present a detailed overview of the diffusion process that is based on a stochastic differential equation and delve into an extensive theoretical examination of its implications. As opposed to usual conditional generation tasks, we do not start the reverse process from pure Gaussian noise but from a mixture of noisy speech and Gaussian noise. This matches our forward process, which moves from clean speech to noisy speech by including a drift term. We show that this procedure enables using only 30 diffusion steps to generate high-quality clean speech estimates. By adapting the network architecture, we are able to significantly improve the speech enhancement performance, indicating that the network, rather than the formalism, was the main limitation of our original approach. In an extensive cross-dataset evaluation, we show that the improved method can compete with recent discriminative models and achieves better generalization when evaluating on a different corpus than used for training. We complement the results with an instrumental evaluation using real-world noisy recordings and a listening experiment, in which our proposed method is rated best. Examining different sampler configurations for solving the reverse process allows us to balance the performance and computational speed of the proposed method. Moreover, we show that the proposed method is also suitable for dereverberation and thus not limited to additive background noise removal. Code and audio examples are available online, see https://github.com/sp-uhh/sgmse | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 312,518
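The forward process described in the abstract above has a mean that drifts from clean speech toward the noisy mixture while the Gaussian perturbation grows with time. A minimal numerical sketch; the linear mean interpolation and exponential noise schedule below are illustrative simplifications, not the paper's exact SDE parameterization:

```python
import numpy as np

def mean_and_sigma(x_clean, y_noisy, t, sigma_min=0.05, sigma_max=0.5):
    """Mean and noise scale of the forward process at time t in [0, 1].

    The mean drifts from clean speech (t=0) toward the noisy mixture
    (t=1); the noise scale grows with t. Both schedules are illustrative
    stand-ins for the paper's parameterization.
    """
    mean = (1.0 - t) * x_clean + t * y_noisy   # drift toward noisy speech
    sigma = sigma_min * (sigma_max / sigma_min) ** t
    return mean, sigma

def forward_sample(x_clean, y_noisy, t, rng=None):
    """Draw one perturbed state x_t ~ N(mean(t), sigma(t)^2 I)."""
    rng = rng or np.random.default_rng(0)
    mean, sigma = mean_and_sigma(x_clean, y_noisy, t)
    return mean + sigma * rng.standard_normal(x_clean.shape)
```

The reverse process is then started from a state near t = 1, i.e., a mixture of noisy speech and Gaussian noise rather than pure noise.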
1810.09712 | Finding Appropriate Traffic Regulations via Graph Convolutional Networks | Appropriate traffic regulations, e.g. planned road closure, are important in congested events. Crowd simulators have been used to find appropriate regulations by simulating multiple scenarios with different regulations. However, this approach requires multiple simulation runs, which are time-consuming. In this paper, we propose a method to learn a function that outputs regulation effects given the current traffic situation as inputs. If the function is learned using the training data of many simulation runs in advance, we can obtain an appropriate regulation efficiently by bypassing simulations for the current situation. We use the graph convolutional networks for modeling the function, which enable us to find regulations even for unseen areas. With the proposed method, we construct a graph for each area, where a node represents a road, and an edge represents the road connection. By running crowd simulations with various regulations on various areas, we generate traffic situations and regulation effects. The graph convolutional networks are trained to output the regulation effects given the graph with the traffic situation information as inputs. With experiments using real-world road networks and a crowd simulator, we demonstrate that the proposed method can find a road to close that reduces the average time needed to reach the destination. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 111,113 |
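The backbone of the model described above is an ordinary graph-convolution layer over the road graph: each node (road) aggregates the traffic-situation features of its connected roads before a learned linear map. A minimal sketch of one layer with symmetric normalization and self-loops (an assumption; the paper may use a different propagation rule):

```python
import numpy as np

def gcn_layer(adj, features, weight):
    """One graph-convolution layer: aggregate neighbor features over the
    symmetrically normalized adjacency with self-loops, then apply a
    linear map and ReLU. Nodes are roads, edges are road connections,
    and `features` encodes the traffic situation per road."""
    a_hat = adj + np.eye(adj.shape[0])            # add self-loops
    deg = a_hat.sum(axis=1)
    d_inv_sqrt = np.diag(deg ** -0.5)
    a_norm = d_inv_sqrt @ a_hat @ d_inv_sqrt      # D^-1/2 (A+I) D^-1/2
    return np.maximum(a_norm @ features @ weight, 0.0)
```

Stacking such layers and ending with a readout head yields a function from (graph, traffic situation) to predicted regulation effects, trained on simulation runs.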
2110.01664 | Estimating Potential Outcome Distributions with Collaborating Causal Networks | Traditional causal inference approaches leverage observational study data to estimate the difference in observed and unobserved outcomes for a potential treatment, known as the Conditional Average Treatment Effect (CATE). However, CATE corresponds to the comparison on the first moment alone, and as such may be insufficient in reflecting the full picture of treatment effects. As an alternative, estimating the full potential outcome distributions could provide greater insights. However, existing methods for estimating treatment effect potential outcome distributions often impose restrictive or simplistic assumptions about these distributions. Here, we propose Collaborating Causal Networks (CCN), a novel methodology which goes beyond the estimation of CATE alone by learning the full potential outcome distributions. Estimation of outcome distributions via the CCN framework does not require restrictive assumptions of the underlying data generating process. Additionally, CCN facilitates estimation of the utility of each possible treatment and permits individual-specific variation through utility functions. CCN not only extends outcome estimation beyond traditional risk difference, but also enables a more comprehensive decision-making process through definition of flexible comparisons. Under assumptions commonly made in the causal literature, we show that CCN learns distributions that asymptotically capture the true potential outcome distributions. Furthermore, we propose an adjustment approach that is empirically effective in alleviating sample imbalance between treatment groups in observational data. Finally, we evaluate the performance of CCN in multiple synthetic and semi-synthetic experiments. We demonstrate that CCN learns improved distribution estimates compared to existing Bayesian and deep generative methods as well as improved decisions with respect to a variety of utility functions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 258,845
2109.01847 | Learning Object-Compositional Neural Radiance Field for Editable Scene Rendering | Implicit neural rendering techniques have shown promising results for novel view synthesis. However, existing methods usually encode the entire scene as a whole, which is generally not aware of the object identity and limits the ability to perform high-level editing tasks such as moving or adding furniture. In this paper, we present a novel neural scene rendering system, which learns an object-compositional neural radiance field and produces realistic rendering with editing capability for a clustered and real-world scene. Specifically, we design a novel two-pathway architecture, in which the scene branch encodes the scene geometry and appearance, and the object branch encodes each standalone object conditioned on learnable object activation codes. To survive the training in heavily cluttered scenes, we propose a scene-guided training strategy to solve the 3D space ambiguity in the occluded regions and learn sharp boundaries for each object. Extensive experiments demonstrate that our system not only achieves competitive performance for static scene novel-view synthesis, but also produces realistic rendering for object-level editing. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 253,555
2112.02862 | SelectAugment: Hierarchical Deterministic Sample Selection for Data Augmentation | Data augmentation (DA) has been widely investigated to facilitate model optimization in many tasks. However, in most cases, data augmentation is randomly performed for each training sample with a certain probability, which might incur content destruction and visual ambiguities. To eliminate this, in this paper, we propose an effective approach, dubbed SelectAugment, to select samples to be augmented in a deterministic and online manner based on the sample contents and the network training status. Specifically, in each batch, we first determine the augmentation ratio, and then decide whether to augment each training sample under this ratio. We model this process as a two-step Markov decision process and adopt Hierarchical Reinforcement Learning (HRL) to learn the augmentation policy. In this way, the negative effects of the randomness in selecting samples to augment can be effectively alleviated and the effectiveness of DA is improved. Extensive experiments demonstrate that our proposed SelectAugment can be adapted to numerous commonly used DA methods, e.g., Mixup, CutMix, AutoAugment, etc., and improve their performance on multiple benchmark datasets of image classification and fine-grained image recognition. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 269,999
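The two-step decision described above, fixing an augmentation ratio and then deciding per sample, is learned with hierarchical RL in the paper. As a deterministic stand-in, the sketch below replaces the learned per-sample policy with a simple loss-based heuristic; the selection rule is purely illustrative:

```python
import numpy as np

def select_for_augmentation(losses, ratio):
    """Deterministically pick which samples in a batch to augment.

    Given the batch-level augmentation ratio, this illustrative rule
    augments the samples the network currently fits best (lowest loss),
    where content destruction is least likely to hurt. SelectAugment
    instead learns both steps with hierarchical RL.
    """
    k = int(round(ratio * len(losses)))
    order = np.argsort(losses)               # ascending per-sample loss
    mask = np.zeros(len(losses), dtype=bool)
    mask[order[:k]] = True                   # True = augment this sample
    return mask
```

The returned mask then gates whichever base DA method (Mixup, CutMix, AutoAugment, ...) is applied to the batch.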