id (string) | title (string) | abstract (string) | cs.HC (bool) | cs.CE (bool) | cs.SD (bool) | cs.SI (bool) | cs.AI (bool) | cs.IR (bool) | cs.LG (bool) | cs.RO (bool) | cs.CL (bool) | cs.IT (bool) | cs.SY (bool) | cs.CV (bool) | cs.CR (bool) | cs.CY (bool) | cs.MA (bool) | cs.NE (bool) | cs.DB (bool) | Other (bool) | __index_level_0__ (int64) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1106.0224 | Reasoning about Minimal Belief and Negation as Failure | We investigate the problem of reasoning in the propositional fragment of MBNF, the logic of minimal belief and negation as failure introduced by Lifschitz, which can be considered as a unifying framework for several nonmonotonic formalisms, including default logic, autoepistemic logic, circumscription, epistemic queries, and logic programming. We characterize the complexity and provide algorithms for reasoning in propositional MBNF. In particular, we show that entailment in propositional MBNF lies at the third level of the polynomial hierarchy, hence it is harder than reasoning in all the above mentioned propositional formalisms for nonmonotonic reasoning. We also prove the exact correspondence between negation as failure in MBNF and negative introspection in Moore's autoepistemic logic. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 10,638 |
1604.07224 | The Manifold Particle Filter for State Estimation on High-dimensional Implicit Manifolds | We estimate the state of a noisy robot arm and underactuated hand using an Implicit Manifold Particle Filter (MPF) informed by touch sensors. As the robot touches the world, its state space collapses to a contact manifold that we represent implicitly using a signed distance field. This allows us to extend the MPF to higher (six or more) dimensional state spaces. Earlier work (which explicitly represents the contact manifold) only shows the MPF in two or three dimensions. Through a series of experiments, we show that the implicit MPF converges faster and is more accurate than a conventional particle filter during periods of persistent contact. We present three methods of sampling the implicit contact manifold, and compare them in experiments. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 55,063 |
2311.17320 | Revisiting Single Image Reflection Removal In the Wild | This research focuses on the issue of single-image reflection removal (SIRR) in real-world conditions, examining it from two angles: the collection pipeline of real reflection pairs and the perception of real reflection locations. We devise an advanced reflection collection pipeline that is highly adaptable to a wide range of real-world reflection scenarios and incurs reduced costs in collecting large-scale aligned reflection pairs. In the process, we develop a large-scale, high-quality reflection dataset named Reflection Removal in the Wild (RRW). RRW contains over 14,950 high-resolution real-world reflection pairs, a dataset forty-five times larger than its predecessors. Regarding perception of reflection locations, we identify that numerous virtual reflection objects visible in reflection images are not present in the corresponding ground-truth images. This observation, drawn from the aligned pairs, leads us to conceive the Maximum Reflection Filter (MaxRF). The MaxRF can accurately and explicitly characterize reflection locations from pairs of images. Building upon this, we design a reflection location-aware cascaded framework, specifically tailored for SIRR. Powered by these innovative techniques, our solution achieves performance superior to that of current leading methods across multiple real-world benchmarks. Codes and datasets will be publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 411,249 |
2206.04967 | Deep Learning-based Massive MIMO CSI Acquisition for 5G Evolution and 6G | Recently, inspired by successful applications in many fields, deep learning (DL) technologies for CSI acquisition have received considerable research interest from both academia and industry. Considering the practical feedback mechanism of 5th generation (5G) New Radio (NR) networks, we propose two implementation schemes for artificial intelligence for CSI (AI4CSI), the DL-based receiver and end-to-end design, respectively. The proposed AI4CSI schemes were evaluated in 5G NR networks in terms of spectrum efficiency (SE), feedback overhead, and computational complexity, and compared with legacy schemes. To demonstrate whether these schemes can be used in real-life scenarios, both model-based channel data and practically measured channels were used in our investigations. When DL-based CSI acquisition is applied to the receiver only, which has little air interface impact, it provides approximately 25\% SE gain at a moderate feedback overhead level. It is feasible to deploy it in current 5G networks during 5G evolutions. For the end-to-end DL-based CSI enhancements, the evaluations also demonstrated their additional performance gain on SE, which is 6% -- 26% compared with DL-based receivers and 33% -- 58% compared with legacy CSI schemes. Considering its large impact on air-interface design, it will be a candidate technology for 6th generation (6G) networks, in which an air interface designed by artificial intelligence can be used. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 301,841 |
2302.05803 | TPE-Net: Track Point Extraction and Association Network for Rail Path Proposal Generation | One essential feature of an autonomous train is minimizing collision risks with third-party objects. To estimate the risk, the control system must identify topological information of all the rail routes ahead on which the train can possibly move, especially within merging or diverging rails. This way, the train can figure out the status of potential obstacles with respect to its route and hence, make a timely decision. Numerous studies have successfully extracted all rail tracks as a whole within forward-looking images without considering element instances. Still, some image-based methods have employed hard-coded prior knowledge of railway geometry on 3D data to associate left-right rails and generate rail route instances. In contrast, we propose a rail path extraction pipeline in which left-right rail pixels of each rail route instance are extracted and associated through a fully convolutional encoder-decoder architecture called TPE-Net. Two different regression branches for TPE-Net are proposed to regress the locations of center points of each rail route, along with their corresponding left-right pixels. Extracted rail pixels are then spatially clustered to generate topological information of all the possible train routes (ego-paths), discarding non-ego-path ones. Experimental results on a challenging, publicly released benchmark show true-positive-pixel level average precision and recall of 0.9207 and 0.8721, respectively, at about 12 frames per second. Even though our evaluation results are not higher than the SOTA, the proposed regression pipeline performs remarkably in extracting the correspondences by looking once at the image. It generates strong rail route hypotheses without reliance on camera parameters, 3D data, and geometrical constraints. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 345,173 |
2103.12857 | Embracing the Disharmony in Medical Imaging: A Simple and Effective Framework for Domain Adaptation | Domain shift, the mismatch between training and testing data characteristics, causes significant degradation in the predictive performance in multi-source imaging scenarios. In medical imaging, the heterogeneity of population, scanners and acquisition protocols at different sites presents a significant domain shift challenge and has limited the widespread clinical adoption of machine learning models. Harmonization methods which aim to learn a representation of data invariant to these differences are the prevalent tools to address domain shift, but they typically result in degradation of predictive accuracy. This paper takes a different perspective of the problem: we embrace this disharmony in data and design a simple but effective framework for tackling domain shift. The key idea, based on our theoretical arguments, is to build a pretrained classifier on the source data and adapt this model to new data. The classifier can be fine-tuned for intra-site domain adaptation. We can also tackle situations where we do not have access to ground-truth labels on target data; we show how one can use auxiliary tasks for adaptation; these tasks employ covariates such as age, gender and race which are easy to obtain but nevertheless correlated to the main task. We demonstrate substantial improvements in both intra-site domain adaptation and inter-site domain generalization on large-scale real-world 3D brain MRI datasets for classifying Alzheimer's disease and schizophrenia. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 226,301 |
2403.06070 | Reframe Anything: LLM Agent for Open World Video Reframing | The proliferation of mobile devices and social media has revolutionized content dissemination, with short-form video becoming increasingly prevalent. This shift has introduced the challenge of video reframing to fit various screen aspect ratios, a process that highlights the most compelling parts of a video. Traditionally, video reframing is a manual, time-consuming task requiring professional expertise, which incurs high production costs. A potential solution is to adopt some machine learning models, such as video salient object detection, to automate the process. However, these methods often lack generalizability due to their reliance on specific training data. The advent of powerful large language models (LLMs) opens new avenues for AI capabilities. Building on this, we introduce Reframe Any Video Agent (RAVA), an LLM-based agent that leverages visual foundation models and human instructions to restructure visual content for video reframing. RAVA operates in three stages: perception, where it interprets user instructions and video content; planning, where it determines aspect ratios and reframing strategies; and execution, where it invokes the editing tools to produce the final video. Our experiments validate the effectiveness of RAVA in video salient object detection and real-world reframing tasks, demonstrating its potential as a tool for AI-powered video editing. | true | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 436,282 |
2301.01482 | Underwater Object Tracker: UOSTrack for Marine Organism Grasping of Underwater Vehicles | A visual single-object tracker is an indispensable component of underwater vehicles (UVs) in marine organism grasping tasks. Its accuracy and stability are imperative to guide the UVs to perform grasping behavior. Although single-object trackers show competitive performance in the challenge of underwater image degradation, there are still issues with sample imbalance and exclusion of similar objects that need to be addressed for application in marine organism grasping. This paper proposes Underwater OSTrack (UOSTrack), which consists of underwater image and open-air sequence hybrid training (UOHT), and motion-based post-processing (MBPP). The UOHT training paradigm is designed to train the sample-imbalanced underwater tracker so that the tracker is exposed to a great number of underwater domain training samples and learns the feature representations. The MBPP paradigm is proposed to exclude similar objects. It uses the estimation box predicted with a Kalman filter and the candidate boxes in the response map to relocate the lost tracked object in the candidate area. UOSTrack achieves an average performance improvement of 4.41% and a maximum improvement of 7.98% compared to state-of-the-art methods on various benchmarks. Field experiments have verified the accuracy and stability of our proposed UOSTrack for UVs in marine organism grasping tasks. More details can be found at https://github.com/LiYunfengLYF/UOSTrack. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 339,252 |
2310.19626 | Transformation vs Tradition: Artificial General Intelligence (AGI) for Arts and Humanities | Recent advances in artificial general intelligence (AGI), particularly large language models and creative image generation systems have demonstrated impressive capabilities on diverse tasks spanning the arts and humanities. However, the swift evolution of AGI has also raised critical questions about its responsible deployment in these culturally significant domains traditionally seen as profoundly human. This paper provides a comprehensive analysis of the applications and implications of AGI for text, graphics, audio, and video pertaining to arts and the humanities. We survey cutting-edge systems and their usage in areas ranging from poetry to history, marketing to film, and communication to classical art. We outline substantial concerns pertaining to factuality, toxicity, biases, and public safety in AGI systems, and propose mitigation strategies. The paper argues for multi-stakeholder collaboration to ensure AGI promotes creativity, knowledge, and cultural values without undermining truth or human dignity. Our timely contribution summarizes a rapidly developing field, highlighting promising directions while advocating for responsible progress centering on human flourishing. The analysis lays the groundwork for further research on aligning AGI's technological capacities with enduring social goods. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 404,060 |
1907.03960 | Learning from Thresholds: Fully Automated Classification of Tumor Infiltrating Lymphocytes for Multiple Cancer Types | Deep learning classifiers for characterization of whole slide tissue morphology require large volumes of annotated data to learn variations across different tissue and cancer types. As is well known, manual generation of digital pathology training data is time consuming and expensive. In this paper, we propose a semi-automated method for annotating a group of similar instances at once, instead of collecting only per-instance manual annotations. This allows for a much larger training set, that reflects visual variability across multiple cancer types and thus training of a single network which can be automatically applied to each cancer type without human adjustment. We apply our method to the important task of classifying Tumor Infiltrating Lymphocytes (TILs) in H&E images. Prior approaches were trained for individual cancer types, with smaller training sets and human-in-the-loop threshold adjustment. We utilize these thresholded results as large scale "semi-automatic" annotations. Combined with existing manual annotations, our trained deep networks are able to automatically produce better TIL prediction results in 12 cancer types, compared to the human-in-the-loop approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 137,980 |
2208.02246 | AdaCat: Adaptive Categorical Discretization for Autoregressive Models | Autoregressive generative models can estimate complex continuous data distributions, like trajectory rollouts in an RL environment, image intensities, and audio. Most state-of-the-art models discretize continuous data into several bins and use categorical distributions over the bins to approximate the continuous data distribution. The advantage is that the categorical distribution can easily express multiple modes and is straightforward to optimize. However, such an approximation cannot express sharp changes in density without using significantly more bins, making it parameter inefficient. We propose an efficient, expressive, multimodal parameterization called Adaptive Categorical Discretization (AdaCat). AdaCat discretizes each dimension of an autoregressive model adaptively, which allows the model to allocate density to fine intervals of interest, improving parameter efficiency. AdaCat generalizes both categoricals and quantile-based regression. AdaCat is a simple add-on to any discretization-based distribution estimator. In experiments, AdaCat improves density estimation for real-world tabular data, images, audio, and trajectories, and improves planning in model-based offline RL. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 311,416 |
2007.07085 | Semi-supervised Collaborative Filtering by Text-enhanced Domain Adaptation | Data sparsity is an inherent challenge in recommender systems, where most of the data is collected from users' implicit feedback. This causes two difficulties in designing effective algorithms: first, the majority of users have only a few interactions with the system, so there is not enough data for learning; second, there are no negative samples in the implicit feedback, and it is common practice to perform negative sampling to generate them. However, this means many potential positive samples are mislabeled as negative, and data sparsity exacerbates the mislabeling problem. To address these difficulties, we regard recommendation on sparse implicit feedback as a semi-supervised learning task and explore domain adaptation to solve it. We transfer the knowledge learned from dense data to sparse data, focusing on the most challenging case -- no user or item overlap. In this extreme case, aligning the embeddings of the two datasets directly is sub-optimal, since the two latent spaces encode very different information. We therefore adopt domain-invariant textual features as anchor points to align the latent spaces: we extract textual features for each user and item and feed them, together with the user and item embeddings, into a domain classifier. The embeddings are trained to confuse the classifier, while the textual features are kept fixed as anchor points. Through domain adaptation, the distribution pattern in the source domain is transferred to the target domain. As the target part can be supervised by domain adaptation, we abandon negative sampling on the target dataset to avoid label noise. We validate the effectiveness of our transfer strategy on three pairs of real-world datasets. Results show that our models significantly outperform existing ones. | false | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | 187,223 |
2408.15710 | Conan-embedding: General Text Embedding with More and Better Negative Samples | With the growing popularity of RAG, the capabilities of embedding models are gaining increasing attention. Embedding models are primarily trained through contrastive loss learning, with negative examples being a key component. Previous work has proposed various hard negative mining strategies, but these strategies are typically employed as preprocessing steps. In this paper, we propose the conan-embedding model, which maximizes the utilization of more and higher-quality negative examples. Specifically, since the model's ability to handle preprocessed negative examples evolves during training, we propose a dynamic hard negative mining method to expose the model to more challenging negative examples throughout the training process. Additionally, contrastive learning requires as many negative examples as possible but is limited by GPU memory constraints. Therefore, we use a Cross-GPU balancing Loss to provide more negative examples for embedding training and balance the batch size across multiple tasks. Moreover, we also discovered that the prompt-response pairs from LLMs can be used for embedding training. Our approach effectively enhances the capabilities of embedding models, currently ranking first on the Chinese leaderboard of the Massive Text Embedding Benchmark. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 484,046 |
2407.08641 | How more data can hurt: Instability and regularization in next-generation reservoir computing | It has been found recently that more data can, counter-intuitively, hurt the performance of deep neural networks. Here, we show that a more extreme version of the phenomenon occurs in data-driven models of dynamical systems. To elucidate the underlying mechanism, we focus on next-generation reservoir computing (NGRC) -- a popular framework for learning dynamics from data. We find that, despite learning a better representation of the flow map with more training data, NGRC can adopt an ill-conditioned ``integrator'' and lose stability. We link this data-induced instability to the auxiliary dimensions created by the delayed states in NGRC. Based on these findings, we propose simple strategies to mitigate the instability, either by increasing regularization strength in tandem with data size, or by carefully introducing noise during training. Our results highlight the importance of proper regularization in data-driven modeling of dynamical systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 472,243 |
2010.09256 | Diffusion in large networks | We investigate the phenomenon of diffusion in a countably infinite society of individuals interacting with their neighbors in a network. At a given time, each individual is either active or inactive. The diffusion is driven by two characteristics: the network structure and the diffusion mechanism represented by an aggregation function. We distinguish between two diffusion mechanisms (probabilistic, deterministic) and focus on two types of aggregation functions (strict, Boolean). Under strict aggregation functions, polarization of the society cannot happen, and its state evolves towards a mixture of infinitely many active and infinitely many inactive agents, or towards a homogeneous society. Under Boolean aggregation functions, the diffusion process becomes deterministic and the contagion model of Morris (2000) becomes a particular case of our framework. Polarization can then happen. Our dynamics also allows for cycles in both cases. The network structure is not relevant for these questions, but is important for establishing irreducibility, at the price of a richness assumption: the network should contain infinitely many complex stars and have enough space for storing local configurations. Our model can be given a game-theoretic interpretation via a local coordination game, where each player would apply a best-response strategy in a random neighborhood. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 201,466 |
2206.00850 | Dynamic MRI using Learned Transform-based Tensor Low-Rank Network (LT$^2$LR-Net) | While the low-rank matrix prior has been exploited in dynamic MR image reconstruction and has achieved satisfactory performance, tensor low-rank models have recently emerged as powerful alternative representations for three-dimensional dynamic MR datasets. In this paper, we introduce a novel deep unrolling network for dynamic MRI, namely the learned transform-based tensor low-rank network (LT$^2$LR-Net). First, we generalize the tensor singular value decomposition (t-SVD) into an arbitrary unitary transform-based version and subsequently propose the novel transformed tensor nuclear norm (TTNN). Then, we design a novel TTNN-based iterative optimization algorithm based on the alternating direction method of multipliers (ADMM) to exploit the tensor low-rank prior in the transformed domain. The corresponding iterative steps are unrolled into the proposed LT$^2$LR-Net, where the convolutional neural network (CNN) is incorporated to adaptively learn the transformation from the dynamic MR dataset for more robust and accurate tensor low-rank representations. Experimental results on the cardiac cine MR dataset demonstrate that the proposed framework can provide improved recovery results compared with the state-of-the-art methods. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 300,282 |
2406.16109 | X-ray2CTPA: Generating 3D CTPA scans from 2D X-ray conditioning | Chest X-rays or chest radiography (CXR), commonly used for medical diagnostics, typically enables limited imaging compared to computed tomography (CT) scans, which offer more detailed and accurate three-dimensional data, particularly contrast-enhanced scans like CT Pulmonary Angiography (CTPA). However, CT scans entail higher costs, greater radiation exposure, and are less accessible than CXRs. In this work we explore cross-modal translation from a 2D low contrast-resolution X-ray input to a 3D high contrast and spatial-resolution CTPA scan. Driven by recent advances in generative AI, we introduce a novel diffusion-based approach to this task. We evaluate the model's performance using both quantitative metrics and qualitative feedback from radiologists, ensuring diagnostic relevance of the generated images. Furthermore, we employ the synthesized 3D images in a classification framework and show improved AUC in a PE categorization task, using the initial CXR input. The proposed method is generalizable and capable of performing additional cross-modality translations in medical imaging. It may pave the way for more accessible and cost-effective advanced diagnostic tools. The code for this project is available at https://github.com/NoaCahan/X-ray2CTPA. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 467,002 |
2406.12478 | Accelerating Depthwise Separable Convolutions on Ultra-Low-Power Devices | Depthwise separable convolutions are a fundamental component in efficient Deep Neural Networks, as they reduce the number of parameters and operations compared to traditional convolutions while maintaining comparable accuracy. However, their low data reuse opportunities make deploying them notoriously difficult. In this work, we perform an extensive exploration of alternatives to fuse the depthwise and pointwise kernels that constitute the separable convolutional block. Our approach aims to minimize time-consuming memory transfers by combining different data layouts. When targeting a commercial ultra-low-power device with a three-level memory hierarchy, the GreenWaves GAP8 SoC, we reduce the latency of end-to-end network execution by up to 11.40%. Furthermore, our kernels reduce activation data movements between L2 and L1 memories by up to 52.97%. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 465,432 |
2011.04286 | Simultaneous Data Communication and Channel Estimation in Multi-User Full Duplex MIMO Systems | In this paper, we study Simultaneous Communication of Data and Control (SCDC) information signals in Full Duplex (FD) Multiple-Input Multiple-Output (MIMO) wireless systems. In particular, considering an FD MIMO base station serving multiple single-antenna FD users, a novel multi-user communication scheme for simultaneous DownLink (DL) beamformed data transmission and UpLink (UL) pilot-assisted channel estimation is presented. Capitalizing on a recent FD MIMO hardware architecture with reduced complexity self-interference analog cancellation, we jointly design the base station's transmit and receive beamforming matrices as well as the settings for the multiple analog taps and the digital SI canceller with the objective to maximize the DL sum rate. Our simulation results showcase that the proposed approach outperforms its conventional half duplex counterpart with 50% reduction in hardware complexity compared to the latest FD-based SCDC schemes. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 205,538 |
2011.00515 | On Signal-to-Noise Ratio Issues in Variational Inference for Deep Gaussian Processes | We show that the gradient estimates used in training Deep Gaussian Processes (DGPs) with importance-weighted variational inference are susceptible to signal-to-noise ratio (SNR) issues. Specifically, we show both theoretically and via an extensive empirical evaluation that the SNR of the gradient estimates for the latent variable's variational parameters decreases as the number of importance samples increases. As a result, these gradient estimates degrade to pure noise if the number of importance samples is too large. To address this pathology, we show how doubly reparameterized gradient estimators, originally proposed for training variational autoencoders, can be adapted to the DGP setting and that the resultant estimators completely remedy the SNR issue, thereby providing more reliable training. Finally, we demonstrate that our fix can lead to consistent improvements in the predictive performance of DGP models. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 204,249 |
2006.02876 | Enhanced back-translation for low resource neural machine translation using self-training | Improving neural machine translation (NMT) models using the back-translations of the monolingual target data (synthetic parallel data) is currently the state-of-the-art approach for training improved translation systems. The quality of the backward system - which is trained on the available parallel data and used for the back-translation - has been shown in many studies to affect the performance of the final NMT model. In low resource conditions, the available parallel data is usually not enough to train a backward model that can produce the qualitative synthetic data needed to train a standard translation model. This work proposes a self-training strategy where the output of the backward model is used to improve the model itself through the forward translation technique. The technique was shown to improve baseline low resource IWSLT'14 English-German and IWSLT'15 English-Vietnamese backward translation models by 11.06 and 1.5 BLEU points, respectively. The synthetic data generated by the improved English-German backward model was used to train a forward model which outperformed another forward model trained using standard back-translation by 2.7 BLEU. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 180,160 |
2205.03143 | How to Minimize the Weighted Sum AoI in Multi-Source Status Update Systems: OMA or NOMA? | In this paper, the minimization of the weighted sum average age of information (AoI) in a multi-source status update communication system is studied. Multiple independent sources send update packets to a common destination node in a time-slotted manner under the limit of maximum retransmission rounds. Different multiple access schemes, i.e., orthogonal multiple access (OMA) and non-orthogonal multiple access (NOMA) are exploited here over a block-fading multiple access channel (MAC). Constrained Markov decision process (CMDP) problems are formulated to describe the AoI minimization problems considering both transmission schemes. The Lagrangian method is utilized to convert the CMDP problems into unconstrained Markov decision process (MDP) problems, and corresponding algorithms to derive the power allocation policies are obtained. On the other hand, for the case of unknown environments, two online reinforcement learning approaches considering both multiple access schemes are proposed to achieve near-optimal age performance. Numerical simulations validate the improvement of the proposed policy in terms of weighted sum AoI compared to the fixed power transmission policy, and illustrate that NOMA is more favorable in case of larger packet size. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | 295,182 |
2309.01380 | Understanding Video Scenes through Text: Insights from Text-based Video
Question Answering | Researchers have extensively studied the field of vision and language, discovering that both visual and textual content is crucial for understanding scenes effectively. Particularly, comprehending text in videos holds great significance, requiring both scene text understanding and temporal reasoning. This paper focuses on exploring two recently introduced datasets, NewsVideoQA and M4-ViteVQA, which aim to address video question answering based on textual content. The NewsVideoQA dataset contains question-answer pairs related to the text in news videos, while M4-ViteVQA comprises question-answer pairs from diverse categories like vlogging, traveling, and shopping. We provide an analysis of the formulation of these datasets on various levels, exploring the degree of visual understanding and multi-frame comprehension required for answering the questions. Additionally, the study includes experimentation with BERT-QA, a text-only model, which demonstrates comparable performance to the original methods on both datasets, indicating the shortcomings in the formulation of these datasets. Furthermore, we also look into the domain adaptation aspect by examining the effectiveness of training on M4-ViteVQA and evaluating on NewsVideoQA and vice-versa, thereby shedding light on the challenges and potential benefits of out-of-domain training. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 389,666 |
1903.02120 | Decoders Matter for Semantic Segmentation: Data-Dependent Decoding
Enables Flexible Feature Aggregation | Recent semantic segmentation methods exploit encoder-decoder architectures to produce the desired pixel-wise segmentation prediction. The last layer of the decoders is typically a bilinear upsampling procedure to recover the final pixel-wise prediction. We empirically show that this overly simple and data-independent bilinear upsampling may lead to sub-optimal results. In this work, we propose a data-dependent upsampling (DUpsampling) to replace bilinear, which takes advantage of the redundancy in the label space of semantic segmentation and is able to recover the pixel-wise prediction from low-resolution outputs of CNNs. The main advantage of the new upsampling layer is that, with a relatively low-resolution feature map such as $\frac{1}{16}$ or $\frac{1}{32}$ of the input size, we can achieve even better segmentation accuracy, significantly reducing computation complexity. This is made possible by 1) the new upsampling layer's much improved reconstruction capability; and more importantly 2) the DUpsampling-based decoder's flexibility in leveraging almost arbitrary combinations of the CNN encoders' features. Experiments demonstrate that our proposed decoder outperforms the state-of-the-art decoder, with only $\sim$20\% of computation. Finally, without any post-processing, the framework equipped with our proposed decoder achieves new state-of-the-art performance on two datasets: 88.1\% mIOU on PASCAL VOC with 30\% computation of the previously best model; and 52.5\% mIOU on PASCAL Context. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 123,425
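The "data-independent" upsampling this abstract argues against reduces, in 1D, to fixed linear-interpolation weights that never depend on the signal; a sketch (DUpsampling would instead learn reconstruction weights from the label space, which is not shown here):

```python
def upsample_linear_1d(x, factor):
    """Data-independent linear upsampling (1D analogue of bilinear):
    every output sample is a fixed convex combination of its two
    neighbouring inputs -- the weights never depend on the data."""
    n_out = (len(x) - 1) * factor + 1
    out = []
    for i in range(n_out):
        pos = i / factor            # position in input coordinates
        lo = int(pos)
        hi = min(lo + 1, len(x) - 1)
        frac = pos - lo
        out.append((1 - frac) * x[lo] + frac * x[hi])
    return out
```

For example, `upsample_linear_1d([0.0, 1.0], 2)` fills in the midpoint with weight 0.5 each, regardless of what the values are.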
2405.05945 | Lumina-T2X: Transforming Text into Any Modality, Resolution, and
Duration via Flow-based Large Diffusion Transformers | Sora unveils the potential of scaling Diffusion Transformer for generating photorealistic images and videos at arbitrary resolutions, aspect ratios, and durations, yet it still lacks sufficient implementation details. In this technical report, we introduce the Lumina-T2X family - a series of Flow-based Large Diffusion Transformers (Flag-DiT) equipped with zero-initialized attention, as a unified framework designed to transform noise into images, videos, multi-view 3D objects, and audio clips conditioned on text instructions. By tokenizing the latent spatial-temporal space and incorporating learnable placeholders such as [nextline] and [nextframe] tokens, Lumina-T2X seamlessly unifies the representations of different modalities across various spatial-temporal resolutions. This unified approach enables training within a single framework for different modalities and allows for flexible generation of multimodal data at any resolution, aspect ratio, and length during inference. Advanced techniques like RoPE, RMSNorm, and flow matching enhance the stability, flexibility, and scalability of Flag-DiT, enabling models of Lumina-T2X to scale up to 7 billion parameters and extend the context window to 128K tokens. This is particularly beneficial for creating ultra-high-definition images with our Lumina-T2I model and long 720p videos with our Lumina-T2V model. Remarkably, Lumina-T2I, powered by a 5-billion-parameter Flag-DiT, requires only 35% of the training computational costs of a 600-million-parameter naive DiT. Our further comprehensive analysis underscores Lumina-T2X's preliminary capability in resolution extrapolation, high-resolution editing, generating consistent 3D views, and synthesizing videos with seamless transitions. We expect that the open-sourcing of Lumina-T2X will further foster creativity, transparency, and diversity in the generative AI community. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,116
1708.05582 | Agree to Disagree: Improving Disagreement Detection with Dual GRUs | This paper presents models for detecting agreement/disagreement in online discussions. In this work we show that by using a Siamese-inspired architecture to encode the discussions, we no longer need to rely on hand-crafted features to exploit the meta thread structure. We evaluate our model on existing online discussion corpora - ABCD, IAC and AWTP. Experimental results on the ABCD dataset show that by fusing lexical and word embedding features, our model achieves state-of-the-art performance with an average F1 score of 0.804. We also show that the model trained on the ABCD dataset performs competitively on relatively smaller annotated datasets (IAC and AWTP). | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 79,164
1202.6404 | Signal Shaping for BICM at Low SNR | The mutual information of bit-interleaved coded modulation (BICM) systems, sometimes called the BICM capacity, is investigated at low signal-to-noise ratio (SNR), i.e., in the wideband regime. A new linear transform that depends on bits' probabilities is introduced. This transform is used to prove the asymptotical equivalence between certain BICM systems with uniform and nonuniform input distributions. Using known results for BICM systems with a uniform input distribution, we completely characterize the combinations of input alphabet, input distribution, and binary labeling that achieve the Shannon limit -1.59 dB. The main conclusion is that a BICM system achieves the Shannon limit at low SNR if and only if it can be represented as a zero-mean linear projection of a hypercube, which is the same condition as for uniform input distributions. Hence, probabilistic shaping offers no extra degrees of freedom to optimize the low-SNR mutual information of BICM systems, in addition to what is provided by geometrical shaping. These analytical conclusions are confirmed by numerical results, which also show that for a fixed input alphabet, probabilistic shaping of BICM can improve the mutual information in the low and medium SNR range over any coded modulation system with a uniform input distribution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 14,633 |
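The "zero-mean linear projection of a hypercube" condition in this abstract can be checked concretely on the standard example of Gray-labeled QPSK (chosen here purely for illustration):

```python
import itertools
import math

# Gray-labeled QPSK written explicitly as a zero-mean linear projection
# of the 2-cube: s(b) = b'_0 * v0 + b'_1 * v1 with b'_k = 1 - 2*b_k
# mapping bits {0, 1} to {+1, -1}.
v0, v1 = 1 / math.sqrt(2), 1j / math.sqrt(2)

def qpsk(b0, b1):
    return (1 - 2 * b0) * v0 + (1 - 2 * b1) * v1

symbols = [qpsk(b0, b1) for b0, b1 in itertools.product((0, 1), repeat=2)]
mean = sum(symbols) / len(symbols)                         # zero mean
energy = sum(abs(s) ** 2 for s in symbols) / len(symbols)  # unit energy
```

Since each symbol is linear in the bipolar bits and the constellation averages to zero, this constellation satisfies the paper's condition for achieving the -1.59 dB limit.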
2006.02689 | Solving Hard AI Planning Instances Using Curriculum-Driven Deep
Reinforcement Learning | Despite significant progress in general AI planning, certain domains remain out of reach of current AI planning systems. Sokoban is a PSPACE-complete planning task and represents one of the hardest domains for current AI planners. Even domain-specific specialized search methods fail quickly due to the exponential search complexity on hard instances. Our approach based on deep reinforcement learning augmented with a curriculum-driven method is the first one to solve hard instances within one day of training while other modern solvers cannot solve these instances within any reasonable time limit. In contrast to prior efforts, which use carefully handcrafted pruning techniques, our approach automatically uncovers domain structure. Our results reveal that deep RL provides a promising framework for solving previously unsolved AI planning problems, provided a proper training curriculum can be devised. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 180,116 |
2305.20018 | Scalable Learning of Latent Language Structure With Logical Offline
Cycle Consistency | We introduce Logical Offline Cycle Consistency Optimization (LOCCO), a scalable, semi-supervised method for training a neural semantic parser. Conceptually, LOCCO can be viewed as a form of self-learning where the semantic parser being trained is used to generate annotations for unlabeled text that are then used as new supervision. To increase the quality of annotations, our method utilizes a count-based prior over valid formal meaning representations and a cycle-consistency score produced by a neural text generation model as additional signals. Both the prior and semantic parser are updated in an alternating fashion from full passes over the training data, which can be seen as approximating the marginalization of latent structures through stochastic variational inference. The use of a count-based prior, frozen text generation model, and offline annotation process yields an approach with negligible complexity and latency increases as compared to conventional self-learning. As an added bonus, the annotations produced by LOCCO can be trivially repurposed to train a neural text generation model. We demonstrate the utility of LOCCO on the well-known WebNLG benchmark where we obtain an improvement of 2 points against a self-learning parser under equivalent conditions, an improvement of 1.3 points against the previous state-of-the-art parser, and competitive text generation performance in terms of BLEU score. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 369,775
2105.07086 | Divergence Estimation in Message Passing algorithms | Many modern imaging applications can be modeled as compressed sensing linear inverse problems. When the measurement operator involved in the inverse problem is sufficiently random, denoising Scalable Message Passing (SMP) algorithms have the potential to demonstrate high efficiency in recovering compressed data. One of the key components enabling SMP to achieve fast convergence, stability and predictable dynamics is the Onsager correction that must be updated at each iteration of the algorithm. This correction involves the denoiser's divergence, which is traditionally estimated via the Black-Box Monte Carlo (BB-MC) method \cite{MC-divergence}. While the BB-MC method demonstrates satisfactory accuracy of estimation, it requires executing the denoiser additional times at each iteration and might lead to a substantial increase in the computational cost of the SMP algorithms. In this work we develop two Large System Limit models of the Onsager correction for denoisers operating within SMP algorithms and use these models to propose two practical classes of divergence estimators that require no additional executions of the denoiser and demonstrate similar or superior correction compared to the BB-MC method. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 235,311
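The Black-Box Monte Carlo divergence estimator this abstract refers to is a short recipe: probe the denoiser with random perturbations and average a finite-difference directional derivative. A sketch on a linear toy "denoiser" (the function and dimensions are illustrative):

```python
import random

def bbmc_divergence(denoiser, x, eps=1e-3, probes=10, seed=0):
    """Black-Box Monte Carlo divergence estimate:
    div f(x) ~ E[ eta^T (f(x + eps*eta) - f(x)) / eps ]
    with Rademacher (+/-1) probe vectors eta, requiring extra
    evaluations of the denoiser -- the cost the paper avoids."""
    rng = random.Random(seed)
    fx = denoiser(x)
    total = 0.0
    for _ in range(probes):
        eta = [rng.choice((-1.0, 1.0)) for _ in x]
        fp = denoiser([xi + eps * ei for xi, ei in zip(x, eta)])
        total += sum(e * (a - b) for e, a, b in zip(eta, fp, fx)) / eps
    return total / probes

# For the linear "denoiser" f(x) = 0.5 * x in R^4 the exact divergence
# is 0.5 * 4 = 2, and every Rademacher probe recovers it exactly.
est = bbmc_divergence(lambda v: [0.5 * t for t in v], [1.0, -2.0, 0.3, 4.0])
```

For nonlinear denoisers the estimate is only unbiased up to the finite-difference step, which is why `eps` is kept small.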
2203.12057 | Cat-inspired Gaits for A Tilt-rotor -- from Symmetrical to Asymmetrical | Among the tilt-rotors (quadrotors) developed in the last decades, Ryll's model with eight inputs (four magnitudes of the thrusts and four tilting angles) attracted great attention. Typical feedback linearization maneuvers all the eight inputs with a united control rule to stabilize this tilt-rotor. Instead of assigning the tilting angles by the control rule, recent research predetermined the tilting angles and left the magnitudes of the thrusts as the only control signals. These tilting angles are designed to mimic the cat-trot gait, avoiding the singular decoupling matrix in feedback linearization. To complete the discussion of the cat-gait-inspired tilt-rotor gaits, this research addresses the analysis of the rest of the common cat gaits: walk, run, transverse gallop, and rotary gallop. It is found that the singular decoupling matrix exists in the walk gait and the rotary gallop. Further modifications are made to these two gaits to accommodate the application of feedback linearization. The modified gaits with different periods are then applied to the tilt-rotor in tracking experiments, in which the references are uniform rectilinear motion and uniform circular motion. All the experiments are simulated in Simulink, MATLAB. The result shows that. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 287,120
1909.10307 | Human Synthesis and Scene Compositing | Generating good quality and geometrically plausible synthetic images of humans with the ability to control appearance, pose and shape parameters has become increasingly important for a variety of tasks ranging from photo editing, fashion virtual try-on, to special effects and image compression. In this paper, we propose HUSC, a HUman Synthesis and Scene Compositing framework for the realistic synthesis of humans with different appearance, in novel poses and scenes. Central to our formulation is 3d reasoning for both people and scenes, in order to produce realistic collages, by correctly modeling perspective effects and occlusion, by taking into account scene semantics and by adequately handling relative scales. Conceptually our framework consists of three components: (1) a human image synthesis model with controllable pose and appearance, based on a parametric representation, (2) a person insertion procedure that leverages the geometry and semantics of the 3d scene, and (3) an appearance compositing process to create a seamless blending between the colors of the scene and the generated human image, and avoid visual artifacts. The performance of our framework is supported by both qualitative and quantitative results, in particular state-of-the-art synthesis scores for the DeepFashion dataset. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 146,511
1705.05637 | Text-based Adventures of the Golovin AI Agent | The domain of text-based adventure games has recently been established as a new challenge: creating an agent that can both understand natural language and act intelligently in text-described environments. In this paper, we present our approach to tackle the problem. Our agent, named Golovin, takes advantage of the limited game domain. We use genre-related corpora (including fantasy books and decompiled games) to create language models suitable to this domain. Moreover, we embed mechanisms that allow us to specify, and separately handle, important tasks such as fighting opponents, managing inventory, and navigating the game map. We validated the usefulness of these mechanisms by measuring the agent's performance on a set of 50 interactive fiction games. Finally, we show that our agent plays on a level comparable to the winner of last year's Text-Based Adventure AI Competition. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 73,527
1903.12344 | Learning Good Representation via Continuous Attention | In this paper we present our finding that good representations can be learned via continuous attention during the interaction between Unsupervised Learning (UL) and Reinforcement Learning (RL) modules driven by intrinsic motivation. Specifically, we designed intrinsic rewards generated from UL modules that drive the RL agent to focus on objects for a period of time and to learn good representations of objects for a later object recognition task. We evaluate our proposed algorithm in settings both with and without extrinsic rewards. Experiments with end-to-end training in simulated environments with applications to few-shot object recognition demonstrate the effectiveness of the proposed algorithm. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 125,702
2001.00942 | Simple explanation of Landauer's bound and its ineffectiveness for
multivalued logic | We discuss, using recent results on the Landauer bound in multivalued logic, the difficulties and pitfalls of applying this principle. The presentation is based on Szilard's version of Maxwell's demon experiment and the use of equilibrium thermodynamics. Different versions of thermodynamical/mechanical memory are presented - a one-hot encoding version and an implementation based on a reversed Szilard experiment. The relation of Landauer's principle to the Galois connection is explained in detail. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 159,359
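The bound under discussion is easy to evaluate numerically; a sketch for binary and ternary (multivalued) erasure at room temperature:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K (exact in the 2019 SI)

def landauer_bound(temperature_k, base=2):
    """Minimum heat dissipated when erasing one base-`base` digit:
    E >= k_B * T * ln(base). base=2 gives the classic per-bit bound;
    larger bases correspond to multivalued logic."""
    return K_B * temperature_k * math.log(base)

bit_bound = landauer_bound(300.0)       # ~2.87e-21 J per bit at 300 K
trit_bound = landauer_bound(300.0, 3)   # larger, since ln 3 > ln 2
```

Per digit the ternary bound is higher, but per stored symbol a trit carries log2(3) bits, which is the kind of bookkeeping the multivalued-logic discussion turns on.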
2109.09267 | Intelligent Reflecting Surfaces and Classical Relays: Coexistence and
Co-Design | This paper investigates a multiuser downlink communication system with coexisting intelligent reflecting surface (IRS) and classical half-duplex decode-and-forward (DF) relay. In this system, the IRS and the DF relay interact with each other and assist transmission simultaneously. In particular, active beamforming at the base station (BS) and at the DF relay, and passive beamforming at the IRS, are jointly designed to maximize the sum-rate of all users. The sum-rate maximization problem is nonconvex due to the coupled beamforming vectors. We propose an alternating optimization (AO) based algorithm to tackle this complex co-design problem. Numerical validation and discussion on the superiority of the coexistence system and the tradeoffs therein are presented. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 256,216 |
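The alternating-optimization (AO) template this abstract applies to the coupled BS/relay/IRS beamformers can be illustrated on a toy two-block problem with closed-form coordinate updates (the objective is an invented quadratic, not the paper's sum-rate):

```python
def alternating_minimize(steps=50):
    """Coordinate-wise (alternating) optimization of
    f(x, y) = (x-1)^2 + (y-2)^2 + (x-y)^2: fix one block, solve the
    other exactly, and repeat until the coupled variables converge."""
    x = y = 0.0
    for _ in range(steps):
        x = (1 + y) / 2      # argmin over x with y fixed
        y = (2 + x) / 2      # argmin over y with x fixed
    return x, y

x, y = alternating_minimize()  # converges to the joint optimum (4/3, 5/3)
```

Each block update is easy even though the joint problem couples the variables, which is exactly why AO is the workhorse for nonconvex beamforming co-design.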
2202.06022 | Fun Selfie Filters in Face Recognition: Impact Assessment and Removal | This work investigates the impact of fun selfie filters, which are frequently used to modify selfies, on face recognition systems. Based on a qualitative assessment and classification of freely available mobile applications, ten relevant fun selfie filters are selected to create a database. To this end, the selected filters are automatically applied to face images of public face image databases. Different state-of-the-art methods are used to evaluate the influence of fun selfie filters on the performance of face detection using dlib, RetinaFace, and a COTS method, sample quality estimated by FaceQNet and MagFace, and recognition accuracy employing ArcFace and a COTS algorithm. The obtained results indicate that selfie filters negatively affect face recognition modules, especially if the fun selfie filters cover a large region of the face, occluding the mouth, nose, and eyes. To mitigate such unwanted effects, a GAN-based selfie filter removal algorithm is proposed which consists of a segmentation module, a perceptual network, and a generation module. In a cross-database experiment, the application of the presented selfie filter removal technique has been shown to significantly improve the biometric performance of the underlying face recognition systems. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 280,064
2302.06810 | Learning from Noisy Labels with Decoupled Meta Label Purifier | Training deep neural networks (DNNs) with noisy labels is challenging since a DNN can easily memorize inaccurate labels, leading to poor generalization ability. Recently, the meta-learning based label correction strategy has been widely adopted to tackle this problem via identifying and correcting potential noisy labels with the help of a small set of clean validation data. Although training with purified labels can effectively improve performance, solving the meta-learning problem inevitably involves a nested loop of bi-level optimization between model weights and hyper-parameters (i.e., the label distribution). As a compromise, previous methods resort to a coupled learning process with alternating updates. In this paper, we empirically find that such simultaneous optimization over both model weights and label distribution cannot achieve an optimal routine, consequently limiting the representation ability of the backbone and the accuracy of the corrected labels. From this observation, a novel multi-stage label purifier named DMLP is proposed. DMLP decouples the label correction process into label-free representation learning and a simple meta label purifier. In this way, DMLP can focus on extracting discriminative features and label correction in two distinctive stages. DMLP is a plug-and-play label purifier; the purified labels can be directly reused in naive end-to-end network retraining or other robust learning methods, where state-of-the-art results are obtained on several synthetic and real-world noisy datasets, especially under high noise levels. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 345,535
2006.06885 | Uncovering the Folding Landscape of RNA Secondary Structure with Deep
Graph Embeddings | Biomolecular graph analysis has recently gained much attention in the emerging field of geometric deep learning. Here we focus on organizing biomolecular graphs in ways that expose meaningful relations and variations between them. We propose a geometric scattering autoencoder (GSAE) network for learning such graph embeddings. Our embedding network first extracts rich graph features using the recently proposed geometric scattering transform. Then, it leverages a semi-supervised variational autoencoder to extract a low-dimensional embedding that retains the information in these features that enable prediction of molecular properties as well as characterize graphs. We show that GSAE organizes RNA graphs both by structure and energy, accurately reflecting bistable RNA structures. Also, the model is generative and can sample new folding trajectories. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 181,593 |
1512.02005 | Three Tier Network Architecture to mitigate DDoS Attacks on Hybrid Cloud
Environments | Connecting wired and wireless networks, particularly the Mobile Ad hoc Network (MANET), is interesting in real-world situations due to its usefulness and practicality. Different mechanisms have been proposed to integrate MANETs and the Internet. These strategies differ in the gateway discovery mechanism, cell switching criteria, and ad hoc routing protocol. In this paper, Mobile-IP is integrated with the Hierarchical Cluster-Head Gateway Switch Routing (CGSR) protocol to provide Internet access to the mobile nodes of the ad hoc network. This paper discusses a mechanism for selecting an alternate route in case the Cluster Head is unable to forward the packets to the destination. The proposed framework provides bi-directional connectivity between the MANET and the Internet nodes. A detailed performance comparison is made between the proposed approach and the other three-tier strategies based on the mobility of cluster heads and cluster gateways and other network parameters. The experimental results indicate that the proposed architecture has better packet delivery ratio, end-to-end delay and mobile node-gateway connectivity ratio for providing full bi-directional connectivity. | false | false | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | true | 49,892
2105.06092 | Voltage Regulation Support Along a Distribution Line by a Virtual Power
Plant Based on a Center of Mass Load Modeling | A voltage regulation method for slow voltage variations at distribution level is proposed, based on a view of the loads, generators and storage along a distribution line as point weights. The "centers of mass" of the absorbed and injected currents (loads & generation, respectively) are compensated by minimizing the distance between them, through proper re-dispatching of the power of the available units and interruptible loads. The technique is recursively applied to lesser parts of the distribution line to address local phenomena and is assumed to be offered as ancillary service to the system operator by a Virtual Power Plant. The favorable results of the methodology are assessed on a distribution line of the island of Rhodes (Greece) under critical loading for numerous scenarios. Unlike previous approaches, the technique focuses specifically in the restoration of bus voltages within standard limits and may reduce the activation of on-line tap changing transformers control. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 235,020 |
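The "center of mass" view of the feeder described in this abstract amounts to a current-weighted centroid of bus positions for loads and for generation; a toy sketch (the positions and current magnitudes are invented values):

```python
def current_centroid(positions, currents):
    """'Center of mass' of currents along a feeder: positions are
    distances from the substation, currents the per-bus magnitudes."""
    total = sum(currents)
    return sum(p * c for p, c in zip(positions, currents)) / total

# Loads (absorbed) vs. generation (injected) along a 10 km line.
load_center = current_centroid([2.0, 5.0, 9.0], [30.0, 50.0, 20.0])
gen_center = current_centroid([1.0, 8.0], [40.0, 10.0])
gap = abs(load_center - gen_center)  # the distance re-dispatch shrinks
```

Re-dispatching unit powers changes the `currents` vectors, moving the two centroids toward each other; the paper's method minimizes exactly this kind of gap, recursively on sub-sections of the line.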
2302.06006 | Slepian Scale-Discretised Wavelets on Manifolds | Inspired by recent interest in geometric deep learning, this work generalises the recently developed Slepian scale-discretised wavelets on the sphere to Riemannian manifolds. Through the sifting convolution, one may define translations and, thus, convolutions on manifolds - which are otherwise not well-defined in general. Slepian wavelets are constructed on a region of a manifold and are therefore suited to problems where data only exists in a particular region. The Slepian functions, on which Slepian wavelets are built, are the basis functions of the Slepian spatial-spectral concentration problem on the manifold. A tiling of the Slepian harmonic line with smoothly decreasing generating functions defines the scale-discretised wavelets; allowing one to probe spatially localised, scale-dependent features of a signal. By discretising manifolds as graphs, the Slepian functions and wavelets of a triangular mesh are presented. Through a wavelet transform, the wavelet coefficients of a field defined on the mesh are found and used in a straightforward thresholding denoising scheme. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 345,252 |
2411.03535 | The Differentiable Feasibility Pump | Although nearly 20 years have passed since its conception, the feasibility pump algorithm remains a widely used heuristic to find feasible primal solutions to mixed-integer linear problems. Many extensions of the initial algorithm have been proposed. Yet, its core algorithm remains centered around two key steps: solving the linear relaxation of the original problem to obtain a solution that respects the constraints, and rounding it to obtain an integer solution. This paper shows that the traditional feasibility pump and many of its follow-ups can be seen as gradient-descent algorithms with specific parameters. A central aspect of this reinterpretation is observing that the traditional algorithm differentiates the solution of the linear relaxation with respect to its cost. This reinterpretation opens many opportunities for improving the performance of the original algorithm. We study how to modify the gradient-update step as well as extending its loss function. We perform extensive experiments on MIPLIB instances and show that these modifications can substantially reduce the number of iterations needed to find a solution. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 505,935 |
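The two-step loop at the heart of this abstract (solve the relaxation, then round) can be sketched on a toy feasibility problem; here a closed-form Euclidean projection onto a single halfspace stands in for the LP-relaxation solve, an illustrative simplification of the real algorithm:

```python
def feasibility_pump(x, a, b, max_iter=50):
    """Minimal feasibility-pump loop for the toy problem
    {x in [0,1]^n integer : a.x >= b}: alternate rounding with a
    projection back onto the relaxation's violated halfspace."""
    for _ in range(max_iter):
        x_int = [round(v) for v in x]
        if sum(ai * vi for ai, vi in zip(a, x_int)) >= b:
            return x_int  # integer and feasible: done
        # Euclidean projection of the rounding onto {a.x >= b}
        # (plays the role of re-solving the LP relaxation).
        viol = b - sum(ai * vi for ai, vi in zip(a, x_int))
        norm2 = sum(ai * ai for ai in a)
        x = [vi + viol * ai / norm2 for vi, ai in zip(x_int, a)]
    return None

sol = feasibility_pump([0.2, 0.4], a=[1.0, 1.0], b=1.5)  # finds [1, 1]
```

The paper's reinterpretation observes that exactly this alternation can be written as gradient descent on a distance-to-feasibility loss, which is what opens the door to modified update steps.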
2102.08201 | Improper Reinforcement Learning with Gradient-based Policy Optimization | We consider an improper reinforcement learning setting where a learner is given $M$ base controllers for an unknown Markov decision process, and wishes to combine them optimally to produce a potentially new controller that can outperform each of the base ones. This can be useful in tuning across controllers, learnt possibly in mismatched or simulated environments, to obtain a good controller for a given target environment with relatively few trials. We propose a gradient-based approach that operates over a class of improper mixtures of the controllers. We derive convergence rate guarantees for the approach assuming access to a gradient oracle. The value function of the mixture and its gradient may not be available in closed-form; however, we show that we can employ rollouts and simultaneous perturbation stochastic approximation (SPSA) for explicit gradient descent optimization. Numerical results on (i) the standard control theoretic benchmark of stabilizing an inverted pendulum and (ii) a constrained queueing task show that our improper policy optimization algorithm can stabilize the system even when the base policies at its disposal are unstable. | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | 220,375
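The SPSA estimator mentioned in this abstract needs only two function evaluations per step, whatever the dimension; a sketch minimizing a toy quadratic (the step sizes and objective are illustrative, and real SPSA uses decaying gain sequences):

```python
import random

def spsa_minimize(f, x, iters=200, a=0.1, c=0.1, seed=0):
    """Simultaneous perturbation stochastic approximation: estimate the
    whole gradient from two evaluations of f along one random
    Rademacher perturbation, then take a gradient step."""
    rng = random.Random(seed)
    for _ in range(iters):
        delta = [rng.choice((-1.0, 1.0)) for _ in x]
        fp = f([xi + c * di for xi, di in zip(x, delta)])
        fm = f([xi - c * di for xi, di in zip(x, delta)])
        g = [(fp - fm) / (2 * c * di) for di in delta]  # gradient estimate
        x = [xi - a * gi for xi, gi in zip(x, g)]
    return x

quad = lambda v: sum(t * t for t in v)
x_final = spsa_minimize(quad, [3.0, -2.0])
```

The per-step estimate is noisy but unbiased for smooth objectives, which is what makes SPSA usable when the policy-mixture gradient is only reachable through rollouts.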
1910.12027 | Consistency Regularization for Generative Adversarial Networks | Generative Adversarial Networks (GANs) are known to be difficult to train, despite considerable research effort. Several regularization techniques for stabilizing training have been proposed, but they introduce non-trivial computational overheads and interact poorly with existing techniques like spectral normalization. In this work, we propose a simple, effective training stabilizer based on the notion of consistency regularization---a popular technique in the semi-supervised learning literature. In particular, we augment data passing into the GAN discriminator and penalize the sensitivity of the discriminator to these augmentations. We conduct a series of experiments to demonstrate that consistency regularization works effectively with spectral normalization and various GAN architectures, loss functions and optimizer settings. Our method achieves the best FID scores for unconditional image generation compared to other regularization methods on CIFAR-10 and CelebA. Moreover, our consistency regularized GAN (CR-GAN) improves state-of-the-art FID scores for conditional generation from 14.73 to 11.48 on CIFAR-10 and from 8.73 to 6.66 on ImageNet-2012. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 150,943
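The penalty this abstract describes reduces to an augmentation-consistency term on the discriminator's outputs; a toy sketch with a flip augmentation (the tiny one-number "discriminators" are illustrative stand-ins for real networks):

```python
def cr_penalty(discriminator, images, augment):
    """Consistency regularization for a GAN discriminator: penalize the
    squared difference between D's output on an image and on its
    augmented copy; this term is added to the usual D loss."""
    total = 0.0
    for x in images:
        total += (discriminator(x) - discriminator(augment(x))) ** 2
    return total / len(images)

# Toy 1D "discriminators" and a horizontal-flip "augmentation".
disc_invariant = lambda x: sum(x) / len(x)  # flip-invariant: penalty ~0
disc_sensitive = lambda x: x[0]             # flip-sensitive: penalty > 0
flip = lambda x: list(reversed(x))

batch = [[0.0, 1.0, 0.5], [0.2, 0.9, 0.1]]
p_invariant = cr_penalty(disc_invariant, batch, flip)
p_sensitive = cr_penalty(disc_sensitive, batch, flip)
```

Minimizing the penalty pushes the discriminator toward invariance under the chosen semantics-preserving augmentations, which is the stabilizing effect the paper measures.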
1907.00686 | Sparse regular variation | Regular variation provides a convenient theoretical framework to study large events. In the multivariate setting, the dependence structure of the positive extremes is characterized by a measure - the spectral measure - defined on the positive orthant of the unit sphere. This measure gathers information on the localization of extreme events and often has a sparse support, since severe events do not simultaneously occur in all directions. However, it is defined through weak convergence, which does not provide a natural way to capture this sparsity structure. In this paper, we introduce the notion of sparse regular variation, which allows one to better learn the dependence structure of extreme events. This concept is based on the Euclidean projection onto the simplex, for which efficient algorithms are known. We prove that under mild assumptions sparse regular variation and regular variation are two equivalent notions, and we establish several results for sparsely regularly varying random vectors. Finally, we illustrate on numerical examples how this new concept allows one to detect extremal directions. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 137,116
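The Euclidean projection onto the simplex that this abstract builds on has a well-known O(n log n) sort-based algorithm; a sketch:

```python
def project_simplex(v):
    """Euclidean projection of v onto the probability simplex
    {x : x_i >= 0, sum_i x_i = 1} via the classic sort-based rule:
    find the threshold theta and clip each shifted coordinate at 0."""
    u = sorted(v, reverse=True)
    css, theta = 0.0, 0.0
    for j, uj in enumerate(u, start=1):
        css += uj
        t = (css - 1.0) / j
        if uj > t:          # coordinate j still active at this threshold
            theta = t
    return [max(vi - theta, 0.0) for vi in v]

p = project_simplex([0.5, 0.2, -0.1])  # sums to 1, no negative entries
```

Because the projection sets small coordinates exactly to zero, it exposes the sparse support of the angular component, which is what makes it a natural tool for sparse regular variation.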
2107.14658 | Task 1A DCASE 2021: Acoustic Scene Classification with mismatch-devices
using squeeze-excitation technique and low-complexity constraint | Acoustic scene classification (ASC) is one of the most popular problems in the field of machine listening. The objective of this problem is to classify an audio clip into one of the predefined scenes using only the audio data. This problem has considerably progressed over the years in the different editions of DCASE. It usually has several subtasks that allow tackling this problem with different approaches. The subtask presented in this report corresponds to an ASC problem that is constrained by the complexity of the model as well as having audio recorded from different devices, known as mismatch devices (real and simulated). The work presented in this report follows the research line carried out by the team in previous years. Specifically, a system based on two steps is proposed: a two-dimensional representation of the audio using the Gammatone filter bank and a convolutional neural network using squeeze-excitation techniques. The presented system outperforms the baseline by about 17 percentage points. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 248,535 |
1909.08112 | Spherical View Synthesis for Self-Supervised 360 Depth Estimation | Learning based approaches for depth perception are limited by the availability of clean training data. This has led to the utilization of view synthesis as an indirect objective for learning depth estimation using efficient data acquisition procedures. Nonetheless, most research focuses on pinhole based monocular vision, with scarce works presenting results for omnidirectional input. In this work, we explore spherical view synthesis for learning monocular 360 depth in a self-supervised manner and demonstrate its feasibility. Under a purely geometrically derived formulation we present results for horizontal and vertical baselines, as well as for the trinocular case. Further, we show how to better exploit the expressiveness of traditional CNNs when applied to the equirectangular domain in an efficient manner. Finally, given the availability of ground truth depth data, our work is uniquely positioned to compare view synthesis against direct supervision in a consistent and fair manner. The results indicate that alternative research directions might be better suited to enable higher quality depth perception. Our data, models and code are publicly available at https://vcl3d.github.io/SphericalViewSynthesis/. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 145,871 |
2408.00695 | Accelerating Full Waveform Inversion By Transfer Learning | Full waveform inversion (FWI) is a powerful tool for reconstructing material fields based on sparsely measured data obtained by wave propagation. For specific problems, discretizing the material field with a neural network (NN) improves the robustness and reconstruction quality of the corresponding optimization problem. We call this method NN-based FWI. Starting from an initial guess, the weights of the NN are iteratively updated to fit the simulated wave signals to the sparsely measured data set. For gradient-based optimization, a suitable choice of the initial guess, i.e., a suitable NN weight initialization, is crucial for fast and robust convergence. In this paper, we introduce a novel transfer learning approach to further improve NN-based FWI. This approach leverages supervised pretraining to provide a better NN weight initialization, leading to faster convergence of the subsequent optimization problem. Moreover, the inversions yield physically more meaningful local minima. The network is pretrained to predict the unknown material field using the gradient information from the first iteration of conventional FWI. In our computational experiments on two-dimensional domains, the training data set consists of reference simulations with arbitrarily positioned elliptical voids of different shapes and orientations. We compare the performance of the proposed transfer learning NN-based FWI with three other methods: conventional FWI, NN-based FWI without pretraining and conventional FWI with an initial guess predicted from the pretrained NN. Our results show that transfer learning NN-based FWI outperforms the other methods in terms of convergence speed and reconstruction quality. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 477,936 |
cs/0703134 | Automatic Generation of Benchmarks for Plagiarism Detection Tools using
Grammatical Evolution | This paper has been withdrawn by the authors due to a major rewriting. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | 540,268 |
1402.1270 | Towards an Interface for Arabic Query Enrichment in an Information
Retrieval System | This presentation focuses on the automatic expansion of Arabic queries using a morphological analyzer and Arabic WordNet. The expanded query is sent to Google. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 30,653 |
2502.10725 | PropNet: a White-Box and Human-Like Network for Sentence Representation | Transformer-based embedding methods have dominated the field of sentence representation in recent years. Although they have achieved remarkable performance on NLP tasks such as semantic textual similarity (STS), their black-box nature and large-data-driven training style have raised concerns, including issues related to bias, trust, and safety. Many efforts have been made to improve the interpretability of embedding models, but these problems have not been fundamentally resolved. To achieve inherent interpretability, we propose a purely white-box and human-like sentence representation network, PropNet. Inspired by findings from cognitive science, PropNet constructs a hierarchical network based on the propositions contained in a sentence. While experiments indicate that PropNet has a significant gap compared to state-of-the-art (SOTA) embedding models in STS tasks, case studies reveal substantial room for improvement. Additionally, PropNet enables us to analyze and understand the human cognitive processes underlying STS benchmarks. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 534,024 |
2402.13804 | Reconfigurable Intelligent Surfaces for THz: Hardware Impairments and
Switching Technologies | The demand for unprecedented performance in the upcoming 6G wireless networks is fomenting the research on THz communications empowered by Reconfigurable Intelligent Surfaces (RISs). A wide range of use cases have been proposed, most of them assuming high-level RIS models that overlook some of the hardware impairments that this technology faces. The expectation is that the emergent reconfigurable THz technologies will eventually overcome their current limitations. This disassociation from the hardware may mask nonphysical assumptions, perceived as hardware limitations. In this paper, a top-down approach bounded by physical constraints is presented, distilling from system-level specifications, hardware requirements, and upper bounds for the RIS-aided system performance. We consider D-band indoor and outdoor scenarios where a more realistic assessment of the state-of-the-art solution can be made. The goal is to highlight the intricacies of the design procedure based on sound assumptions for the RIS performance. For a given signal range and angular coverage, we quantify the required RIS size, number of switching elements, and maximum achievable bandwidth and capacity. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 431,419 |
2207.10409 | Sequence Models for Drone vs Bird Classification | Drone detection has become an essential task in object detection as drone costs have decreased and drone technology has improved. It is, however, difficult to detect distant drones when there is weak contrast, long range, and low visibility. In this work, we propose several sequence classification architectures to reduce the false-positive ratio of detected drone tracks. Moreover, we propose a new drone vs. bird sequence classification dataset to train and evaluate the proposed architectures. 3D CNN, LSTM, and Transformer based sequence classification architectures have been trained on the proposed dataset to show the effectiveness of the proposed idea. As experiments show, using sequence information, bird classification and overall F1 scores can be increased by up to 73% and 35%, respectively. Among all sequence classification models, the R(2+1)D-based fully convolutional model yields the best transfer learning and fine-tuning results. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 309,257 |
2403.05788 | On the Benefits of Fine-Grained Loss Truncation: A Case Study on
Factuality in Summarization | Text summarization and simplification are among the most widely used applications of AI. However, models developed for such tasks are often prone to hallucination, which can result from training on unaligned data. One efficient approach to address this issue is Loss Truncation (LT) (Kang and Hashimoto, 2020), which modifies the standard log loss to adaptively remove noisy examples during training. However, we find that LT alone yields a considerable number of hallucinated entities on various datasets. We study the behavior of the underlying losses between factual and non-factual examples, to understand and refine the performance of LT. We demonstrate that LT's performance is limited when the underlying assumption that noisy targets have higher NLL loss is not satisfied, and find that word-level NLL among entities provides a better signal for distinguishing factuality. We then leverage this to propose a fine-grained NLL loss and fine-grained data cleaning strategies, and observe improvements in hallucination reduction across some datasets. Our work is available at https://github.com/yale-nlp/fine-grained-lt. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 436,156 |
2308.14484 | Multimodal Detection of Bots on X (Twitter) using Transformers | Although not all bots are malicious, the vast majority of them are responsible for spreading misinformation and manipulating the public opinion about several issues, e.g., elections and many more. Therefore, the early detection of bots is crucial. Although methods for detecting bots in social media have been proposed, there are still substantial limitations. For instance, existing research initiatives still extract a large number of features and train traditional machine learning algorithms or use GloVe embeddings and train LSTMs. However, feature extraction is a tedious procedure demanding domain expertise. Also, language models based on transformers have proven to be better than LSTMs. Other approaches create large graphs and train graph neural networks, requiring in this way many hours for training and access to computational resources. To tackle these limitations, this is the first study employing only the user description field and images of three channels denoting the type and content of tweets posted by the users. Firstly, we create digital DNA sequences, transform them to 3D images, and apply pretrained models of the vision domain, including EfficientNet, AlexNet, VGG16, etc. Next, we propose a multimodal approach, where we use TwHIN-BERT for getting the textual representation of the user description field and employ VGG16 for acquiring the visual representation for the image modality. We propose three different fusion methods, namely concatenation, gated multimodal unit, and crossmodal attention, for fusing the different modalities and compare their performances. Finally, we present a qualitative analysis of the behavior of our best performing model. Extensive experiments conducted on the Cresci'17 and TwiBot-20 datasets demonstrate valuable advantages of our introduced approaches over state-of-the-art ones. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 388,348 |
1904.02817 | Unsupervised Domain Adaptation of Contextualized Embeddings for Sequence
Labeling | Contextualized word embeddings such as ELMo and BERT provide a foundation for strong performance across a wide range of natural language processing tasks by pretraining on large corpora of unlabeled text. However, the applicability of this approach is unknown when the target domain varies substantially from the pretraining corpus. We are specifically interested in the scenario in which labeled data is available in only a canonical source domain such as news text, and the target domain is distinct from both the labeled and pretraining texts. To address this scenario, we propose domain-adaptive fine-tuning, in which the contextualized embeddings are adapted by masked language modeling on text from the target domain. We test this approach on sequence labeling in two challenging domains: Early Modern English and Twitter. Both domains differ substantially from existing pretraining corpora, and domain-adaptive fine-tuning yields substantial improvements over strong BERT baselines, with particularly impressive results on out-of-vocabulary words. We conclude that domain-adaptive fine-tuning offers a simple and effective approach for the unsupervised adaptation of sequence labeling to difficult new domains. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | true | 126,534 |
2404.10419 | MAD Speech: Measures of Acoustic Diversity of Speech | Generative spoken language models produce speech in a wide range of voices, prosody, and recording conditions, seemingly approaching the diversity of natural speech. However, the extent to which generated speech is acoustically diverse remains unclear due to a lack of appropriate metrics. We address this gap by developing lightweight metrics of acoustic diversity, which we collectively refer to as MAD Speech. We focus on measuring five facets of acoustic diversity: voice, gender, emotion, accent, and background noise. We construct the metrics as a composition of specialized, per-facet embedding models and an aggregation function that measures diversity within the embedding space. Next, we build a series of datasets with a priori known diversity preferences for each facet. Using these datasets, we demonstrate that our proposed metrics achieve a stronger agreement with the ground-truth diversity than baselines. Finally, we showcase the applicability of our proposed metrics across several real-life evaluation scenarios. MAD Speech will be made publicly accessible. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 447,095 |
cs/0411011 | Capacity Analysis for Continuous Alphabet Channels with Side
Information, Part I: A General Framework | Capacity analysis for channels with side information at the receiver has been an active area of interest. This problem is well investigated for the case of finite alphabet channels. However, the results are not easily generalizable to the case of continuous alphabet channels due to analytic difficulties inherent with continuous alphabets. In the first part of this two-part paper, we address an analytical framework for capacity analysis of continuous alphabet channels with side information at the receiver. For this purpose, we establish novel necessary and sufficient conditions for weak* continuity and strict concavity of the mutual information. These conditions are used in investigating the existence and uniqueness of the capacity-achieving measures. Furthermore, we derive necessary and sufficient conditions that characterize the capacity value and the capacity-achieving measure for continuous alphabet channels with side information at the receiver. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 538,387 |
1710.09485 | Signed Network Modeling Based on Structural Balance Theory | The modeling of networks, specifically generative models, has been shown to provide a plethora of information about the underlying network structures, as well as many other benefits behind their construction. Recently there has been a considerable increase in interest in the better understanding and modeling of networks, but the vast majority of this work has been for unsigned networks. However, many networks can have positive and negative links (or signed networks), especially in online social media, and they inherently have properties not found in unsigned networks due to the added complexity. Specifically, the positive to negative link ratio and the distribution of signed triangles in the networks are properties that are unique to signed networks and would need to be explicitly modeled. This is because their underlying dynamics are not random, but controlled by social theories, such as Structural Balance Theory, which loosely states that users in social networks will prefer triadic relations that involve less tension. Therefore, we propose a model based on Structural Balance Theory and the unsigned Transitive Chung-Lu model for the modeling of signed networks. Our model introduces two parameters that are able to help maintain the positive link ratio and proportion of balanced triangles. Empirical experiments on three real-world signed networks demonstrate the importance of designing models specific to signed networks based on social theories to obtain better performance in maintaining signed network properties while generating synthetic networks. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 83,211 |
2406.19272 | Stochastic Concept Bottleneck Models | Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness. Unlike previous approaches that model the concept relations via an autoregressive structure, we introduce an explicit, distributional parameterization that allows SCBMs to retain the CBMs' efficient training and inference procedure. Additionally, we leverage the parameterization to derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 468,361 |
2403.20085 | OmniNxt: A Fully Open-source and Compact Aerial Robot with
Omnidirectional Visual Perception | Adopting omnidirectional Field of View (FoV) cameras in aerial robots vastly improves perception ability, significantly advancing aerial robotics' capabilities in inspection, reconstruction, and rescue tasks. However, such sensors also elevate system complexity, e.g., hardware design and corresponding algorithms, which limits researchers from utilizing aerial robots with omnidirectional FoV in their research. To bridge this gap, we propose OmniNxt, a fully open-source aerial robotics platform with omnidirectional perception. We design a high-performance flight controller NxtPX4 and a multi-fisheye camera set for OmniNxt. Meanwhile, the compatible software is carefully devised, which empowers OmniNxt to achieve accurate localization and real-time dense mapping with limited computation resource occupancy. We conducted extensive real-world experiments to validate the superior performance of OmniNxt in practical applications. All the hardware and software are open-access at https://github.com/HKUST-Aerial-Robotics/OmniNxt, and we provide docker images of each crucial module in the proposed system. Project page: https://hkust-aerial-robotics.github.io/OmniNxt. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 442,602 |
2407.21174 | AI Safety in Practice: Enhancing Adversarial Robustness in Multimodal
Image Captioning | Multimodal machine learning models that combine visual and textual data are increasingly being deployed in critical applications, raising significant safety and security concerns due to their vulnerability to adversarial attacks. This paper presents an effective strategy to enhance the robustness of multimodal image captioning models against such attacks. By leveraging the Fast Gradient Sign Method (FGSM) to generate adversarial examples and incorporating adversarial training techniques, we demonstrate improved model robustness on two benchmark datasets: Flickr8k and COCO. Our findings indicate that selectively training only the text decoder of the multimodal architecture shows performance comparable to full adversarial training while offering increased computational efficiency. This targeted approach suggests a balance between robustness and training costs, facilitating the ethical deployment of multimodal AI systems across various domains. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 477,432 |
2001.01168 | Facial Action Unit Detection via Adaptive Attention and Relation | Facial action unit (AU) detection is challenging due to the difficulty in capturing correlated information from subtle and dynamic AUs. Existing methods often resort to the localization of correlated regions of AUs, in which predefining local AU attentions by correlated facial landmarks often discards essential parts, or learning global attention maps often contains irrelevant areas. Furthermore, existing relational reasoning methods often employ common patterns for all AUs while ignoring the specific way of each AU. To tackle these limitations, we propose a novel adaptive attention and relation (AAR) framework for facial AU detection. Specifically, we propose an adaptive attention regression network to regress the global attention map of each AU under the constraint of attention predefinition and the guidance of AU detection, which is beneficial for capturing both specified dependencies by landmarks in strongly correlated regions and facial globally distributed dependencies in weakly correlated regions. Moreover, considering the diversity and dynamics of AUs, we propose an adaptive spatio-temporal graph convolutional network to simultaneously reason the independent pattern of each AU, the inter-dependencies among AUs, as well as the temporal dependencies. Extensive experiments show that our approach (i) achieves competitive performance on challenging benchmarks including BP4D, DISFA, and GFT in constrained scenarios and Aff-Wild2 in unconstrained scenarios, and (ii) can precisely learn the regional correlation distribution of each AU. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 159,423 |
2006.02244 | SimPool: Towards Topology Based Graph Pooling with Structural Similarity
Features | Deep learning methods for graphs have seen rapid progress in recent years, with much focus awarded to generalising Convolutional Neural Networks (CNNs) to graph data. CNNs are typically realised by alternating convolutional and pooling layers, where the pooling layers subsample the grid and exchange spatial or temporal resolution for increased feature dimensionality. Whereas the generalised convolution operator for graphs has been studied extensively and proven useful, hierarchical coarsening of graphs is still challenging since nodes in graphs have no spatial locality and no natural order. This paper proposes two main contributions. The first is a differential module calculating structural similarity features based on the adjacency matrix. These structural similarity features may be used with various algorithms; however, in this paper the focus, and the second main contribution, is on integrating these features with a revisited pooling layer, DiffPool (arXiv:1806.08804), to propose a pooling layer referred to as SimPool. This is achieved by linking the concept of network reduction by means of structural similarity in graphs with the concept of hierarchical localised pooling. Experimental results demonstrate that, as part of an end-to-end Graph Neural Network architecture, SimPool calculates node cluster assignments that functionally resemble more closely the locality-preserving pooling operations used by CNNs that operate on local receptive fields in the standard grid. Furthermore, the experimental results demonstrate that these features are useful in inductive graph classification tasks with no increase in the number of parameters. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 179,992 |
1301.2275 | Causes and Explanations: A Structural-Model Approach --- Part 1: Causes | We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definitions yield a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account. In a companion paper, we show how the definition of causality can be used to give an elegant definition of (causal) explanation. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 20,951 |
2006.04105 | Kafka-ML: connecting the data stream with ML/AI frameworks | Machine Learning (ML) and Artificial Intelligence (AI) have a dependency on data sources to train, improve and make predictions through their algorithms. With the digital revolution and current paradigms like the Internet of Things, this information is turning from static data into continuous data streams. However, most of the ML/AI frameworks used nowadays are not fully prepared for this revolution. In this paper, we propose Kafka-ML, an open-source framework that enables the management of TensorFlow ML/AI pipelines through data streams (Apache Kafka). Kafka-ML provides an accessible and user-friendly Web User Interface where users can easily define ML models, to then train, evaluate and deploy them for inference. Kafka-ML itself and its deployed components are fully managed through containerization technologies, which ensure its portability and easy distribution, as well as other features such as fault-tolerance and high availability. Finally, a novel approach has been introduced to manage and reuse data streams, which may lead to the (no) utilization of data storage and file systems. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 180,564 |
2306.03072 | Explore to Generalize in Zero-Shot RL | We study zero-shot generalization in reinforcement learning-optimizing a policy on a set of training tasks to perform well on a similar but unseen test task. To mitigate overfitting, previous work explored different notions of invariance to the task. However, on problems such as the ProcGen Maze, an adequate solution that is invariant to the task visualization does not exist, and therefore invariance-based approaches fail. Our insight is that learning a policy that effectively $\textit{explores}$ the domain is harder to memorize than a policy that maximizes reward for a specific task, and therefore we expect such learned behavior to generalize well; we indeed demonstrate this empirically on several domains that are difficult for invariance-based approaches. Our $\textit{Explore to Generalize}$ algorithm (ExpGen) builds on this insight: we train an additional ensemble of agents that optimize reward. At test time, either the ensemble agrees on an action, and we generalize well, or we take exploratory actions, which generalize well and drive us to a novel part of the state space, where the ensemble may potentially agree again. We show that our approach is the state-of-the-art on tasks of the ProcGen challenge that have thus far eluded effective generalization, yielding a success rate of $83\%$ on the Maze task and $74\%$ on Heist with $200$ training levels. ExpGen can also be combined with an invariance based approach to gain the best of both worlds, setting new state-of-the-art results on ProcGen. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 371,178 |
2009.11440 | Graph-Based Intrusion Detection System for Controller Area Networks | The controller area network (CAN) is the most widely used intra-vehicular communication network in the automotive industry. Because of its simplicity in design, it lacks most of the requirements needed for a security-proven communication protocol. However, a safe and secured environment is imperative for autonomous as well as connected vehicles. Therefore CAN security is considered one of the important topics in the automotive research community. In this paper, we propose a four-stage intrusion detection system that uses the chi-squared method and can detect any kind of strong and weak cyber attacks in a CAN. This work is the first-ever graph-based defense system proposed for the CAN. Our experimental results show that we have a very low 5.26% misclassification for denial of service (DoS) attack, 10% misclassification for fuzzy attack, 4.76% misclassification for replay attack, and no misclassification for spoofing attack. In addition, the proposed methodology exhibits up to 13.73% better accuracy compared to existing ID sequence-based methods. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | false | false | 197,170 |
1908.03632 | Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants | Voice-enabled interactions provide more human-like experiences in many popular IoT systems. Cloud-based speech analysis services extract useful information from voice input using speech recognition techniques. The voice signal is a rich resource that discloses several possible states of a speaker, such as emotional state, confidence and stress levels, physical condition, age, gender, and personal traits. Service providers can build a very accurate profile of a user's demographic category and personal preferences, and may compromise privacy. To address this problem, a privacy-preserving intermediate layer between users and cloud services is proposed to sanitize the voice input. It aims to maintain utility while preserving user privacy. It achieves this by collecting real-time speech data and analyzing the signal to ensure privacy protection prior to sharing of this data with service providers. Precisely, the sensitive representations are extracted from the raw signal by using transformation functions and then wrapped via voice conversion technology. Experimental evaluation based on emotion recognition to assess the efficacy of the proposed method shows that identification of the sensitive emotional state of the speaker is reduced by ~96%. | false | false | true | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 141,280 |
2203.03962 | Generative Cooperative Learning for Unsupervised Video Anomaly Detection | Video anomaly detection is well investigated in weakly-supervised and one-class classification (OCC) settings. However, unsupervised video anomaly detection methods are quite sparse, likely because anomalies are less frequent in occurrence and usually not well-defined, which when coupled with the absence of ground truth supervision, could adversely affect the performance of the learning algorithms. This problem is challenging yet rewarding as it can completely eradicate the costs of obtaining laborious annotations and enable such systems to be deployed without human intervention. To this end, we propose a novel unsupervised Generative Cooperative Learning (GCL) approach for video anomaly detection that exploits the low frequency of anomalies towards building a cross-supervision between a generator and a discriminator. In essence, both networks get trained in a cooperative fashion, thereby allowing unsupervised learning. We conduct extensive experiments on two large-scale video anomaly detection datasets, UCF crime, and ShanghaiTech. Consistent improvement over the existing state-of-the-art unsupervised and OCC methods corroborate the effectiveness of our approach. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 284,293 |
2108.13637 | When are Deep Networks really better than Decision Forests at small sample sizes, and how? | Deep networks and decision forests (such as random forests and gradient boosted trees) are the leading machine learning methods for structured and tabular data, respectively. Many papers have empirically compared large numbers of classifiers on one or two different domains (e.g., on 100 different tabular data settings). However, a careful conceptual and empirical comparison of these two strategies using the most contemporary best practices has yet to be performed. Conceptually, we illustrate that both can be profitably viewed as "partition and vote" schemes. Specifically, the representation space that they both learn is a partitioning of feature space into a union of convex polytopes. For inference, each decides on the basis of votes from the activated nodes. This formulation allows for a unified basic understanding of the relationship between these methods. Empirically, we compare these two strategies on hundreds of tabular data settings, as well as several vision and auditory settings. Our focus is on datasets with at most 10,000 samples, which represent a large fraction of scientific and biomedical datasets. In general, we found forests to excel at tabular and structured data (vision and audition) with small sample sizes, whereas deep nets performed better on structured data with larger sample sizes. This suggests that further gains in both scenarios may be realized via further combining aspects of forests and networks. We will continue revising this technical report in the coming months with updated results. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 252,856 |
2407.17412 | (PASS) Visual Prompt Locates Good Structure Sparsity through a Recurrent HyperNetwork | Large-scale neural networks have demonstrated remarkable performance in different domains like vision and language processing, although at the cost of massive computation resources. As illustrated by compression literature, structural model pruning is a prominent algorithm to encourage model efficiency, thanks to its acceleration-friendly sparsity patterns. One of the key questions of structural pruning is how to estimate the channel significance. In parallel, work on data-centric AI has shown that prompting-based techniques enable impressive generalization of large language models across diverse downstream tasks. In this paper, we investigate a charming possibility - \textit{leveraging visual prompts to capture the channel importance and derive high-quality structural sparsity}. To this end, we propose a novel algorithmic framework, namely \texttt{PASS}. It is a tailored hyper-network to take both visual prompts and network weight statistics as input, and output layer-wise channel sparsity in a recurrent manner. Such designs consider the intrinsic channel dependency between layers. Comprehensive experiments across multiple network architectures and six datasets demonstrate the superiority of \texttt{PASS} in locating good structural sparsity. For example, at the same FLOPs level, \texttt{PASS} subnetworks achieve $1\%\sim 3\%$ better accuracy on Food101 dataset; or with a similar performance of $80\%$ accuracy, \texttt{PASS} subnetworks obtain $0.35\times$ more speedup than the baselines. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 475,958 |
2212.02251 | Multiscale Graph Neural Networks for Protein Residue Contact Map Prediction | Machine learning (ML) is revolutionizing protein structural analysis, including an important subproblem of predicting protein residue contact maps, i.e., which amino-acid residues are in close spatial proximity given the amino-acid sequence of a protein. Despite recent progress in ML-based protein contact prediction, predicting contacts with a wide range of distances (commonly classified into short-, medium- and long-range contacts) remains a challenge. Here, we propose a multiscale graph neural network (GNN) based approach taking a cue from multiscale physics simulations, in which a standard pipeline involving a recurrent neural network (RNN) is augmented with three GNNs to refine predictive capability for short-, medium- and long-range residue contacts, respectively. Test results on the ProteinNet dataset show improved accuracy for contacts of all ranges using the proposed multiscale RNN+GNN approach over the conventional approach, including the most challenging case of long-range contact prediction. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 334,738 |
2112.11602 | Causal Inference Despite Limited Global Confounding via Mixture Models | A Bayesian Network is a directed acyclic graph (DAG) on a set of $n$ random variables (the vertices); a Bayesian Network Distribution (BND) is a probability distribution on the random variables that is Markovian on the graph. A finite $k$-mixture of such models is graphically represented by a larger graph which has an additional ``hidden'' (or ``latent'') random variable $U$, ranging in $\{1,\ldots,k\}$, and a directed edge from $U$ to every other vertex. Models of this type are fundamental to causal inference, where $U$ models an unobserved confounding effect of multiple populations, obscuring the causal relationships in the observable DAG. By solving the mixture problem and recovering the joint probability distribution with $U$, traditionally unidentifiable causal relationships become identifiable. Using a reduction to the more well-studied ``product'' case on empty graphs, we give the first algorithm to learn mixtures of non-empty DAGs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 272,743 |
2206.07745 | When to intervene? Prescriptive Process Monitoring Under Uncertainty and Resource Constraints | Prescriptive process monitoring approaches leverage historical data to prescribe runtime interventions that will likely prevent negative case outcomes or improve a process's performance. A centerpiece of a prescriptive process monitoring method is its intervention policy: a decision function determining if and when to trigger an intervention on an ongoing case. Previous proposals in this field rely on intervention policies that consider only the current state of a given case. These approaches do not consider the tradeoff between triggering an intervention in the current state, given the level of uncertainty of the underlying predictive models, versus delaying the intervention to a later state. Moreover, they assume that a resource is always available to perform an intervention (infinite capacity). This paper addresses these gaps by introducing a prescriptive process monitoring method that filters and ranks ongoing cases based on prediction scores, prediction uncertainty, and causal effect of the intervention, and triggers interventions to maximize a gain function, considering the available resources. The proposal is evaluated using a real-life event log. The results show that the proposed method outperforms existing baselines regarding total gain. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 302,865 |
2410.06424 | Restructuring Vector Quantization with the Rotation Trick | Vector Quantized Variational AutoEncoders (VQ-VAEs) are designed to compress a continuous input to a discrete latent space and reconstruct it with minimal distortion. They operate by maintaining a set of vectors -- often referred to as the codebook -- and quantizing each encoder output to the nearest vector in the codebook. However, as vector quantization is non-differentiable, the gradient to the encoder flows around the vector quantization layer rather than through it in a straight-through approximation. This approximation may be undesirable as all information from the vector quantization operation is lost. In this work, we propose a way to propagate gradients through the vector quantization layer of VQ-VAEs. We smoothly transform each encoder output into its corresponding codebook vector via a rotation and rescaling linear transformation that is treated as a constant during backpropagation. As a result, the relative magnitude and angle between encoder output and codebook vector becomes encoded into the gradient as it propagates through the vector quantization layer and back to the encoder. Across 11 different VQ-VAE training paradigms, we find this restructuring improves reconstruction metrics, codebook utilization, and quantization error. Our code is available at https://github.com/cfifty/rotation_trick. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 496,195 |
1612.00132 | CDVAE: Co-embedding Deep Variational Auto Encoder for Conditional Variational Generation | Problems such as predicting a new shading field (Y) for an image (X) are ambiguous: many very distinct solutions are good. Representing this ambiguity requires building a conditional model P(Y|X) of the prediction, conditioned on the image. Such a model is difficult to train, because we do not usually have training data containing many different shadings for the same image. As a result, we need different training examples to share data to produce good models. This presents a danger we call "code space collapse" - the training procedure produces a model that has a very good loss score, but which represents the conditional distribution poorly. We demonstrate an improved method for building conditional models by exploiting a metric constraint on training data that prevents code space collapse. We demonstrate our model on two example tasks using real data: image saturation adjustment, image relighting. We describe quantitative metrics to evaluate ambiguous generation results. Our results quantitatively and qualitatively outperform different strong baselines. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | true | 64,829 |
2004.14581 | Feedback U-net for Cell Image Segmentation | The human brain is a layered structure that performs not only a feedforward process from lower layers to upper layers but also a feedback process from upper layers to lower layers. A layer is a collection of neurons, and a neural network is a mathematical model of the function of neurons. Although neural networks imitate the human brain, conventional models use only the feedforward process from the lower layer to the upper layer; the feedback process from the upper layer to the lower layer is not used. Therefore, in this paper, we propose Feedback U-Net using Convolutional LSTM, a segmentation method combining Convolutional LSTM with a feedback process. The output of the U-Net is fed back to the input, and a second round is performed. By using Convolutional LSTM, the features in the second round are extracted based on the features acquired in the first round. On both the Drosophila cell image and Mouse cell image datasets, our method outperformed conventional U-Net, which uses only the feedforward process. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 174,942 |
1805.02919 | Learning Short-Cut Connections for Object Counting | Object counting is an important task in computer vision due to its growing demand in applications such as traffic monitoring or surveillance. In this paper, we consider object counting as a learning problem of a joint feature extraction and pixel-wise object density estimation with Convolutional-Deconvolutional networks. We introduce a novel counting model, named Gated U-Net (GU-Net). Specifically, we propose to enrich the U-Net architecture with the concept of learnable short-cut connections. Standard short-cut connections are connections between layers in deep neural networks which skip at least one intermediate layer. Instead of simply setting short-cut connections, we propose to learn these connections from data. Therefore, our short-cuts can work as gating units, which optimize the flow of information between convolutional and deconvolutional layers in the U-Net architecture. We evaluate the introduced GU-Net architecture on three commonly used benchmark data sets for object counting. GU-Nets consistently outperform the base U-Net architecture, and achieve state-of-the-art performance. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 96,950 |
2309.06192 | Improving and Evaluating the Detection of Fragmentation in News Recommendations with the Clustering of News Story Chains | News recommender systems play an increasingly influential role in shaping information access within democratic societies. However, tailoring recommendations to users' specific interests can result in the divergence of information streams. Fragmented access to information poses challenges to the integrity of the public sphere, thereby influencing democracy and public discourse. The Fragmentation metric quantifies the degree of fragmentation of information streams in news recommendations. Accurate measurement of this metric requires the application of Natural Language Processing (NLP) to identify distinct news events, stories, or timelines. This paper presents an extensive investigation of various approaches for quantifying Fragmentation in news recommendations. These approaches are evaluated both intrinsically, by measuring performance on news story clustering, and extrinsically, by assessing the Fragmentation scores of different simulated news recommender scenarios. Our findings demonstrate that agglomerative hierarchical clustering coupled with SentenceBERT text representation is substantially better at detecting Fragmentation than earlier implementations. Additionally, the analysis of simulated scenarios yields valuable insights and recommendations for stakeholders concerning the measurement and interpretation of Fragmentation. | false | false | false | false | false | true | false | false | true | false | false | false | false | true | false | false | false | false | 391,342 |
2104.01027 | Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training | Self-supervised learning of speech representations has been a very active research area but most work is focused on a single domain such as read audio books for which there exist large quantities of labeled and unlabeled data. In this paper, we explore more general setups where the domain of the unlabeled data for pre-training differs from the domain of the labeled data for fine-tuning, which in turn may differ from the test data domain. Our experiments show that using target domain data during pre-training leads to large performance improvements across a variety of setups. On a large-scale competitive setup, we show that pre-training on unlabeled in-domain data reduces the gap between models trained on in-domain and out-of-domain labeled data by 66%-73%. This has obvious practical implications since it is much easier to obtain unlabeled target domain data than labeled data. Moreover, we find that pre-training on multiple domains improves generalization performance on domains not seen during training. Code and models will be made available at https://github.com/pytorch/fairseq. | false | false | true | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 228,208 |
2407.05619 | AIRA: A Low-cost IR-based Approach Towards Autonomous Precision Drone Landing and NLOS Indoor Navigation | Automatic drone landing is an important step for achieving fully autonomous drones. Although there are many works that leverage GPS, video, wireless signals, and active acoustic sensing to perform precise landing, autonomous drone landing remains an unsolved challenge for palm-sized microdrones that may not be able to support the high computational requirements of vision, wireless, or active audio sensing. We propose AIRA, a low-cost infrared light-based platform that targets precise and efficient landing of low-resource microdrones. AIRA consists of an infrared light bulb at the landing station along with an energy efficient hardware photodiode (PD) sensing platform at the bottom of the drone. AIRA costs under 83 USD, while achieving comparable performance to existing vision-based methods at a fraction of the energy cost. AIRA requires only three PDs without any complex pattern recognition models to accurately land the drone, under $10$cm of error, from up to $11.1$ meters away, compared to camera-based methods that require recognizing complex markers using high resolution images with a range of only up to $1.2$ meters from the same height. Moreover, we demonstrate that AIRA can accurately guide drones in low light and partial non line of sight scenarios, which are difficult for traditional vision-based approaches. | false | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | 471,058 |
1908.07841 | Ranking Viscous Finger Simulations to an Acquired Ground Truth with Topology-aware Matchings | This application paper presents a novel framework based on topological data analysis for the automatic evaluation and ranking of viscous finger simulation runs in an ensemble with respect to a reference acquisition. Individual fingers in a given time-step are associated with critical point pairs in the distance field to the injection point, forming persistence diagrams. Different metrics, based on optimal transport, for comparing time-varying persistence diagrams in this specific applicative case are introduced. We evaluate the relevance of the rankings obtained with these metrics, both qualitatively thanks to a lightweight web visual interface, and quantitatively by studying the deviation from a reference ranking suggested by experts. Extensive experiments show the quantitative superiority of our approach compared to traditional alternatives. Our web interface allows experts to conveniently explore the produced rankings. We show a complete viscous fingering case study demonstrating the utility of our approach in the context of porous media fluid flow, where our framework can be used to automatically discard physically-irrelevant simulation runs from the ensemble and rank the most plausible ones. We document an in-situ implementation to lighten I/O and performance constraints arising in the context of parametric studies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | true | 142,394 |
2306.08737 | A Networked Multi-Agent System for Mobile Wireless Infrastructure on Demand | Despite the prevalence of wireless connectivity in urban areas around the globe, there remain numerous and diverse situations where connectivity is insufficient or unavailable. To address this, we introduce mobile wireless infrastructure on demand, a system of UAVs that can be rapidly deployed to establish an ad-hoc wireless network. This network has the capability of reconfiguring itself dynamically to satisfy and maintain the required quality of communication. The system optimizes the positions of the UAVs and the routing of data flows throughout the network to achieve this quality of service (QoS). By these means, task agents using the network simply request a desired QoS, and the system adapts accordingly while allowing them to move freely. We have validated this system both in simulation and in real-world experiments. The results demonstrate that our system effectively offers mobile wireless infrastructure on demand, extending the operational range of task agents and supporting complex mobility patterns, all while ensuring connectivity and being resilient to agent failures. | false | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | 373,519 |
2106.12417 | False perfection in machine prediction: Detecting and assessing circularity problems in machine learning | This paper is an excerpt of an early version of Chapter 2 of the book "Validity, Reliability, and Significance. Empirical Methods for NLP and Data Science", by Stefan Riezler and Michael Hagmann, published in December 2021 by Morgan & Claypool. Please see the book's homepage at https://www.morganclaypoolpublishers.com/catalog_Orig/product_info.php?products_id=1688 for a more recent and comprehensive discussion. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 242,721 |
2310.17753 | Bin Assignment and Decentralized Path Planning for Multi-Robot Parcel Sorting | At modern warehouses, mobile robots transport packages and drop them into collection bins/chutes based on shipping destinations grouped by, e.g., the ZIP code. System throughput, measured as the number of packages sorted per unit of time, determines the efficiency of the warehouse. This research develops a scalable, high-throughput multi-robot parcel sorting solution, decomposing the task into two related processes, bin assignment and offline/online multi-robot path planning, and optimizing both. Bin assignment matches collection bins with package types to minimize traveling costs. Subsequently, robots are assigned to pick up and drop packages into assigned bins. Multiple highly effective bin assignment algorithms are proposed that can work with an arbitrary planning algorithm. We propose a decentralized path planning routine using only local information to route the robots over a carefully constructed directed road network for multi-robot path planning. Our decentralized planner, provably probabilistically deadlock-free, consistently delivers near-optimal results on par with some top-performing centralized planners while significantly reducing computation times by orders of magnitude. Extensive simulations show that our overall framework delivers promising performances. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 403,273 |
1912.12325 | ODE-based Deep Network for MRI Reconstruction | Fast data acquisition in Magnetic Resonance Imaging (MRI) is vastly in demand and scan time directly depends on the number of acquired k-space samples. The data-driven methods based on deep neural networks have resulted in promising improvements, compared to the conventional methods, in image reconstruction algorithms. The connection between deep neural network and Ordinary Differential Equation (ODE) has been observed and studied recently. The studies show that different residual networks can be interpreted as Euler discretization of an ODE. In this paper, we propose an ODE-based deep network for MRI reconstruction to enable the rapid acquisition of MR images with improved image quality. Our results with undersampled data demonstrate that our method can deliver higher quality images in comparison to the reconstruction methods based on the standard UNet network and Residual network. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 158,818 |
2401.04749 | LogFormer: A Pre-train and Tuning Pipeline for Log Anomaly Detection | Log anomaly detection is a key component in the field of artificial intelligence for IT operations (AIOps). Considering log data of variant domains, retraining the whole network for unknown domains is inefficient in real industrial scenarios. However, previous deep models merely focused on extracting the semantics of log sequences in the same domain, leading to poor generalization on multi-domain logs. To alleviate this issue, we propose a unified Transformer-based framework for Log anomaly detection (LogFormer) to improve the generalization ability across different domains, where we establish a two-stage process including the pre-training and adapter-based tuning stage. Specifically, our model is first pre-trained on the source domain to obtain shared semantic knowledge of log data. Then, we transfer such knowledge to the target domain via shared parameters. Besides, the Log-Attention module is proposed to supplement the information ignored by the log-paring. The proposed method is evaluated on three public and one real-world datasets. Experimental results on multiple benchmarks demonstrate the effectiveness of our LogFormer with fewer trainable parameters and lower training costs. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | true | 420,531 |
1312.6410 | A Survey on Eye-Gaze Tracking Techniques | Study of eye-movement is being employed in Human Computer Interaction (HCI) research. Eye - gaze tracking is one of the most challenging problems in the area of computer vision. The goal of this paper is to present a review of latest research in this continued growth of remote eye-gaze tracking. This overview includes the basic definitions and terminologies, recent advances in the field and finally the need of future development in the field. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 29,362 |
2412.00928 | A Deep Generative Model for the Design of Synthesizable Ionizable Lipids | Lipid nanoparticles (LNPs) are vital in modern biomedicine, enabling the effective delivery of mRNA for vaccines and therapies by protecting it from rapid degradation. Among the components of LNPs, ionizable lipids play a key role in RNA protection and facilitate its delivery into the cytoplasm. However, designing ionizable lipids is complex. Deep generative models can accelerate this process and explore a larger candidate space compared to traditional methods. Due to the structural differences between lipids and small molecules, existing generative models used for small molecule generation are unsuitable for lipid generation. To address this, we developed a deep generative model specifically tailored for the discovery of ionizable lipids. Our model generates novel ionizable lipid structures and provides synthesis paths using synthetically accessible building blocks, addressing synthesizability. This advancement holds promise for streamlining the development of lipid-based delivery systems, potentially accelerating the deployment of new therapeutic agents, including mRNA vaccines and gene therapies. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 512,871 |
2207.01424 | On MDS Codes With Galois Hulls of Arbitrary Dimensions | The Galois hulls of linear codes are a generalization of the Euclidean and Hermitian hulls of linear codes. In this paper, we study the Galois hulls of (extended) GRS codes and present several new constructions of MDS codes with Galois hulls of arbitrary dimensions via (extended) GRS codes. Two general methods of constructing MDS codes with Galois hulls of arbitrary dimensions by Hermitian or general Galois self-orthogonal (extended) GRS codes are given. Using these methods, some MDS codes with larger dimensions and Galois hulls of arbitrary dimensions can be obtained and relatively strict conditions can also lead to many new classes of MDS codes with Galois hulls of arbitrary dimensions. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 306,172 |
2303.04548 | Estimation of the qualification and behavior of a contributor and aggregation of his answers in a crowdsourcing context | Crowdsourcing is the outsourcing of tasks to a crowd of contributors on a dedicated platform. The crowd on these platforms is very diversified and includes various profiles of contributors which generates data of uneven quality. However, majority voting, which is the aggregating method commonly used in platforms, gives equal weight to each contribution. To overcome this problem, we propose a method, MONITOR, which estimates the contributor's profile and aggregates the collected data by taking into account their possible imperfections thanks to the theory of belief functions. To do so, MONITOR starts by estimating the profile of the contributor through his qualification for the task and his behavior. Crowdsourcing campaigns have been carried out to collect the necessary data to test MONITOR on real data in order to compare it to existing approaches. The results of the experiments show that, thanks to the use of the MONITOR method, we obtain a better rate of correct answers after aggregation of the contributions compared to majority voting. Our contributions in this article are, first, the proposal of a model that takes into account both the qualification of the contributor and his behavior in the estimation of his profile, and second, the weakening and aggregation of the answers according to the estimated profiles. | true | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 350,129 |
2004.14487 | Teaching Cameras to Feel: Estimating Tactile Physical Properties of Surfaces From Images | The connection between visual input and tactile sensing is critical for object manipulation tasks such as grasping and pushing. In this work, we introduce the challenging task of estimating a set of tactile physical properties from visual information. We aim to build a model that learns the complex mapping between visual information and tactile physical properties. We construct a first of its kind image-tactile dataset with over 400 multiview image sequences and the corresponding tactile properties. A total of fifteen tactile physical properties across categories including friction, compliance, adhesion, texture, and thermal conductance are measured and then estimated by our models. We develop a cross-modal framework comprised of an adversarial objective and a novel visuo-tactile joint classification loss. Additionally, we develop a neural architecture search framework capable of selecting optimal combinations of viewing angles for estimating a given physical property. | false | false | false | false | false | false | true | true | false | false | false | true | false | false | false | false | false | false | 174,898 |
2106.01972 | SOCCER: An Information-Sparse Discourse State Tracking Collection in the
Sports Commentary Domain | In the pursuit of natural language understanding, there has been a long standing interest in tracking state changes throughout narratives. Impressive progress has been made in modeling the state of transaction-centric dialogues and procedural texts. However, this problem has been less intensively studied in the realm of general discourse where ground truth descriptions of states may be loosely defined and state changes are less densely distributed over utterances. This paper proposes to turn to simplified, fully observable systems that show some of these properties: Sports events. We curated 2,263 soccer matches including time-stamped natural language commentary accompanied by discrete events such as a team scoring goals, switching players or being penalized with cards. We propose a new task formulation where, given paragraphs of commentary of a game at different timestamps, the system is asked to recognize the occurrence of in-game events. This domain allows for rich descriptions of state while avoiding the complexities of many other real-world settings. As an initial point of performance measurement, we include two baseline methods from the perspectives of sentence classification with temporal dependence and current state-of-the-art generative model, respectively, and demonstrate that even sophisticated existing methods struggle on the state tracking task when the definition of state broadens or non-event chatter becomes prevalent. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 238,680 |
1401.3477 | Solving Weighted Constraint Satisfaction Problems with Memetic/Exact
Hybrid Algorithms | A weighted constraint satisfaction problem (WCSP) is a constraint satisfaction problem in which preferences among solutions can be expressed. Bucket elimination is a complete technique commonly used to solve this kind of constraint satisfaction problem. When the memory required to apply bucket elimination is too high, a heuristic method based on it (denominated mini-buckets) can be used to calculate bounds for the optimal solution. Nevertheless, the curse of dimensionality makes these techniques impractical on large-scale problems. In response to this situation, we present a memetic algorithm for WCSPs in which bucket elimination is used as a mechanism for recombining solutions, providing the best possible child from the parental set. Subsequently, a multi-level model in which this exact/metaheuristic hybrid is further hybridized with branch-and-bound techniques and mini-buckets is studied. As a case study, we have applied these algorithms to the resolution of the maximum density still life problem, a hard constraint optimization problem based on Conway's Game of Life. The resulting algorithm consistently finds optimal patterns for up-to-date solved instances in less time than current approaches. Moreover, it is shown that this proposal provides new best known solutions for very large instances. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 29,882 |
1604.08076 | The algebro-geometric study of range maps | Localizing a radiant source is a widespread problem to many scientific and technological research areas. E.g. localization based on range measurements stays at the core of technologies like radar, sonar and wireless sensors networks. In this manuscript we study in depth the model for source localization based on range measurements obtained from the source signal, from the point of view of algebraic geometry. In the case of three receivers, we find unexpected connections between this problem and the geometry of Kummer's and Cayley's surfaces. Our work gives new insights also on the localization based on range differences. | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 55,162 |
1605.01478 | Modeling Rich Contexts for Sentiment Classification with LSTM | Sentiment analysis on social media data such as tweets and weibo has become a very important and challenging task. Due to the intrinsic properties of such data, tweets are short, noisy, and of divergent topics, and sentiment classification on these data requires modeling various contexts such as the retweet/reply history of a tweet, and the social context about authors and relationships. While few prior studies have approached the issue of modeling contexts in tweets, this paper proposes to use a hierarchical LSTM to model rich contexts in tweets, particularly long-range context. Experimental results show that contexts can help us to perform sentiment classification remarkably better. | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | false | false | false | 55,486 |
1607.03260 | Modified LLL algorithm with shifted start column | Multiple-input multiple-output (MIMO) systems are playing an important role in recent wireless communication. The complexity of the different system models challenges researchers to strike a good complexity-to-performance balance. Lattice reduction techniques and the Lenstra-Lenstra-Lovasz (LLL) algorithm bring more resources to investigate and can contribute to complexity reduction purposes. In this paper, we are looking to modify the LLL algorithm to reduce the computation operations by exploiting the structure of the upper triangular matrix without big performance degradation. Basically, the first columns of the upper triangular matrix contain many zeroes, so the algorithm will perform several operations with very limited gain. We present a performance and complexity study, and our proposal shows that we can gain in terms of complexity while the performance results remain almost the same. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 58,480 |
2410.10247 | LOBG: Less Overfitting for Better Generalization in Vision-Language Model | Existing prompt learning methods in Vision-Language Models (VLM) have effectively enhanced the transfer capability of VLM to downstream tasks, but they suffer from a significant decline in generalization due to severe overfitting. To address this issue, we propose a framework named LOBG for vision-language models. Specifically, we use CLIP to filter out fine-grained foreground information that might cause overfitting, thereby guiding prompts with basic visual concepts. To further mitigate overfitting, we developed a structural topology preservation (STP) loss at the feature level, which endows the feature space with overall plasticity, allowing effective reshaping of the feature space during optimization. Additionally, we employed hierarchical logit distillation (HLD) at the output level to constrain outputs, complementing STP at the output end. Extensive experimental results demonstrate that our method significantly improves generalization capability and alleviates overfitting compared to state-of-the-art approaches. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 497,981 |