id | title | abstract | cs.HC | cs.CE | cs.SD | cs.SI | cs.AI | cs.IR | cs.LG | cs.RO | cs.CL | cs.IT | cs.SY | cs.CV | cs.CR | cs.CY | cs.MA | cs.NE | cs.DB | Other | __index_level_0__ |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2411.09410 | LLM-assisted Explicit and Implicit Multi-interest Learning Framework for Sequential Recommendation | Multi-interest modeling in current recommender systems (RS) is mainly based on user behavioral data, capturing user interest preferences from multiple dimensions. However, since behavioral data is implicit and often highly sparse, it is challenging to understand users' complex and diverse interests. Recent studies have shown that the rich semantic information in text can effectively supplement the deficiencies of behavioral data. Despite this, it is still difficult for small models to directly extract semantic features associated with users' deep interests. That is, how to effectively align semantics with behavioral information to form a more comprehensive and accurate understanding of user interests has become a critical research problem. To address this, we propose an LLM-assisted explicit and implicit multi-interest learning framework (named EIMF) to model user interests on two levels: behavior and semantics. The framework consists of two parts: an Implicit Behavioral Interest Module (IBIM) and an Explicit Semantic Interest Module (ESIM). The traditional multi-interest RS model in IBIM learns users' implicit behavioral interests from interactions with items. In ESIM, we first adopt a clustering algorithm to select typical samples and design a prompting strategy for the LLM to obtain explicit semantic interests. Furthermore, in the training phase, the semantic interests of typical samples enhance the representation learning of behavioral interests through multi-task learning on semantic prediction and modality alignment. Therefore, in the inference stage, accurate recommendations can be achieved with only the user's behavioral data. Extensive experiments on real-world datasets demonstrate the effectiveness of the proposed EIMF framework, which effectively and efficiently combines small models with an LLM to improve the accuracy of multi-interest modeling. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 508,237 |
2405.16542 | Mamba4KT: An Efficient and Effective Mamba-based Knowledge Tracing Model | Knowledge tracing (KT) enhances student learning by leveraging past performance to predict future performance. Current research utilizes models based on attention mechanisms and recurrent neural network structures to capture long-term dependencies and correlations between exercises, aiming to improve model accuracy. The growing amount of data in smart education scenarios poses a challenge for knowledge tracing models in terms of time and space consumption. However, existing research often overlooks the efficiency of model training and inference and the constraints of training resources. Recognizing the significance of prioritizing model efficiency and resource usage in knowledge tracing, we introduce Mamba4KT. This novel model is the first to explore enhanced efficiency and resource utilization in knowledge tracing. We also examine the interpretability of the Mamba structure at both the sequence level and the exercise level to enhance model interpretability. Experimental findings across three public datasets demonstrate that Mamba4KT achieves comparable prediction accuracy to state-of-the-art models while significantly improving training and inference efficiency and resource utilization. As educational data continues to grow, our work suggests a promising research direction for knowledge tracing that improves model prediction accuracy, model efficiency, resource utilization, and interpretability simultaneously. | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | false | false | 457,480 |
2405.06227 | MaskMatch: Boosting Semi-Supervised Learning Through Mask Autoencoder-Driven Feature Learning | Conventional methods in semi-supervised learning (SSL) often face challenges related to limited data utilization, mainly due to their reliance on threshold-based techniques for selecting high-confidence unlabeled data during training. Various efforts (e.g., FreeMatch) have been made to enhance data utilization by tweaking the thresholds, yet none have managed to use 100% of the available data. To overcome this limitation and improve SSL performance, we introduce \algo, a novel algorithm that fully utilizes unlabeled data to boost semi-supervised learning. \algo integrates a self-supervised learning strategy, i.e., Masked Autoencoder (MAE), that uses all available data to enforce visual representation learning. This enables the SSL algorithm to leverage all available data, including samples typically filtered out by traditional methods. In addition, we propose a synthetic data training approach to further increase data utilization and improve generalization. These innovations lead \algo to achieve state-of-the-art results on challenging datasets. For instance, on CIFAR-100 with 2 labels per class, STL-10 with 4 labels per class, and Euro-SAT with 2 labels per class, \algo achieves low error rates of 18.71%, 9.47%, and 3.07%, respectively. The code will be made publicly available. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 453,218 |
1808.06509 | Optimized Rate-Adaptive Protograph-Based LDPC Codes for Source Coding with Side Information | This paper considers the problem of source coding with side information at the decoder, also called the Slepian-Wolf source coding scheme. In practical applications of this coding scheme, the statistical relation between the source and the side information can vary from one data transmission to another, and there is a need to adapt the coding rate depending on the current statistical relation. In this paper, we propose a novel rate-adaptive code construction based on LDPC codes for the Slepian-Wolf source coding scheme. The proposed code design method makes it possible to optimize the code degree distributions at all the considered rates, while minimizing the number of short cycles in the parity check matrices at all rates. Simulation results show that the proposed method greatly reduces the source coding rate compared to the standard LDPCA solution. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 105,547 |
1812.11485 | Partially Non-Recurrent Controllers for Memory-Augmented Neural Networks | Memory-Augmented Neural Networks (MANNs) are a class of neural networks equipped with an external memory, and are reported to be effective for tasks requiring a large long-term memory and its selective use. The core module of a MANN is called a controller, which is usually implemented as a recurrent neural network (RNN) (e.g., LSTM) to enable the use of contextual information in controlling the other modules. However, such an RNN-based controller often allows a MANN to directly solve the given task by using the (small) internal memory of the controller, and prevents the MANN from making the best use of the external memory, thereby resulting in a suboptimally trained model. To address this problem, we present a novel type of RNN-based controller that is partially non-recurrent and avoids the direct use of its internal memory for solving the task, while keeping the ability of using contextual information in controlling the other modules. Our empirical experiments using Neural Turing Machines and Differentiable Neural Computers on the Toy and bAbI tasks demonstrate that the proposed controllers give substantially better results than standard RNN-based controllers. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 117,574 |
2405.05187 | A score-based particle method for homogeneous Landau equation | We propose a novel score-based particle method for solving the Landau equation in plasmas that seamlessly integrates learning with structure-preserving particle methods [arXiv:1910.03080]. Building upon the Lagrangian viewpoint of the Landau equation, a central challenge stems from the nonlinear dependence of the velocity field on the density. Our primary innovation lies in recognizing that this nonlinearity is in the form of the score function, which can be approximated dynamically via techniques from score-matching. The resulting method inherits the conservation properties of the deterministic particle method while sidestepping the necessity for kernel density estimation in [arXiv:1910.03080]. This streamlines computation and enhances scalability with dimensionality. Furthermore, we provide a theoretical estimate by demonstrating that the KL divergence between our approximation and the true solution can be effectively controlled by the score-matching loss. Additionally, by adopting the flow map viewpoint, we derive an update formula for exact density computation. Extensive examples have been provided to show the efficiency of the method, including a physically relevant case of Coulomb interaction. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 452,827 |
2210.06728 | On the Efficient Implementation of High Accuracy Optimality of Profile Maximum Likelihood | We provide an efficient unified plug-in approach for estimating symmetric properties of distributions given $n$ independent samples. Our estimator is based on profile-maximum-likelihood (PML) and is sample optimal for estimating various symmetric properties when the estimation error $\epsilon \gg n^{-1/3}$. This result improves upon the previous best accuracy threshold of $\epsilon \gg n^{-1/4}$ achievable by polynomial time computable PML-based universal estimators [ACSS21, ACSS20]. Our estimator reaches a theoretical limit for universal symmetric property estimation as [Han21] shows that a broad class of universal estimators (containing many well known approaches including ours) cannot be sample optimal for every $1$-Lipschitz property when $\epsilon \ll n^{-1/3}$. | false | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | true | 323,417 |
2202.07021 | QuadSim: A Quadcopter Rotational Dynamics Simulation Framework For Reinforcement Learning Algorithms | This study focuses on designing and developing a mathematically based quadcopter rotational dynamics simulation framework for testing reinforcement learning (RL) algorithms in many flexible configurations. The design of the simulation framework aims to simulate both linear and nonlinear representations of a quadcopter by solving initial value problems for ordinary differential equation (ODE) systems. In addition, the simulation environment can be made deterministic or stochastic by adding random Gaussian noise in the form of process and measurement noises. To ensure that the scope of this simulation environment is not limited only to our own RL algorithms, the environment has been made compatible with the OpenAI Gym toolkit. The framework also supports multiprocessing capabilities to run simulation environments simultaneously in parallel. To test these capabilities, many state-of-the-art deep RL algorithms were trained in this simulation framework and the results were compared in detail. | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | false | false | 280,398 |
2307.08104 | Using Decision Trees for Interpretable Supervised Clustering | In this paper, we address the issue of finding explainable clusters of class-uniform data in labelled datasets. The issue falls into the domain of interpretable supervised clustering. Unlike traditional clustering, supervised clustering aims at forming clusters of labelled data with high probability densities. We are particularly interested in finding clusters of data of a given class and describing the clusters with a set of comprehensive rules. We propose an iterative method to extract high-density clusters with the help of decision-tree-based classifiers as the most intuitive learning method, and discuss the method of node selection to maximize the quality of identified groups. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 379,673 |
1710.08500 | Are Multiagent Systems Resilient to Communication Failures? | A challenge in multiagent control systems is to ensure that they are appropriately resilient to communication failures between the various agents. In many common game-theoretic formulations of these types of systems, it is implicitly assumed that all agents have access to as much information about other agents' actions as needed. This paper endeavors to augment these game-theoretic methods with policies that would allow agents to react on-the-fly to losses of this information. Unfortunately, we show that even if a single agent loses communication with one other weakly-coupled agent, this can cause arbitrarily-bad system states to emerge as various solution concepts of an associated game, regardless of how the agent accounts for the communication failure and regardless of how weakly coupled the agents are. Nonetheless, we show that the harm that communication failures can cause is limited by the structure of the problem; when agents' action spaces are richer, problems are more susceptible to these types of pathologies. Finally, we undertake an initial study into how a system designer might prevent these pathologies, and explore a few limited settings in which communication failures cannot cause harm. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | false | true | 83,086 |
2303.05780 | TAKT: Target-Aware Knowledge Transfer for Whole Slide Image Classification | Transferring knowledge from a source domain to a target domain can be crucial for whole slide image classification, since the number of samples in a dataset is often limited due to high annotation costs. However, domain shift and task discrepancy between datasets can hinder effective knowledge transfer. In this paper, we propose a Target-Aware Knowledge Transfer framework, employing a teacher-student paradigm. Our framework enables the teacher model to learn common knowledge from the source and target domains by actively incorporating unlabelled target images into the training of the teacher model. The teacher bag features are subsequently adapted to supervise the training of the student model on the target domain. Despite incorporating the target features during training, the teacher model tends to overlook them under the inherent domain shift and task discrepancy. To alleviate this, we introduce a target-aware feature alignment module to establish a transferable latent relationship between the source and target features by solving the optimal transport problem. Experimental results show that models employing knowledge transfer outperform those trained from scratch, and our method achieves state-of-the-art performance among other knowledge transfer methods on various datasets, including TCGA-RCC, TCGA-NSCLC, and Camelyon16. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 350,600 |
2412.03055 | Real-Time AIoT for UAV Antenna Interference Detection via Edge-Cloud Collaboration | In the fifth-generation (5G) era, eliminating communication interference sources is crucial for maintaining network performance. Interference often originates from unauthorized or malfunctioning antennas, and radio monitoring agencies must address numerous sources of such antennas annually. Unmanned aerial vehicles (UAVs) can improve inspection efficiency. However, the data transmission delay in the existing cloud-only (CO) artificial intelligence (AI) mode fails to meet the low latency requirements for real-time performance. Therefore, we propose a computer vision-based AI of Things (AIoT) system to detect antenna interference sources for UAVs. The system adopts an optimized edge-cloud collaboration (ECC+) mode, combining a keyframe selection algorithm (KSA), focusing on reducing end-to-end latency (E2EL) and ensuring reliable data transmission, which aligns with the core principles of ultra-reliable low-latency communication (URLLC). At the core of our approach is an end-to-end antenna localization scheme based on the tracking-by-detection (TBD) paradigm, including a detector (EdgeAnt) and a tracker (AntSort). EdgeAnt achieves state-of-the-art (SOTA) performance with a mean average precision (mAP) of 42.1% on our custom antenna interference source dataset, requiring only 3 million parameters and 14.7 GFLOPs. On the COCO dataset, EdgeAnt achieves 38.9% mAP with 5.4 GFLOPs. We deployed EdgeAnt on Jetson Xavier NX (TRT) and Raspberry Pi 4B (NCNN), achieving real-time inference speeds of 21.1 (1088) and 4.8 (640) frames per second (FPS), respectively. Compared with the CO mode, the ECC+ mode reduces E2EL by 88.9% and increases accuracy by 28.2%. Additionally, the system offers excellent scalability for coordinated multi-UAV inspections. The detector code is publicly available at https://github.com/SCNU-RISLAB/EdgeAnt. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 513,795 |
2103.00919 | Computing the sampling performance of event-triggered control | In the context of networked control systems, event-triggered control (ETC) has emerged as a major topic due to its alleged resource usage reduction capabilities. However, this is mainly supported by numerical simulations, and very little is formally known about the traffic generated by ETC. This work devises a method to estimate, and in some cases to determine exactly, the minimum average inter-sample time (MAIST) generated by periodic event-triggered control (PETC) of linear systems. The method involves abstracting the traffic model using a bisimulation refinement algorithm and finding the cycle of minimum average length in the graph associated to it. This always gives a lower bound to the actual MAIST. Moreover, if this cycle turns out to be related to a periodic solution of the closed-loop PETC system, the performance metric is exact. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 222,452 |
2206.11484 | Towards WinoQueer: Developing a Benchmark for Anti-Queer Bias in Large Language Models | This paper presents exploratory work on whether and to what extent biases against queer and trans people are encoded in large language models (LLMs) such as BERT. We also propose a method for reducing these biases in downstream tasks: finetuning the models on data written by and/or about queer people. To measure anti-queer bias, we introduce a new benchmark dataset, WinoQueer, modeled after other bias-detection benchmarks but addressing homophobic and transphobic biases. We found that BERT shows significant homophobic bias, but this bias can be mostly mitigated by finetuning BERT on a natural language corpus written by members of the LGBTQ+ community. | false | false | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | 304,280 |
1702.00764 | Symbolic, Distributed and Distributional Representations for Natural Language Processing in the Era of Deep Learning: a Survey | Natural language is inherently a discrete symbolic representation of human knowledge. Recent advances in machine learning (ML) and in natural language processing (NLP) seem to contradict the above intuition: discrete symbols are fading away, erased by vectors or tensors called distributed and distributional representations. However, there is a strict link between distributed/distributional representations and discrete symbols, the former being an approximation of the latter. A clearer understanding of the strict link between distributed/distributional representations and symbols may well lead to radically new deep learning networks. In this paper we present a survey that aims to renew the link between symbolic representations and distributed/distributional representations. This is the right time to revitalize the area of interpreting how discrete symbols are represented inside neural networks. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 67,703 |
2407.18900 | How Polarized are Online Conversations about Childhood? | 2020 through 2023 were unusually tumultuous years for children in the United States, and children's welfare was prominent in political debate. Theories in moral psychology suggest that political parties would treat concerns for children using different moral frames, and that moral conflict might drive substantial polarization in discussions about children. However, such partisan frames may still differ very little if there is limited underlying disagreement about moral issues and everyday concerns in childhood when not explicitly referencing politics. We evaluate claims of universality and division in moral language using tweets from 2019-2023 linked to U.S. voter records, focusing on expressed morality. Our results show that mentions of children by Republicans and Democrats are usually similar, differing no more than mentions by women and men, and tend to contain no large differences in accompanying moral words. To the extent that mentions of children did differ across parties, these differences were constrained to topics polarized well before the pandemic -- and slightly heightened when co-mentioned with `kids' or `children'. These topics reflected a small fraction of conversations about children. Overall, polarization of online discussion around childhood appears to reflect escalated polarization on lines of existing partisan conflicts rather than concerns originating from new concerns about the welfare of children during and after the pandemic. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 476,555 |
2409.13410 | Sine Wave Normalization for Deep Learning-Based Tumor Segmentation in CT/PET Imaging | This report presents a normalization block for automated tumor segmentation in CT/PET scans, developed for the autoPET III Challenge. The key innovation is the introduction of the SineNormal, which applies periodic sine transformations to PET data to enhance lesion detection. By highlighting intensity variations and producing concentric ring patterns in PET-highlighted regions, the model aims to improve segmentation accuracy, particularly for challenging multitracer PET datasets. The code for this project is available on GitHub (https://github.com/BBQtime/Sine-Wave-Normalization-for-Deep-Learning-Based-Tumor-Segmentation-in-CT-PET). | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,982 |
2311.09748 | Translation Aligned Sentence Embeddings for Turkish Language | Due to the limited availability of high quality datasets for training sentence embeddings in Turkish, we propose a training methodology and a regimen to develop a sentence embedding model. The central idea is simple but effective: fine-tune a pretrained encoder-decoder model in two consecutive stages, where the first stage involves aligning the embedding space with translation pairs. Thanks to this alignment, the prowess of the main model can be better projected onto the target language in a sentence embedding setting, where it can be fine-tuned with high accuracy in a short duration with a limited target-language dataset. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 408,274 |
2501.03937 | A precise asymptotic analysis of learning diffusion models: theory and insights | In this manuscript, we consider the problem of learning a flow or diffusion-based generative model parametrized by a two-layer auto-encoder, trained with online stochastic gradient descent, on a high-dimensional target density with an underlying low-dimensional manifold structure. We derive a tight asymptotic characterization of low-dimensional projections of the distribution of samples generated by the learned model, ascertaining in particular its dependence on the number of training samples. Building on this analysis, we discuss how mode collapse can arise, and lead to model collapse when the generative model is re-trained on generated synthetic data. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 523,047 |
2009.07635 | The FaceChannel: A Fast & Furious Deep Neural Network for Facial Expression Recognition | Current state-of-the-art models for automatic Facial Expression Recognition (FER) are based on very deep neural networks that are effective but rather expensive to train. Given the dynamic conditions of FER, this characteristic prevents such models from being used for general affect recognition. In this paper, we address this problem by formalizing the FaceChannel, a light-weight neural network that has far fewer parameters than common deep neural networks. We introduce an inhibitory layer that helps to shape the learning of facial features in the last layer of the network, thus improving performance while reducing the number of trainable parameters. To evaluate our model, we perform a series of experiments on different benchmark datasets and demonstrate how the FaceChannel achieves performance comparable, if not better, to the current state-of-the-art in FER. Our experiments include cross-dataset analysis, to estimate how our model behaves on different affective recognition conditions. We conclude our paper with an analysis of how the FaceChannel learns and adapts the learned facial features towards the different datasets. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 196,000 |
2407.12830 | Knowledge-based Consistency Testing of Large Language Models | In this work, we systematically expose and measure the inconsistency and knowledge gaps of Large Language Models (LLMs). Specifically, we propose an automated testing framework (called KonTest) which leverages a knowledge graph to construct test cases. KonTest probes and measures the inconsistencies in the LLM's knowledge of the world via a combination of semantically-equivalent queries and test oracles (metamorphic or ontological oracle). KonTest further mitigates knowledge gaps via a weighted LLM model ensemble. Using four state-of-the-art LLMs (Falcon, Gemini, GPT3.5, and Llama2), we show that KonTest generates 19.2% error inducing inputs (1917 errors from 9979 test inputs). It also reveals a 16.5% knowledge gap across all tested LLMs. A mitigation method informed by KonTest's test suite reduces LLM knowledge gap by 32.48%. Our ablation study further shows that GPT3.5 is not suitable for knowledge-based consistency testing because it is only 60%-68% effective in knowledge construction. | false | false | false | false | true | false | true | false | true | false | false | false | false | false | false | false | false | false | 474,106 |
2201.07341 | Learning grammar with a divide-and-concur neural network | We implement a divide-and-concur iterative projection approach to context-free grammar inference. Unlike most state-of-the-art models of natural language processing, our method requires a relatively small number of discrete parameters, making the inferred grammar directly interpretable -- one can read off from a solution how to construct grammatically valid sentences. Another advantage of our approach is the ability to infer meaningful grammatical rules from just a few sentences, compared to the hundreds of gigabytes of training data many other models employ. We demonstrate several ways of applying our approach: classifying words and inferring a grammar from scratch, taking an existing grammar and refining its categories and rules, and taking an existing grammar and expanding its lexicon as it encounters new words in new data. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 275,995 |
2111.02709 | Analog MIMO Communication for One-shot Distributed Principal Component Analysis | A fundamental algorithm for data analytics at the edge of wireless networks is distributed principal component analysis (DPCA), which finds the most important information embedded in a distributed high-dimensional dataset by distributed computation of a reduced-dimension data subspace, called principal components (PCs). In this paper, to support one-shot DPCA in wireless systems, we propose a framework of analog MIMO transmission featuring the uncoded analog transmission of local PCs for estimating the global PCs. To cope with channel distortion and noise, two maximum-likelihood (global) PC estimators are presented corresponding to the cases with and without receive channel state information (CSI). The first design, termed coherent PC estimator, is derived by solving a Procrustes problem and reveals the form of regularized channel inversion where the regulation attempts to alleviate the effects of both receiver noise and data noise. The second one, termed blind PC estimator, is designed based on the subspace channel-rotation-invariance property and computes a centroid of received local PCs on a Grassmann manifold. Using the manifold-perturbation theory, tight bounds on the mean square subspace distance (MSSD) of both estimators are derived for performance evaluation. The results reveal simple scaling laws of MSSD concerning device population, data and channel signal-to-noise ratios (SNRs), and array sizes. More importantly, both estimators are found to have identical scaling laws, suggesting the dispensability of CSI to accelerate DPCA. Simulation results validate the derived results and demonstrate the promising latency performance of the proposed analog MIMO | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | true | 264,952 |
2308.14583 | Learning to Read Analog Gauges from Synthetic Data | Manually reading and logging gauge data is time-inefficient, and the effort increases according to the number of gauges available. We present a computer vision pipeline that automates the reading of analog gauges. We propose a two-stage CNN pipeline that identifies the key structural components of an analog gauge and outputs an angular reading. To facilitate the training of our approach, a synthetic dataset is generated, thus obtaining a set of realistic analog gauges with their corresponding annotation. To validate our proposal, an additional real-world dataset was collected with 4,813 manually curated images. When compared against state-of-the-art methodologies, our method shows a significant improvement of 4.55 in the average error, which is a 52% relative improvement. The resources for this project will be made available at: https://github.com/fuankarion/automatic-gauge-reading. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 388,377 |
1907.05021 | Optimal Feature Transport for Cross-View Image Geo-Localization | This paper addresses the problem of cross-view image geo-localization, where the geographic location of a ground-level street-view query image is estimated by matching it against a large scale aerial map (e.g., a high-resolution satellite image). State-of-the-art deep-learning based methods tackle this problem as deep metric learning which aims to learn global feature representations of the scene seen by the two different views. Although promising results are obtained by such deep metric learning methods, they fail to exploit a crucial cue relevant for localization, namely, the spatial layout of local features. Moreover, little attention is paid to the obvious domain gap (between aerial view and ground view) in the context of cross-view localization. This paper proposes a novel Cross-View Feature Transport (CVFT) technique to explicitly establish cross-view domain transfer that facilitates feature alignment between ground and aerial images. Specifically, we implement the CVFT as network layers, which transports features from one domain to the other, leading to more meaningful feature similarity comparison. Our model is differentiable and can be learned end-to-end. Experiments on large-scale datasets have demonstrated that our method has remarkably boosted the state-of-the-art cross-view localization performance, e.g., on the CVUSA dataset, with significant improvements for top-1 recall from 40.79% to 61.43%, and for top-10 from 76.36% to 90.49%. We expect the key insight of the paper (i.e., explicitly handling domain difference via domain transport) will prove to be useful for other similar problems in computer vision as well. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 138,260
2208.14743 | SimpleRecon: 3D Reconstruction Without 3D Convolutions | Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods have emerged that perform reconstruction directly in final 3D volumetric feature space. While these methods have shown impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route, and show how focusing on high quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully-designed 2D CNN which utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume which allows informed depth plane scoring. Our method achieves a significant lead over the current state-of-the-art for depth estimation and close or better for 3D reconstruction on ScanNet and 7-Scenes, yet still allows for online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 315,410 |
2408.08624 | RealMedQA: A pilot biomedical question answering dataset containing realistic clinical questions | Clinical question answering systems have the potential to provide clinicians with relevant and timely answers to their questions. Nonetheless, despite the advances that have been made, adoption of these systems in clinical settings has been slow. One issue is a lack of question-answering datasets which reflect the real-world needs of health professionals. In this work, we present RealMedQA, a dataset of realistic clinical questions generated by humans and an LLM. We describe the process for generating and verifying the QA pairs and assess several QA models on BioASQ and RealMedQA to assess the relative difficulty of matching answers to questions. We show that the LLM is more cost-efficient for generating "ideal" QA pairs. Additionally, we achieve a lower lexical similarity between questions and answers than BioASQ which provides an additional challenge to the top two QA models, as per the results. We release our code and our dataset publicly to encourage further research. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 481,072
1504.00532 | A general framework for compressed sensing and parallel MRI using annihilating filter based low-rank Hankel matrix | Parallel MRI (pMRI) and compressed sensing MRI (CS-MRI) have been considered as two distinct reconstruction problems. Inspired by recent k-space interpolation methods, an annihilating filter based low-rank Hankel matrix approach (ALOHA) is proposed as a general framework for sparsity-driven k-space interpolation which unifies pMRI and CS-MRI. Specifically, our framework is based on the fundamental duality between the transform domain sparsity in the primary space and the low-rankness of weighted Hankel matrix in the reciprocal space, which converts pMRI and CS-MRI to a k-space interpolation problem using structured matrix completion. Using theoretical results from the latest compressed sensing literature, we showed that the required sampling rates for ALOHA may achieve the optimal rate. Experimental results with in vivo data for single/multi-coil imaging as well as dynamic imaging confirmed that the proposed method outperforms the state-of-the-art pMRI and CS-MRI. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 41,713
1503.06962 | Probabilistic Binary-Mask Cocktail-Party Source Separation in a Convolutional Deep Neural Network | Separation of competing speech is a key challenge in signal processing and a feat routinely performed by the human auditory brain. A long standing benchmark of the spectrogram approach to source separation is known as the ideal binary mask. Here, we train a convolutional deep neural network, on a two-speaker cocktail party problem, to make probabilistic predictions about binary masks. Our results approach ideal binary mask performance, illustrating that relatively simple deep neural networks are capable of robust binary mask prediction. We also illustrate the trade-off between prediction statistics and separation quality. | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | true | false | false | 41,423
2408.15778 | LogicGame: Benchmarking Rule-Based Reasoning Abilities of Large Language Models | Large Language Models (LLMs) have demonstrated notable capabilities across various tasks, showcasing complex problem-solving abilities. Understanding and executing complex rules, along with multi-step planning, are fundamental to logical reasoning and critical for practical LLM agents and decision-making systems. However, evaluating LLMs as effective rule-based executors and planners remains underexplored. In this paper, we introduce LogicGame, a novel benchmark designed to evaluate the comprehensive rule understanding, execution, and planning capabilities of LLMs. Unlike traditional benchmarks, LogicGame provides diverse games that contain a series of rules with an initial state, requiring models to comprehend and apply predefined regulations to solve problems. We create simulated scenarios in which models execute or plan operations to achieve specific outcomes. These game scenarios are specifically designed to distinguish logical reasoning from mere knowledge by relying exclusively on predefined rules. This separation allows for a pure assessment of rule-based reasoning capabilities. The evaluation considers not only final outcomes but also intermediate steps, providing a comprehensive assessment of model performance. Moreover, these intermediate steps are deterministic and can be automatically verified. LogicGame defines game scenarios with varying difficulty levels, from simple rule applications to complex reasoning chains, in order to offer a precise evaluation of model performance on rule understanding and multi-step execution. Utilizing LogicGame, we test various LLMs and identify notable shortcomings in their rule-based logical reasoning abilities. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 484,075
2405.14425 | When predict can also explain: few-shot prediction to select better neural latents | Latent variable models serve as powerful tools to infer underlying dynamics from observed neural activity. Ideally, the inferred dynamics should align with true ones. However, due to the absence of ground truth data, prediction benchmarks are often employed as proxies. One widely-used method, *co-smoothing*, involves jointly estimating latent variables and predicting observations along held-out channels to assess model performance. In this study, we reveal the limitations of the co-smoothing prediction framework and propose a remedy. In a student-teacher setup with Hidden Markov Models, we demonstrate that the high co-smoothing model space encompasses models with arbitrary extraneous dynamics in their latent representations. To address this, we introduce a secondary metric -- *few-shot co-smoothing*, performing regression from the latent variables to held-out channels in the data using fewer trials. Our results indicate that among models with near-optimal co-smoothing, those with extraneous dynamics underperform in the few-shot co-smoothing compared to 'minimal' models that are devoid of such dynamics. We provide analytical insights into the origin of this phenomenon and further validate our findings on real neural data using two state-of-the-art methods: LFADS and STNDT. In the absence of ground truth, we suggest a novel measure to validate our approach. By cross-decoding the latent variables of all model pairs with high co-smoothing, we identify models with minimal extraneous dynamics. We find a correlation between few-shot co-smoothing performance and this new measure. In summary, we present a novel prediction metric designed to yield latent variables that more accurately reflect the ground truth, offering a significant improvement for latent dynamics inference. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 456,418
1909.12316 | Preference-Based Learning for Exoskeleton Gait Optimization | This paper presents a personalized gait optimization framework for lower-body exoskeletons. Rather than optimizing numerical objectives such as the mechanical cost of transport, our approach directly learns from user preferences, e.g., for comfort. Building upon work in preference-based interactive learning, we present the CoSpar algorithm. CoSpar prompts the user to give pairwise preferences between trials and suggest improvements; as exoskeleton walking is a non-intuitive behavior, users can provide preferences more easily and reliably than numerical feedback. We show that CoSpar performs competitively in simulation and demonstrate a prototype implementation of CoSpar on a lower-body exoskeleton to optimize human walking trajectory features. In the experiments, CoSpar consistently found user-preferred parameters of the exoskeleton's walking gait, which suggests that it is a promising starting point for adapting and personalizing exoskeletons (or other assistive devices) to individual users. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 147,089 |
1809.07887 | Closeness of Solutions for Singularly Perturbed Systems via Averaging | This paper studies the behavior of singularly perturbed nonlinear differential equations with boundary-layer solutions that do not necessarily converge to an equilibrium. Using the average of the fast variable and assuming the boundary layer solutions converge to a bounded set, results on the closeness of solutions of the singularly perturbed system to the solutions of the reduced average and boundary layer systems over a finite time interval are presented. The closeness of solutions error is shown to be of order $O(\sqrt{\epsilon})$, where $\epsilon$ is the perturbation parameter. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 108,380
1911.07509 | AI-based Pilgrim Detection using Convolutional Neural Networks | Pilgrimage represents the most important Islamic religious gathering in the world where millions of pilgrims visit the holy places of Makkah and Madinah to perform their rituals. The safety and security of pilgrims is the highest priority for the authorities. In Makkah, 5000 cameras are spread around the holy sites for monitoring pilgrims, but it is almost impossible to track all events by humans considering the huge number of images collected every second. To address this issue, we propose to use an artificial intelligence technique based on deep learning and convolutional neural networks to detect and identify pilgrims and their features. For this purpose, we built a comprehensive dataset for the detection of pilgrims and their genders. Then, we developed two convolutional neural networks based on YOLOv3 and Faster-RCNN for the detection of pilgrims. Experimental results show that Faster RCNN with Inception v2 feature extractor provides the best mean average precision over all classes of 51%. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | true | false | false | 153,877
2012.03468 | An Empirical Survey of Unsupervised Text Representation Methods on Twitter Data | The field of NLP has seen unprecedented achievements in recent years. Most notably, with the advent of large-scale pre-trained Transformer-based language models, such as BERT, there has been a noticeable improvement in text representation. It is, however, unclear whether these improvements translate to noisy user-generated text, such as tweets. In this paper, we present an experimental survey of a wide range of well-known text representation techniques for the task of text clustering on noisy Twitter data. Our results indicate that the more advanced models do not necessarily work best on tweets and that more exploration in this area is needed. | false | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 210,128
2207.05833 | Earthformer: Exploring Space-Time Transformers for Earth System Forecasting | Conventionally, Earth system (e.g., weather and climate) forecasting relies on numerical simulation with complex physical models and is hence both expensive in computation and demanding on domain expertise. With the explosive growth of the spatiotemporal Earth observation data in the past decade, data-driven models that apply Deep Learning (DL) are demonstrating impressive potential for various Earth system forecasting tasks. The Transformer as an emerging DL architecture, despite its broad success in other domains, has limited adoption in this area. In this paper, we propose Earthformer, a space-time Transformer for Earth system forecasting. Earthformer is based on a generic, flexible and efficient space-time attention block, named Cuboid Attention. The idea is to decompose the data into cuboids and apply cuboid-level self-attention in parallel. These cuboids are further connected with a collection of global vectors. We conduct experiments on the MovingMNIST dataset and a newly proposed chaotic N-body MNIST dataset to verify the effectiveness of cuboid attention and figure out the best design of Earthformer. Experiments on two real-world benchmarks about precipitation nowcasting and El Nino/Southern Oscillation (ENSO) forecasting show Earthformer achieves state-of-the-art performance. Code is available: https://github.com/amazon-science/earth-forecasting-transformer . | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 307,678
2408.17428 | CLOCR-C: Context Leveraging OCR Correction with Pre-trained Language Models | The digitisation of historical print media archives is crucial for increasing accessibility to contemporary records. However, the process of Optical Character Recognition (OCR) used to convert physical records to digital text is prone to errors, particularly in the case of newspapers and periodicals due to their complex layouts. This paper introduces Context Leveraging OCR Correction (CLOCR-C), which utilises the infilling and context-adaptive abilities of transformer-based language models (LMs) to improve OCR quality. The study aims to determine whether LMs can perform post-OCR correction and improve downstream NLP tasks, and to assess the value of providing the socio-cultural context as part of the correction process. Experiments were conducted using seven LMs on three datasets: the 19th Century Serials Edition (NCSE) and two datasets from the Overproof collection. The results demonstrate that some LMs can significantly reduce error rates, with the top-performing model achieving over a 60\% reduction in character error rate on the NCSE dataset. The OCR improvements extend to downstream tasks, such as Named Entity Recognition, with increased Cosine Named Entity Similarity. Furthermore, the study shows that providing socio-cultural context in the prompts improves performance, while misleading prompts lower performance. In addition to the findings, this study releases a dataset of 91 transcribed articles from the NCSE, containing a total of 40 thousand words, to support further research in this area. The findings suggest that CLOCR-C is a promising approach for enhancing the quality of existing digital archives by leveraging the socio-cultural information embedded in the LMs and the text requiring correction. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 484,699
2007.00479 | The Restricted Isometry of ReLU Networks: Generalization through Norm Concentration | While regression tasks aim at interpolating a relation on the entire input space, they often have to be solved with a limited amount of training data. Still, if the hypothesis functions can be sketched well with the data, one can hope for identifying a generalizing model. In this work, we introduce with the Neural Restricted Isometry Property (NeuRIP) a uniform concentration event, in which all shallow $\mathrm{ReLU}$ networks are sketched with the same quality. To derive the sample complexity for achieving NeuRIP, we bound the covering numbers of the networks in the Sub-Gaussian metric and apply chaining techniques. In case of the NeuRIP event, we then provide bounds on the expected risk, which hold for networks in any sublevel set of the empirical risk. We conclude that all networks with sufficiently small empirical risk generalize uniformly. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 185,128
2404.09480 | Mitigating Hallucination in Abstractive Summarization with Domain-Conditional Mutual Information | A primary challenge in abstractive summarization is hallucination -- the phenomenon where a model generates plausible text that is absent in the source text. We hypothesize that the domain (or topic) of the source text triggers the model to generate text that is highly probable in the domain, neglecting the details of the source text. To alleviate this model bias, we introduce a decoding strategy based on domain-conditional pointwise mutual information. This strategy adjusts the generation probability of each token by comparing it with the token's marginal probability within the domain of the source text. According to evaluation on the XSUM dataset, our method demonstrates improvement in terms of faithfulness and source relevance. The code is publicly available at \url{https://github.com/qqplot/dcpmi}. | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 446,694
2404.00495 | Configurable Safety Tuning of Language Models with Synthetic Preference Data | State-of-the-art language model fine-tuning techniques, such as Direct Preference Optimization (DPO), restrict user control by hard-coding predefined behaviors into the model. To address this, we propose a novel method, Configurable Safety Tuning (CST), that augments DPO using synthetic preference data to facilitate flexible safety configuration of LLMs at inference time. CST overcomes the constraints of vanilla DPO by introducing a system prompt specifying safety configurations, enabling LLM deployers to disable/enable safety preferences based on their need, just changing the system prompt. Our experimental evaluations indicate that CST successfully manages different safety configurations and retains the original functionality of LLMs, showing it is a robust method for configurable deployment. Data and models available at https://github.com/vicgalle/configurable-safety-tuning | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | false | false | false | 442,957
2405.09711 | STAR: A Benchmark for Situated Reasoning in Real-World Videos | Reasoning in the real world is not divorced from situations. How to capture the present knowledge from surrounding situations and perform reasoning accordingly is crucial and challenging for machine intelligence. This paper introduces a new benchmark that evaluates the situated reasoning ability via situation abstraction and logic-grounded question answering for real-world videos, called Situated Reasoning in Real-World Videos (STAR Benchmark). This benchmark is built upon the real-world videos associated with human actions or interactions, which are naturally dynamic, compositional, and logical. The dataset includes four types of questions, including interaction, sequence, prediction, and feasibility. We represent the situations in real-world videos by hyper-graphs connecting extracted atomic entities and relations (e.g., actions, persons, objects, and relationships). Besides visual perception, situated reasoning also requires structured situation comprehension and logical reasoning. Questions and answers are procedurally generated. The answering logic of each question is represented by a functional program based on a situation hyper-graph. We compare various existing video reasoning models and find that they all struggle on this challenging situated reasoning task. We further propose a diagnostic neuro-symbolic model that can disentangle visual perception, situation abstraction, language understanding, and functional reasoning to understand the challenges of this benchmark. | false | false | false | false | true | false | false | false | true | false | false | true | false | false | false | false | false | false | 454,500 |
1605.03956 | On the Convergent Properties of Word Embedding Methods | Do word embeddings converge to learn similar things over different initializations? How repeatable are experiments with word embeddings? Are all word embedding techniques equally reliable? In this paper we propose evaluating methods for learning word representations by their consistency across initializations. We propose a measure to quantify the similarity of the learned word representations under this setting (where they are subject to different random initializations). Our preliminary results illustrate that our metric not only measures an intrinsic property of word embedding methods but also correlates well with other evaluation metrics on downstream tasks. We believe our method is useful in characterizing robustness -- an important property to consider when developing new word embedding methods. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 55,812
1512.06868 | Minimum distance functions of graded ideals and Reed-Muller-type codes | We introduce and study the minimum distance function of a graded ideal in a polynomial ring with coefficients in a field, and show that it generalizes the minimum distance of projective Reed-Muller-type codes over finite fields. This gives an algebraic formulation of the minimum distance of a projective Reed-Muller-type code in terms of the algebraic invariants and structure of the underlying vanishing ideal. Then we give a method, based on Groebner bases and Hilbert functions, to find lower bounds for the minimum distance of certain Reed-Muller-type codes. Finally we show explicit upper bounds for the number of zeros of polynomials in a projective nested cartesian set and give some support to a conjecture of Carvalho, Lopez-Neumann and Lopez. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 50,357 |
2111.15129 | Anonymization for Skeleton Action Recognition | Skeleton-based action recognition attracts practitioners and researchers due to the lightweight, compact nature of datasets. Compared with RGB-video-based action recognition, skeleton-based action recognition is a safer way to protect the privacy of subjects while having competitive recognition performance. However, due to improvements in skeleton recognition algorithms as well as motion and depth sensors, more details of motion characteristics can be preserved in the skeleton dataset, leading to potential privacy leakage. We first train classifiers to categorize private information from skeleton trajectories to investigate the potential privacy leakage from skeleton datasets. Our preliminary experiments show that the gender classifier achieves 87% accuracy on average, and the re-identification classifier achieves 80% accuracy on average with three baseline models: Shift-GCN, MS-G3D, and 2s-AGCN. We propose an anonymization framework based on adversarial learning to protect potential privacy leakage from the skeleton dataset. Experimental results show that an anonymized dataset can reduce the risk of privacy leakage while having marginal effects on action recognition performance even with simple anonymizer architectures. The code used in our experiments is available at https://github.com/ml-postech/Skeleton-anonymization/ | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 268,821 |
2402.04469 | IoT Network Traffic Analysis with Deep Learning | As IoT networks become more complex and generate massive amounts of dynamic data, it is difficult to monitor and detect anomalies using traditional statistical methods and machine learning methods. Deep learning algorithms can process and learn from large amounts of data and can also be trained using unsupervised learning techniques, meaning they don't require labelled data to detect anomalies. This makes it possible to detect new and unknown anomalies that may not have been detected before. Also, deep learning algorithms can be automated and highly scalable; thereby, they can run continuously in the backend and make it achievable to monitor large IoT networks instantly. In this work, we conduct a literature review on the most recent works using deep learning techniques and implement a model using ensemble techniques on the KDD Cup 99 dataset. The experimental results showcase the impressive performance of our deep anomaly detection model, achieving an accuracy of over 98\%. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | true | 427,463 |
2311.02695 | Identifying Linearly-Mixed Causal Representations from Multi-Node Interventions | The task of inferring high-level causal variables from low-level observations, commonly referred to as causal representation learning, is fundamentally underconstrained. As such, recent works to address this problem focus on various assumptions that lead to identifiability of the underlying latent causal variables. A large corpus of these preceding approaches consider multi-environment data collected under different interventions on the causal model. What is common to virtually all of these works is the restrictive assumption that in each environment, only a single variable is intervened on. In this work, we relax this assumption and provide the first identifiability result for causal representation learning that allows for multiple variables to be targeted by an intervention within one environment. Our approach hinges on a general assumption on the coverage and diversity of interventions across environments, which also includes the shared assumption of single-node interventions of previous works. The main idea behind our approach is to exploit the trace that interventions leave on the variance of the ground truth causal variables and regularizing for a specific notion of sparsity with respect to this trace. In addition to and inspired by our theoretical contributions, we present a practical algorithm to learn causal representations from multi-node interventional data and provide empirical evidence that validates our identifiability results. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 405,544
1405.6033 | Universal Bayesian Measures and Universal Histogram Sequences | Consider universal data compression: the length $l(x^n)$ of sequence $x^n\in A^n$ with finite alphabet $A$ and length $n$ satisfies Kraft's inequality over $A^n$, and $-\frac{1}{n}\log \frac{P^n(x^n)}{Q^n(x^n)}$ almost surely converges to zero as $n$ grows for the $Q^n(x^n)=2^{-l(x^n)}$ and any stationary ergodic source $P$. In this paper, we say such a $Q$ is a universal Bayesian measure. We generalize the notion to the sources in which the random variables may be either discrete, continuous, or none of them. The basic idea is due to Boris Ryabko who utilized model weighting over histograms that approximate $P$, assuming that a density function of $P$ exists. However, the range of $P$ depends on the choice of the histogram sequence. The universal Bayesian measure constructed in this paper overcomes the drawbacks and has many applications to infer relation among random variables, and extends the application area of the minimum description length principle. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 33,329 |
1607.01059 | Improving Sparse Representation-Based Classification Using Local Principal Component Analysis | Sparse representation-based classification (SRC), proposed by Wright et al., seeks the sparsest decomposition of a test sample over the dictionary of training samples, with classification to the most-contributing class. Because it assumes test samples can be written as linear combinations of their same-class training samples, the success of SRC depends on the size and representativeness of the training set. Our proposed classification algorithm enlarges the training set by using local principal component analysis to approximate the basis vectors of the tangent hyperplane of the class manifold at each training sample. The dictionary in SRC is replaced by a local dictionary that adapts to the test sample and includes training samples and their corresponding tangent basis vectors. We use a synthetic data set and three face databases to demonstrate that this method can achieve higher classification accuracy than SRC in cases of sparse sampling, nonlinear class manifolds, and stringent dimension reduction. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 58,175
2308.10578 | Weakly synchronous systems with three machines are Turing powerful | Communicating finite-state machines (CFMs) are a Turing powerful model of asynchronous message-passing distributed systems. In weakly synchronous systems, processes communicate through phases in which messages are first sent and then received, for each process. Such systems enjoy a limited form of synchronization, and for some communication models, this restriction is enough to make the reachability problem decidable. In particular, we explore the intriguing case of p2p (FIFO) communication, for which the reachability problem is known to be undecidable for four processes, but decidable for two. We show that the configuration reachability problem for weakly synchronous systems of three processes is undecidable. This result is heavily inspired by our study on the treewidth of the Message Sequence Charts (MSCs) that might be generated by such systems. In this sense, the main contribution of this work is a weakly synchronous system with three processes that generates MSCs of arbitrarily large treewidth. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | true | 386,794 |
2108.07478 | Instance Segmentation in 3D Scenes using Semantic Superpoint Tree
Networks | Instance segmentation in 3D scenes is fundamental in many applications of scene understanding. It is yet challenging due to the compound factors of data irregularity and uncertainty in the numbers of instances. State-of-the-art methods largely rely on a general pipeline that first learns point-wise features discriminative at semantic and instance levels, followed by a separate step of point grouping for proposing object instances. While promising, they have the shortcomings that (1) the second step is not supervised by the main objective of instance segmentation, and (2) their point-wise feature learning and grouping are less effective at dealing with data irregularities, possibly resulting in fragmented segmentations. To address these issues, we propose in this work an end-to-end solution of Semantic Superpoint Tree Network (SSTNet) for proposing object instances from scene points. Key in SSTNet is an intermediate, semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints, and which will be traversed and split at intermediate tree nodes for proposals of object instances. We also design in SSTNet a refinement module, termed CliqueNet, to prune superpoints that may be wrongly grouped into instance proposals. Experiments on the benchmarks of ScanNet and S3DIS show the efficacy of our proposed method. At the time of submission, SSTNet ranks top on the ScanNet (V2) leaderboard, with mAP 2% higher than that of the second-best method. The source code in PyTorch is available at https://github.com/Gorilla-Lab-SCUT/SSTNet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 250,927
2404.13333 | Parallel-in-Time Integration of Transient Phenomena in No-Insulation
Superconducting Coils Using Parareal | High-temperature superconductors (HTS) have the potential to enable magnetic fields beyond the current limits of low-temperature superconductors in applications like accelerator magnets. However, the design of HTS-based magnets requires computationally demanding transient multi-physics simulations with highly non-linear material properties. To reduce the solution time, we propose using Parareal (PR) for parallel-in-time magneto-thermal simulation of magnets based on HTS, particularly, no-insulation coils without turn-to-turn insulation. We propose extending the classical PR method to automatically find a time partitioning using a first coarse adaptive propagator. The proposed PR method is shown to reduce the computing time when fine engineering tolerances are required despite the highly nonlinear character of the problem. The full software stack used is open-source. | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 448,250 |
2411.03351 | Tabular Data Synthesis with Differential Privacy: A Survey | Data sharing is a prerequisite for collaborative innovation, enabling organizations to leverage diverse datasets for deeper insights. In real-world applications like FinTech and Smart Manufacturing, transactional data, often in tabular form, are generated and analyzed for insight generation. However, such datasets typically contain sensitive personal/business information, raising privacy concerns and regulatory risks. Data synthesis tackles this by generating artificial datasets that preserve the statistical characteristics of real data, removing direct links to individuals. However, attackers can still infer sensitive information using background knowledge. Differential privacy offers a solution by providing provable and quantifiable privacy protection. Consequently, differentially private data synthesis has emerged as a promising approach to privacy-aware data sharing. This paper provides a comprehensive overview of existing differentially private tabular data synthesis methods, highlighting the unique challenges of each generation model for generating tabular data under differential privacy constraints. We classify the methods into statistical and deep learning-based approaches based on their generation models, discussing them in both centralized and distributed environments. We evaluate and compare those methods within each category, highlighting their strengths and weaknesses in terms of utility, privacy, and computational complexity. Additionally, we present and discuss various evaluation methods for assessing the quality of the synthesized data, identify research gaps in the field and directions for future research. | false | false | false | false | true | false | false | false | false | false | false | false | true | false | false | false | true | false | 505,871 |
2210.04620 | FLamby: Datasets and Benchmarks for Cross-Silo Federated Learning in
Realistic Healthcare Settings | Federated Learning (FL) is a novel approach enabling several clients holding sensitive data to collaboratively train machine learning models, without centralizing data. The cross-silo FL setting corresponds to the case of few ($2$--$50$) reliable clients, each holding medium to large datasets, and is typically found in applications such as healthcare, finance, or industry. While previous works have proposed representative datasets for cross-device FL, few realistic healthcare cross-silo FL datasets exist, thereby slowing algorithmic research in this critical application. In this work, we propose a novel cross-silo dataset suite focused on healthcare, FLamby (Federated Learning AMple Benchmark of Your cross-silo strategies), to bridge the gap between theory and practice of cross-silo FL. FLamby encompasses 7 healthcare datasets with natural splits, covering multiple tasks, modalities, and data volumes, each accompanied with baseline training code. As an illustration, we additionally benchmark standard FL algorithms on all datasets. Our flexible and modular suite allows researchers to easily download datasets, reproduce results and re-use the different components for their research. FLamby is available at~\url{www.github.com/owkin/flamby}. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 322,529 |
2303.05101 | Scalable Stochastic Gradient Riemannian Langevin Dynamics in
Non-Diagonal Metrics | Stochastic-gradient sampling methods are often used to perform Bayesian inference on neural networks. It has been observed that the methods in which notions of differential geometry are included tend to have better performances, with the Riemannian metric improving posterior exploration by accounting for the local curvature. However, the existing methods often resort to simple diagonal metrics to remain computationally efficient. This loses some of the gains. We propose two non-diagonal metrics that can be used in stochastic-gradient samplers to improve convergence and exploration but have only a minor computational overhead over diagonal metrics. We show that for fully connected neural networks (NNs) with sparsity-inducing priors and convolutional NNs with correlated priors, using these metrics can provide improvements. For some other choices the posterior is sufficiently easy also for the simpler metrics. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 350,340 |
2109.12722 | Markerless Suture Needle 6D Pose Tracking with Robust Uncertainty
Estimation for Autonomous Minimally Invasive Robotic Surgery | Suture needle localization is necessary for autonomous suturing. Previous approaches in autonomous suturing often relied on fiducial markers rather than markerless detection schemes for localizing a suture needle due to the inconsistency of markerless detections. However, fiducial markers are not practical for real-world applications and can often be occluded from environmental factors in surgery (e.g., blood). Therefore in this work, we present a robust tracking approach for estimating the 6D pose of a suture needle when using inconsistent detections. We define observation models based on suture needles' geometry that captures the uncertainty of the detections and fuse them temporally in a probabilistic fashion. In our experiments, we compare different permutations of the observation models in the suture needle localization task to show their effectiveness. Our proposed method outperforms previous approaches in localizing a suture needle. We also demonstrate the proposed tracking method in an autonomous suture needle regrasping task and ex vivo environments. | false | false | false | false | false | false | false | true | false | false | false | true | false | false | false | false | false | false | 257,391 |
1909.00392 | Towards Robust Learning-Based Pose Estimation of Noncooperative
Spacecraft | This work presents a novel Convolutional Neural Network (CNN) architecture and a training procedure to enable robust and accurate pose estimation of a noncooperative spacecraft. First, a new CNN architecture is introduced that has scored a fourth place in the recent Pose Estimation Challenge hosted by Stanford's Space Rendezvous Laboratory (SLAB) and the Advanced Concepts Team (ACT) of the European Space Agency (ESA). The proposed architecture first detects the object by regressing a 2D bounding box, then a separate network regresses the 2D locations of the known surface keypoints from an image of the target cropped around the detected Region-of-Interest (RoI). In a single-image pose estimation problem, the extracted 2D keypoints can be used in conjunction with corresponding 3D model coordinates to compute relative pose via the Perspective-n-Point (PnP) problem. These keypoint locations have known correspondences to those in the 3D model, since the CNN is trained to predict the corners in a pre-defined order, allowing for bypassing the computationally expensive feature matching processes. This work also introduces and explores the texture randomization to train a CNN for spaceborne applications. Specifically, Neural Style Transfer (NST) is applied to randomize the texture of the spacecraft in synthetically rendered images. It is shown that using the texture-randomized images of spacecraft for training improves the network's performance on spaceborne images without exposure to them during training. It is also shown that when using the texture-randomized spacecraft images during training, regressing 3D bounding box corners leads to better performance on spaceborne images than regressing surface keypoints, as NST inevitably distorts the spacecraft's geometric features to which the surface keypoints have closer relation. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 143,622
2404.06004 | AiSAQ: All-in-Storage ANNS with Product Quantization for DRAM-free
Information Retrieval | In approximate nearest neighbor search (ANNS) methods based on approximate proximity graphs, DiskANN achieves a good recall-speed balance for large-scale datasets using both RAM and storage. Although it claims to save memory by loading vectors compressed with product quantization (PQ), its memory usage increases in proportion to the scale of the datasets. In this paper, we propose All-in-Storage ANNS with Product Quantization (AiSAQ), which offloads the compressed vectors to storage. Our method achieves $\sim$10 MB memory usage in query search even with billion-scale datasets with minor performance degradation. AiSAQ also reduces the index load time before query search, which enables index switching between multiple billion-scale datasets and significantly enhances the flexibility of retrieval-augmented generation (RAG). This method is applicable to all graph-based ANNS algorithms and can be combined with higher-spec ANNS methods in the future. | false | false | false | false | false | true | false | false | true | false | false | false | false | false | false | false | false | true | 445,295
1903.07988 | Deep Learning Enables Automatic Detection and Segmentation of Brain
Metastases on Multi-Sequence MRI | Detecting and segmenting brain metastases is a tedious and time-consuming task for many radiologists, particularly with the growing use of multi-sequence 3D imaging. This study demonstrates automated detection and segmentation of brain metastases on multi-sequence MRI using a deep learning approach based on a fully convolutional neural network (CNN). In this retrospective study, a total of 156 patients with brain metastases from several primary cancers were included. Pre-therapy MR images (1.5T and 3T) included pre- and post-gadolinium T1-weighted 3D fast spin echo, post-gadolinium T1-weighted 3D axial IR-prepped FSPGR, and 3D fluid attenuated inversion recovery. The ground truth was established by manual delineation by two experienced neuroradiologists. CNN training/development was performed using 100 and 5 patients, respectively, with a 2.5D network based on a GoogLeNet architecture. The results were evaluated in 51 patients, equally separated into those with few (1-3), multiple (4-10), and many (>10) lesions. Network performance was evaluated using precision, recall, Dice/F1 score, and ROC-curve statistics. For an optimal probability threshold, detection and segmentation performance was assessed on a per metastasis basis. The area under the ROC-curve (AUC), averaged across all patients, was 0.98. The AUC in the subgroups was 0.99, 0.97, and 0.97 for patients having 1-3, 4-10, and >10 metastases, respectively. Using an average optimal probability threshold determined by the development set, precision, recall, and Dice-score were 0.79, 0.53, and 0.79, respectively. At the same probability threshold, the network showed an average false positive rate of 8.3/patient (no lesion-size limit) and 3.4/patient (10 mm3 lesion size limit). In conclusion, a deep learning approach using multi-sequence MRI can aid in the detection and segmentation of brain metastases. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 124,750
2102.06659 | Leveraging Artificial Intelligence to Analyze Citizens' Opinions on
Urban Green Space | Continued population growth and urbanization are shifting research to consider the quality of urban green space over the quantity of these parks, woods, and wetlands. The quality of urban green space has hitherto been measured by expert assessments, including in-situ observations, surveys, and remote sensing analyses. Location data platforms, such as TripAdvisor, can provide people's opinions on many destinations and experiences, including urban green space. This paper leverages Artificial Intelligence techniques for opinion mining and text classification using such platforms' reviews as a novel approach to urban green space quality assessments. Natural Language Processing is used to analyze contextual information given supervised scores of words by implementing computational analysis. Such an application can support local authorities and stakeholders in their understanding of and justification for future investments in urban green space. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 219,831
1907.05671 | Justifying Diagnosis Decisions by Deep Neural Networks | An integrated approach is proposed across visual and textual data to both determine and justify a medical diagnosis by a neural network. As deep learning techniques improve, interest grows to apply them in medical applications. To enable a transition to workflows in a medical context that are aided by machine learning, the need exists for such algorithms to help justify the obtained outcome so human clinicians can judge their validity. In this work, deep learning methods are used to map a frontal X-Ray image to a continuous textual representation. This textual representation is decoded into a diagnosis and the associated textual justification that will help a clinician evaluate the outcome. Additionally, more explanatory data is provided for the diagnosis by generating a realistic X-Ray that belongs to the nearest alternative diagnosis. With a clinical expert opinion study on a subset of the X-Ray data set from the Indiana University hospital network, we demonstrate that our justification mechanism significantly outperforms existing methods that use saliency maps. While performing multi-task training with multiple loss functions, our method achieves excellent diagnosis accuracy and captioning quality when compared to current state-of-the-art single-task methods. | true | false | false | false | false | false | true | false | true | false | false | false | false | false | false | false | false | false | 138,426 |
2201.09433 | Active Learning Polynomial Threshold Functions | We initiate the study of active learning polynomial threshold functions (PTFs). While traditional lower bounds imply that even univariate quadratics cannot be non-trivially actively learned, we show that allowing the learner basic access to the derivatives of the underlying classifier circumvents this issue and leads to a computationally efficient algorithm for active learning degree-$d$ univariate PTFs in $\tilde{O}(d^3\log(1/\varepsilon\delta))$ queries. We also provide near-optimal algorithms and analyses for active learning PTFs in several average case settings. Finally, we prove that access to derivatives is insufficient for active learning multivariate PTFs, even those of just two variables. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 276,674 |
2007.12236 | Usability of a Robot's Realistic Facial Expressions and Peripherals in
Autistic Children's Therapy | Robot-assisted therapy is an emerging form of therapy for autistic children, although designing effective robot behaviors is a challenge for effective implementation of such therapy. A series of usability tests assessed trends in the effectiveness of modelling a robot's facial expressions on realistic facial expressions and of adding peripherals enabling child-led control of emotion learning activities with autistic children. Nineteen autistic children interacted with a small humanoid robot and an adult therapist in several emotion-learning activities that featured realistic facial expressions modelled on either a pre-existing database or live facial mirroring, and that used peripherals (tablets or tangible 'squishies') to enable child-led activities. Both types of realistic facial expressions by the robot were less effective than exaggerated expressions, with the mirroring being unintuitive for children. The tablet was usable but required more feedback and lower latency, while the tactile tangibles were engaging aids. | true | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 188,766 |
1512.00743 | Recognizing Semantic Features in Faces using Deep Learning | The human face constantly conveys information, both consciously and subconsciously. However, as basic as it is for humans to visually interpret this information, it is quite a big challenge for machines. Conventional semantic facial feature recognition and analysis techniques are already in use and are based on physiological heuristics, but they suffer from lack of robustness and high computation time. This thesis aims to explore ways for machines to learn to interpret semantic information available in faces in an automated manner without requiring manual design of feature detectors, using the approach of Deep Learning. This thesis provides a study of the effects of various factors and hyper-parameters of deep neural networks in the process of determining an optimal network configuration for the task of semantic facial feature recognition. This thesis explores the effectiveness of the system to recognize the various semantic features (like emotions, age, gender, ethnicity etc.) present in faces. Furthermore, the relation between the effect of high-level concepts on low level features is explored through an analysis of the similarities in low-level descriptors of different semantic features. This thesis also demonstrates a novel idea of using a deep network to generate 3-D Active Appearance Models of faces from real-world 2-D images. For a more detailed report on this work, please see [arXiv:1512.00743v1]. | false | false | false | false | true | false | true | false | false | false | false | true | false | false | false | false | false | false | 49,738 |
2103.01452 | Social Profit Optimization with Demand Response Management in
Electricity Market: A Multi-timescale Leader-following Approach | In the electricity market, it is quite common that the market participants make "selfish" strategies to harvest the maximum profits for themselves, which may cause the social benefit loss and impair the sustainability of the society in the long term. Regarding this issue, in this work, we will discuss how the social profit can be improved through strategic demand response (DR) management. Specifically, we explore two interaction mechanisms in the market: Nash equilibrium (NE) and Stackelberg equilibrium (SE) among utility companies (UCs) and user-UC interactions, respectively. At the user side, each user determines the optimal energy-purchasing strategy to maximize its own profit. At the UC side, a governmental UC (g-UC) is considered, who aims to optimize the social profit of the market. Meanwhile, normal UCs play games to maximize their own profits. As a result, a basic leader-following problem among the UCs is formulated under the coordination of the independent system operator (ISO). Moreover, by using our proposed demand function amelioration (DFA) strategy, a multi-timescale leader-following problem is formulated. In this case, the maximal market efficiency can be achieved without changing the "selfish instinct" of normal UCs. In addition, by considering the local constraints for the UCs, two projection-based pricing algorithms are proposed for UCs, which can provide approximate optimal solutions for the resulting non-convex social profit optimization problems. The feasibility of the proposed algorithms is verified by using the concept of price of anarchy (PoA) in a multi-UC multi-user market model in the simulation. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | true | 222,621 |
1907.02821 | Benchmarking unsupervised near-duplicate image detection | Unsupervised near-duplicate detection has many practical applications ranging from social media analysis and web-scale retrieval, to digital image forensics. It entails running a threshold-limited query on a set of descriptors extracted from the images, with the goal of identifying all possible near-duplicates, while limiting the false positives due to visually similar images. Since the rate of false alarms grows with the dataset size, a very high specificity is thus required, up to $1 - 10^{-9}$ for realistic use cases; this important requirement, however, is often overlooked in literature. In recent years, descriptors based on deep convolutional neural networks have matched or surpassed traditional feature extraction methods in content-based image retrieval tasks. To the best of our knowledge, ours is the first attempt to establish the performance range of deep learning-based descriptors for unsupervised near-duplicate detection on a range of datasets, encompassing a broad spectrum of near-duplicate definitions. We leverage both established and new benchmarks, such as the Mir-Flick Near-Duplicate (MFND) dataset, in which a known ground truth is provided for all possible pairs over a general, large scale image collection. To compare the specificity of different descriptors, we reduce the problem of unsupervised detection to that of binary classification of near-duplicate vs. not-near-duplicate images. The latter can be conveniently characterized using Receiver Operating Curve (ROC). Our findings in general favor the choice of fine-tuning deep convolutional networks, as opposed to using off-the-shelf features, but differences at high specificity settings depend on the dataset and are often small. The best performance was observed on the MFND benchmark, achieving 96\% sensitivity at a false positive rate of $1.43 \times 10^{-6}$. | false | false | false | false | false | false | true | false | false | false | false | true | true | false | false | false | false | true | 137,693
2305.00533 | Guaranteed Evader Detection in Multi-Agent Search Tasks using Pincer
Trajectories | Assume that inside an initial planar area there are smart mobile evaders attempting to avoid detection by a team of sweeping searching agents. All sweepers detect evaders with fan-shaped sensors, modeling the field of view of real cameras. Detection of all evaders is guaranteed with cooperative sweeping strategies, by setting requirements on sweepers' speed, and by carefully designing their trajectories. Assume the smart evaders have an upper limit on their speed which is a-priori known to the sweeping team. An easier task for the team of sweepers is to confine evaders to the domain in which they are initially located. The sweepers accomplish the confinement task if they move sufficiently fast and detect evaders by applying an appropriate search strategy. Any given search strategy results in a minimal sweeper's speed in order to be able to detect all evaders. The minimal speed guarantees the ability of the sweeping team to confine evaders to their original domain, and if the sweepers move faster they are able to detect all evaders that are present in the region. We present results on the total search time for a novel pincer-movement based search protocol that utilizes complementary trajectories along with adaptive sensor geometries for any even number of pursuers. | false | false | false | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | 361,363 |
2305.02312 | AG3D: Learning to Generate 3D Avatars from 2D Image Collections | While progress in 2D generative models of human appearance has been rapid, many applications require 3D avatars that can be animated and rendered. Unfortunately, most existing methods for learning generative models of 3D humans with diverse shape and appearance require 3D training data, which is limited and expensive to acquire. The key to progress is hence to learn generative models of 3D avatars from abundant unstructured 2D image collections. However, learning realistic and complete 3D appearance and geometry in this under-constrained setting remains challenging, especially in the presence of loose clothing such as dresses. In this paper, we propose a new adversarial generative model of realistic 3D people from 2D images. Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator and integrating an efficient and flexible articulation module. To improve realism, we train our model using multiple discriminators while also integrating geometric cues in the form of predicted 2D normal maps. We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance. We validate the effectiveness of our model and the importance of each component via systematic ablation studies. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 361,990 |
2402.01618 | Style Vectors for Steering Generative Large Language Model | This research explores strategies for steering the output of large language models (LLMs) towards specific styles, such as sentiment, emotion, or writing style, by adding style vectors to the activations of hidden layers during text generation. We show that style vectors can be simply computed from recorded layer activations for input texts in a specific style in contrast to more complex training-based approaches. Through a series of experiments, we demonstrate the effectiveness of activation engineering using such style vectors to influence the style of generated text in a nuanced and parameterisable way, distinguishing it from prompt engineering. The presented research constitutes a significant step towards developing more adaptive and effective AI-empowered interactive systems. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 426,102 |
2407.01445 | FastCLIP: A Suite of Optimization Techniques to Accelerate CLIP Training
with Limited Resources | Existing studies of training state-of-the-art Contrastive Language-Image Pretraining (CLIP) models on large-scale data involve hundreds of or even thousands of GPUs due to the requirement of a large batch size. However, such a large amount of resources is not accessible to most people. While advanced compositional optimization techniques for optimizing global contrastive losses have been demonstrated effective for removing the requirement of large batch size, their performance on large-scale data remains underexplored and not optimized. To bridge the gap, this paper explores several aspects of CLIP training with limited resources (e.g., up to tens of GPUs). First, we introduce FastCLIP, a general CLIP training framework built on advanced compositional optimization techniques while designed and optimized for the distributed setting. Our framework is equipped with an efficient gradient reduction strategy to reduce communication overhead. Second, to further boost training efficiency, we investigate three components of the framework from an optimization perspective: the schedule of the inner learning rate, the update rules of the temperature parameter and the model parameters, respectively. Experiments on different strategies for each component shed light on how to conduct CLIP training more efficiently. Finally, we benchmark the performance of FastCLIP and the state-of-the-art training baseline (OpenCLIP) on different compute scales up to 32 GPUs on 8 nodes, and three data scales ranging from 2.7 million, 9.1 million to 315 million image-text pairs to demonstrate the significant improvement of FastCLIP in the resource-limited setting. We release the code of FastCLIP at https://github.com/Optimization-AI/fast_clip . | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 469,300 |
2012.01405 | Learning View-Disentangled Human Pose Representation by Contrastive
Cross-View Mutual Information Maximization | We introduce a novel representation learning method to disentangle pose-dependent as well as view-dependent factors from 2D human poses. The method trains a network using cross-view mutual information maximization (CV-MIM) which maximizes mutual information of the same pose performed from different viewpoints in a contrastive learning manner. We further propose two regularization terms to ensure disentanglement and smoothness of the learned representations. The resulting pose representations can be used for cross-view action recognition. To evaluate the power of the learned representations, in addition to the conventional fully-supervised action recognition settings, we introduce a novel task called single-shot cross-view action recognition. This task trains models with actions from only one single viewpoint while models are evaluated on poses captured from all possible viewpoints. We evaluate the learned representations on standard benchmarks for action recognition, and show that (i) CV-MIM performs competitively compared with the state-of-the-art models in the fully-supervised scenarios; (ii) CV-MIM outperforms other competing methods by a large margin in the single-shot cross-view setting; (iii) and the learned representations can significantly boost the performance when reducing the amount of supervised training data. Our code is made publicly available at https://github.com/google-research/google-research/tree/master/poem | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 209,409 |
2408.09886 | SAM-UNet: Enhancing Zero-Shot Segmentation of SAM for Universal Medical
Images | Segment Anything Model (SAM) has demonstrated impressive performance on a wide range of natural image segmentation tasks. However, its performance significantly deteriorates when directly applied to the medical domain, due to the remarkable differences between natural images and medical images. Some researchers have attempted to train SAM on large-scale medical datasets; however, poor zero-shot performance is observed in the experimental results. In this context, inspired by the superior performance of U-Net-like models in medical image segmentation, we propose SAM-UNet, a new foundation model which incorporates U-Net into the original SAM to fully leverage the powerful contextual modeling ability of convolutions. To be specific, we add a parallel convolutional branch in the image encoder, which is trained independently while the vision Transformer branch is frozen. Additionally, we employ multi-scale fusion in the mask decoder to facilitate accurate segmentation of objects at different scales. We train SAM-UNet on SA-Med2D-16M, the largest 2-dimensional medical image segmentation dataset to date, yielding a universal pretrained model for medical images. Extensive experiments are conducted to evaluate the performance of the model, and a state-of-the-art result is achieved, with a dice similarity coefficient score of 0.883 on the SA-Med2D-16M dataset. Specifically, in zero-shot segmentation experiments, our model not only significantly outperforms previous large medical SAM models across all modalities, but also substantially mitigates the performance degradation seen on unseen modalities. It should be highlighted that SAM-UNet is an efficient and extensible foundation model, which can be further fine-tuned for other downstream tasks in the medical community. The code is available at https://github.com/Hhankyangg/sam-unet. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 481,633
2404.09365 | Hierarchical Attention Models for Multi-Relational Graphs | We present Bi-Level Attention-Based Relational Graph Convolutional Networks (BR-GCN), unique neural network architectures that utilize masked self-attentional layers with relational graph convolutions, to effectively operate on highly multi-relational data. BR-GCN models use bi-level attention to learn node embeddings through (1) node-level attention, and (2) relation-level attention. The node-level self-attentional layers use intra-relational graph interactions to learn relation-specific node embeddings using a weighted aggregation of neighborhood features in a sparse subgraph region. The relation-level self-attentional layers use inter-relational graph interactions to learn the final node embeddings using a weighted aggregation of relation-specific node embeddings. The BR-GCN bi-level attention mechanism extends Transformer-based multiplicative attention from the natural language processing (NLP) domain, and Graph Attention Networks (GAT)-based attention, to large-scale heterogeneous graphs (HGs). On node classification, BR-GCN outperforms baselines from 0.29% to 14.95% as a stand-alone model, and on link prediction, BR-GCN outperforms baselines from 0.02% to 7.40% as an auto-encoder model. We also conduct ablation studies to evaluate the quality of BR-GCN's relation-level attention and discuss how its learning of graph structure may be transferred to enrich other graph neural networks (GNNs). Through various experiments, we show that BR-GCN's attention mechanism is both scalable and more effective in learning compared to state-of-the-art GNNs. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 446,640 |
2402.03319 | Physical Reservoir Computing Enabled by Solitary Waves and
Biologically-Inspired Nonlinear Transformation of Input Data | Reservoir computing (RC) systems can efficiently forecast chaotic time series using nonlinear dynamical properties of an artificial neural network of random connections. The versatility of RC systems has motivated further research on both hardware counterparts of traditional RC algorithms and more efficient RC-like schemes. Inspired by the nonlinear processes in a living biological brain and using solitary waves excited on the surface of a flowing liquid film, in this paper we experimentally validate a physical RC system that substitutes the effect of randomness for a nonlinear transformation of input data. Carrying out all operations using a microcontroller with a minimal computational power, we demonstrate that the so-designed RC system serves as a technically simple hardware counterpart to the `next-generation' improvement of the traditional RC algorithm. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | false | false | 426,960 |
2007.13019 | Feedback Loop and Bias Amplification in Recommender Systems | Recommendation algorithms are known to suffer from popularity bias; a few popular items are recommended frequently while the majority of other items are ignored. These recommendations are then consumed by the users, whose reactions are logged and added to the system: what is generally known as a feedback loop. In this paper, we propose a method for simulating users' interaction with the recommenders in an offline setting and study the impact of the feedback loop on the popularity bias amplification of several recommendation algorithms. We then show how this bias amplification leads to several other problems, such as declining aggregate diversity, shifting the representation of users' taste over time, and homogenization of the user experience. In particular, we show that the impact of the feedback loop is generally stronger for users who belong to the minority group. | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | 188,996
1704.06955 | Superadditivity of the Classical Capacity with Limited Entanglement
Assistance | Finding the optimal encoding strategies can be challenging for communication using quantum channels, as classical and quantum capacities may be superadditive. Entanglement assistance can often simplify this task, as the entanglement-assisted classical capacity for any channel is additive, making entanglement across channel uses unnecessary. If the entanglement assistance is limited, the picture is much less clear. Suppose the classical capacity is superadditive; then the classical capacity with limited entanglement assistance could retain superadditivity by continuity arguments. If the classical capacity is additive, it is unknown whether superadditivity can still be developed with limited entanglement assistance. We show this is possible by providing an example. We construct a channel for which the classical capacity is additive, but the classical capacity with limited entanglement assistance is superadditive. This shows that entanglement plays a subtle role in communication, and we still understand very little about it. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 72,264
2108.09993 | Image coding for machines: an end-to-end learned approach | Over recent years, deep learning-based computer vision systems have been applied to images at an ever-increasing pace, oftentimes representing the only type of consumption for those images. Given the dramatic explosion in the number of images generated per day, a question arises: how much better would an image codec targeting machine-consumption perform against state-of-the-art codecs targeting human-consumption? In this paper, we propose an image codec for machines which is neural network (NN) based and end-to-end learned. In particular, we propose a set of training strategies that address the delicate problem of balancing competing loss functions, such as computer vision task losses, image distortion losses, and rate loss. Our experimental results show that our NN-based codec outperforms the state-of-the-art Versatile Video Coding (VVC) standard on the object detection and instance segmentation tasks, achieving -37.87% and -32.90% of BD-rate gain, respectively, while being fast thanks to its compact size. To the best of our knowledge, this is the first end-to-end learned machine-targeted image codec. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 251,767
2501.02194 | Ensemble-based Deep Multilayer Community Search | Multilayer graphs, consisting of multiple interconnected layers, are widely used to model diverse relationships in the real world. A community is a cohesive subgraph that offers valuable insights for analyzing (multilayer) graphs. Recently, there has been an emerging trend focused on searching query-driven communities within multilayer graphs. However, existing methods for multilayer community search are either 1) rule-based, which suffer from structural inflexibility; or 2) learning-based, which rely on labeled data or fail to capture layer-specific characteristics. To address these, we propose EnMCS, an Ensemble-based unsupervised (i.e., label-free) Multilayer Community Search framework. EnMCS contains two key components: HoloSearch, which identifies potential communities in each layer while integrating both layer-shared and layer-specific information, and EMerge, an Expectation-Maximization (EM)-based method that synthesizes the potential communities from each layer into a consensus community. Specifically, HoloSearch first employs a graph-diffusion-based model that integrates three label-free loss functions to learn layer-specific and layer-shared representations for each node. Communities in each layer are then identified based on nodes that exhibit high similarity in layer-shared representations while demonstrating low similarity in layer-specific representations w.r.t. the query nodes. To account for the varying layer-specific characteristics of each layer when merging communities, EMerge models the error rates of layers and the true community as latent variables. It then employs the EM algorithm to simultaneously minimize the error rates of layers and predict the final consensus community through iterative maximum likelihood estimation. Experiments over 10 real-world datasets highlight the superiority of EnMCS in terms of both efficiency and effectiveness. | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | false | 522,389
2107.09802 | Private Alternating Least Squares: Practical Private Matrix Completion
with Tighter Rates | We study the problem of differentially private (DP) matrix completion under user-level privacy. We design a joint differentially private variant of the popular Alternating-Least-Squares (ALS) method that achieves: i) (nearly) optimal sample complexity for matrix completion (in terms of number of items, users), and ii) the best known privacy/utility trade-off both theoretically, as well as on benchmark data sets. In particular, we provide the first global convergence analysis of ALS with noise introduced to ensure DP, and show that, in comparison to the best known alternative (the Private Frank-Wolfe algorithm by Jain et al. (2018)), our error bounds scale significantly better with respect to the number of items and users, which is critical in practical problems. Extensive validation on standard benchmarks demonstrate that the algorithm, in combination with carefully designed sampling procedures, is significantly more accurate than existing techniques, thus promising to be the first practical DP embedding model. | false | false | false | false | false | false | true | false | false | false | false | false | true | false | false | false | false | false | 247,129 |
1408.1600 | Change Impact Analysis Based Regression Testing of Web Services | Reducing the effort required to make changes in web services is one of the primary goals in web service project maintenance and evolution. Normally, functional and non-functional testing of a web service is performed by testing the operations specified in its WSDL. Regression testing is then performed by identifying the changes made thereafter to the web service code and the WSDL. In this thesis, we present a tool-supported approach to perform efficient regression testing of web services. By representing a web service as a directed graph of WSDL elements, we identify and gather the changed portions of the graph and use this information to reduce regression testing effort. Specifically, we identify, categorize, and capture web service testing needs in two different ways, namely, Operationalized Regression Testing of Web Service (ORTWS) and Parameterized Regression Testing of Web Service (PRTWS). Both approaches can be combined to reduce the regression testing effort in a web service project. The proposed approach is prototyped as a tool, named Automatic Web Service Change Management (AWSCM), which helps in selecting the relevant test cases to construct a reduced test suite from the old test suite. We present a few case studies on different web service projects to demonstrate the applicability of the proposed tool. The reduction in effort for regression testing of web services is also estimated. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | true | 35,191
2410.05805 | PostCast: Generalizable Postprocessing for Precipitation Nowcasting via
Unsupervised Blurriness Modeling | Precipitation nowcasting plays a pivotal role in socioeconomic sectors, especially in severe convective weather warnings. Although notable progress has been achieved by approaches mining the spatiotemporal correlations with deep learning, these methods still suffer severe blurriness as the lead time increases, which hampers accurate predictions for extreme precipitation. To alleviate blurriness, researchers explore generative methods conditioned on blurry predictions. However, the pairs of blurry predictions and corresponding ground truth need to be generated in advance, making the training pipeline cumbersome and limiting the generality of generative models within blur modes that appear in training data. By rethinking the blurriness in precipitation nowcasting as a blur kernel acting on predictions, we propose an unsupervised postprocessing method to eliminate the blurriness without the requirement of training with the pairs of blurry predictions and corresponding ground truth. Specifically, we utilize blurry predictions to guide the generation process of a pre-trained unconditional denoising diffusion probabilistic model (DDPM) to obtain high-fidelity predictions with eliminated blurriness. A zero-shot blur kernel estimation mechanism and an auto-scale denoise guidance strategy are introduced to adapt the unconditional DDPM to any blurriness modes varying from datasets and lead times in precipitation nowcasting. Extensive experiments are conducted on 7 precipitation radar datasets, demonstrating the generality and superiority of our method. | false | false | false | false | true | false | false | false | false | false | false | true | false | false | false | false | false | false | 495,927 |
2002.10836 | Gesture recognition with 60GHz 802.11 waveforms | A gesture recognition application over 802.11 ad/y waveforms is developed. Simultaneous slider-control and two-finger switching gestures are detected based on the Golay sequences of the channel estimation fields of the packets. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 165,529
1702.08235 | Variational Inference using Implicit Distributions | Generative adversarial networks (GANs) have given us a great tool to fit implicit generative models to data. Implicit distributions are ones we can sample from easily, and take derivatives of samples with respect to model parameters. These models are highly expressive and we argue they can prove just as useful for variational inference (VI) as they are for generative modelling. Several papers have proposed GAN-like algorithms for inference, however, connections to the theory of VI are not always well understood. This paper provides a unifying review of existing algorithms establishing connections between variational autoencoders, adversarially learned inference, operator VI, GAN-based image reconstruction, and more. Secondly, the paper provides a framework for building new algorithms: depending on the way the variational bound is expressed we introduce prior-contrastive and joint-contrastive methods, and show practical inference algorithms based on either density ratio estimation or denoising. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 68,947 |
2010.04430 | Large-scale randomized experiment reveals machine learning helps people
learn and remember more effectively | Machine learning has typically focused on developing models and algorithms that would ultimately replace humans at tasks where intelligence is required. In this work, rather than replacing humans, we focus on unveiling the potential of machine learning to improve how people learn and remember factual material. To this end, we perform a large-scale randomized controlled trial with thousands of learners from a popular learning app in the area of mobility. After controlling for the length and frequency of study, we find that learners whose study sessions are optimized using machine learning remember the content over $\sim$67% longer than those whose study sessions are generated using two alternative heuristics. Our randomized controlled trial also reveals that the learners whose study sessions are optimized using machine learning are $\sim$50% more likely to return to the app within 4-7 days. | true | false | false | true | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 199,741 |
1206.6409 | Scaling Up Coordinate Descent Algorithms for Large $\ell_1$
Regularization Problems | We present a generic framework for parallel coordinate descent (CD) algorithms that includes, as special cases, the original sequential algorithms Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | true | 16,944 |
2201.01819 | Formal Analysis of Art: Proxy Learning of Visual Concepts from Style
Through Language Models | We present a machine learning system that can quantify fine art paintings with a set of visual elements and principles of art. This formal analysis is fundamental for understanding art, but developing such a system is challenging. Paintings have high visual complexity, and it is also difficult to collect enough training data with direct labels. To resolve these practical limitations, we introduce a novel mechanism, called proxy learning, which learns visual concepts in paintings through their general relation to styles. This framework does not require any visual annotation, but only uses style labels and a general relationship between visual concepts and style. In this paper, we propose a novel proxy model and reformulate four pre-existing methods in the context of proxy learning. Through quantitative and qualitative comparison, we evaluate these methods and compare their effectiveness in quantifying the artistic visual concepts, where the general relationship is estimated by language models: GloVe or BERT. Language modeling is a practical and scalable solution requiring no labeling, but it is inevitably imperfect. We demonstrate that the new proxy model is robust to this imperfection, while the other models are sensitively affected by it. | false | false | false | false | false | false | true | false | true | false | false | true | false | false | false | false | false | false | 274,357
2309.07990 | Leveraging Contextual Information for Effective Entity Salience
Detection | In text documents such as news articles, the content and key events usually revolve around a subset of all the entities mentioned in a document. These entities, often deemed as salient entities, provide useful cues of the aboutness of a document to a reader. Identifying the salience of entities was found helpful in several downstream applications such as search, ranking, and entity-centric summarization, among others. Prior work on salient entity detection mainly focused on machine learning models that require heavy feature engineering. We show that fine-tuning medium-sized language models with a cross-encoder style architecture yields substantial performance gains over feature engineering approaches. To this end, we conduct a comprehensive benchmarking of four publicly available datasets using models representative of the medium-sized pre-trained language model family. Additionally, we show that zero-shot prompting of instruction-tuned language models yields inferior results, indicating the task's uniqueness and complexity. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 391,988 |
2303.13760 | Multiple Access Design for Symbiotic Radios: Facilitating Massive IoT
Connections with Cellular Networks | Symbiotic radio (SR) has emerged as a spectrum- and energy-efficient paradigm to support massive Internet of Things (IoT) connections. Two multiple access schemes are proposed in this paper to facilitate the massive IoT connections using the cellular network based on the SR technique, namely, the simultaneous access (SA) scheme and the selection diversity access (SDA) scheme. In the SA scheme, the base station (BS) transmits information to the receiver while multiple IoT devices transmit their information simultaneously by passively backscattering the BS signal to the receiver, while in the SDA scheme, only the IoT device with the strongest backscatter link transmits information to the receiver. In both of the schemes, the receiver jointly decodes the information from the BS and the IoT devices. To evaluate the above two schemes, in this paper, we have derived the closed-form expressions of the ergodic rates and the outage probabilities for the cellular and IoT transmissions. Finally, numerical results are provided to verify the theoretical analysis and compare the two proposed multiple access schemes. When the number of IoT devices is small, the SDA scheme is more appealing since it can significantly reduce the computational complexity while achieving equivalent performance to the SA scheme. When the number of IoT devices is large, the SA scheme is preferable since it guarantees a significantly better rate performance and a lower outage probability. | false | false | false | false | false | false | false | false | false | true | true | false | false | false | false | false | false | false | 353,807 |
2402.01209 | Solution of the Probabilistic Lambert's Problem: Optimal Transport
Approach | The deterministic variant of the Lambert's problem was posed by Lambert in the 18th century and its solution for conic trajectory has been derived by many, including Euler, Lambert, Lagrange, Laplace, Gauss and Legendre. The solution amounts to designing velocity control for steering a spacecraft from a given initial to a given terminal position subject to gravitational potential and flight time constraints. In recent years, a probabilistic variant of the Lambert's problem has received attention in the aerospace community where the endpoint position constraints are softened to endpoint joint probability distributions over the respective positions. Such probabilistic specifications account for the estimation errors, modeling uncertainties, etc. Building on a deterministic optimal control reformulation via analytical mechanics, we show that the probabilistic Lambert's problem is a generalized dynamic optimal mass transport problem where the gravitational potential plays the role of an additive state cost. This allows us to rigorously prove the existence-uniqueness of the solution for the probabilistic Lambert problem both with and without process noise. In the latter case, the problem and its solution correspond to a generalized Schr\"odinger bridge, much like how classical Schrodinger bridge can be seen as stochastic regularization of the optimal mass transport. We deduce the large deviation principle enjoyed by the Lambertian Schr\"odinger bridge. Leveraging these newfound connections, we design a computational algorithm to illustrate the nonparametric numerical solution of the probabilistic Lambert's problem. | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | 425,915 |
2006.12618 | A Bayesian incorporated linear non-Gaussian acyclic model for multiple
directed graph estimation to study brain emotion circuit development in
adolescence | Emotion perception is essential to affective and cognitive development, which involves distributed brain circuits. The ability to identify emotions begins in infancy and continues to develop throughout childhood and adolescence. Understanding the development of the brain's emotion circuitry may help us explain the emotional changes observed during adolescence. Our previous study delineated the trajectory of brain functional connectivity (FC) from late childhood to early adulthood during emotion identification tasks. In this work, we endeavour to deepen our understanding from association to causation. We propose a Bayesian incorporated linear non-Gaussian acyclic model (BiLiNGAM), which incorporates our previous association model into the prior estimation pipeline. In particular, it can jointly estimate multiple directed acyclic graphs (DAGs) for multiple age groups at different developmental stages. Simulation results indicated more stable and accurate performance over various settings, especially when the sample size was small (high-dimensional cases). We then applied it to the analysis of real data from the Philadelphia Neurodevelopmental Cohort (PNC). This included 855 individuals aged 8-22 years who were divided into five different adolescent stages. Our network analysis revealed the development of emotion-related intra- and inter-modular connectivity and pinpointed several emotion-related hubs. We further categorized the hubs into two types: in-hubs and out-hubs, serving as centers for receiving and distributing information, respectively. Several unique developmental hub structures and group-specific patterns were also discovered. Our findings help provide a causal understanding of emotion development in the human brain. | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | 183,643
1909.04142 | DaTscan SPECT Image Classification for Parkinson's Disease | Parkinson's Disease (PD) is a neurodegenerative disease that currently does not have a cure. In order to facilitate disease management and reduce the speed of symptom progression, early diagnosis is essential. The current clinical, diagnostic approach is to have radiologists perform human visual analysis of the degeneration of dopaminergic neurons in the substantia nigra region of the brain. Clinically, dopamine levels are monitored through observing dopamine transporter (DaT) activity. One method of DaT activity analysis is performed with the injection of an Iodine-123 fluoropropyl (123I-FP-CIT) tracer combined with single photon emission computerized tomography (SPECT) imaging. The tracer illustrates the region of interest in the resulting DaTscan SPECT images. Human visual analysis is slow and vulnerable to subjectivity between radiologists, so the goal was to develop an introductory implementation of a deep convolutional neural network that can objectively and accurately classify DaTscan SPECT images as Parkinson's Disease or normal. This study illustrates the approach of using a deep convolutional neural network and evaluates its performance on DaTscan SPECT image classification. The data used in this study was obtained through a database provided by the Parkinson's Progression Markers Initiative (PPMI). The deep neural network in this study utilizes the InceptionV3 architecture, 1st runner up in the 2015 ImageNet Large Scale Visual Recognition Competition (ILSVRC), as a base model. A custom, binary classifier block was added on top of this base. In order to account for the small dataset size, a ten fold cross validation was implemented to evaluate the model's performance. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 144,707 |
2208.01849 | Coarse-to-Fine Knowledge-Enhanced Multi-Interest Learning Framework for
Multi-Behavior Recommendation | Multi-types of behaviors (e.g., clicking, adding to cart, purchasing, etc.) widely exist in most real-world recommendation scenarios, which are beneficial to learn users' multi-faceted preferences. As dependencies are explicitly exhibited by the multiple types of behaviors, effectively modeling complex behavior dependencies is crucial for multi-behavior prediction. The state-of-the-art multi-behavior models learn behavior dependencies indistinguishably with all historical interactions as input. However, different behaviors may reflect different aspects of user preference, which means that some irrelevant interactions may play as noises to the target behavior to be predicted. To address the aforementioned limitations, we introduce multi-interest learning to the multi-behavior recommendation. More specifically, we propose a novel Coarse-to-fine Knowledge-enhanced Multi-interest Learning (CKML) framework to learn shared and behavior-specific interests for different behaviors. CKML introduces two advanced modules, namely Coarse-grained Interest Extracting (CIE) and Fine-grained Behavioral Correlation (FBC), which work jointly to capture fine-grained behavioral dependencies. CIE uses knowledge-aware information to extract initial representations of each interest. FBC incorporates a dynamic routing scheme to further assign each behavior among interests. Additionally, we use the self-attention mechanism to correlate different behavioral information at the interest level. Empirical results on three real-world datasets verify the effectiveness and efficiency of our model in exploiting multi-behavior data. Further experiments demonstrate the effectiveness of each module and the robustness and superiority of the shared and specific modelling paradigm for multi-behavior data. | false | false | false | false | true | true | false | false | false | false | false | false | false | false | false | false | false | false | 311,295 |
2304.10075 | Multiscale Representation for Real-Time Anti-Aliasing Neural Rendering | The rendering scheme in neural radiance field (NeRF) is effective in rendering a pixel by casting a ray into the scene. However, NeRF yields blurred rendering results when the training images are captured at non-uniform scales, and produces aliasing artifacts if the test images are taken in distant views. To address this issue, Mip-NeRF proposes a multiscale representation as a conical frustum to encode scale information. Nevertheless, this approach is only suitable for offline rendering since it relies on integrated positional encoding (IPE) to query a multilayer perceptron (MLP). To overcome this limitation, we propose mip voxel grids (Mip-VoG), an explicit multiscale representation with a deferred architecture for real-time anti-aliasing rendering. Our approach includes a density Mip-VoG for scene geometry and a feature Mip-VoG with a small MLP for view-dependent color. Mip-VoG encodes scene scale using the level of detail (LOD) derived from ray differentials and uses quadrilinear interpolation to map a queried 3D location to its features and density from two neighboring downsampled voxel grids. To our knowledge, our approach is the first to offer multiscale training and real-time anti-aliasing rendering simultaneously. We conducted experiments on multiscale datasets, and the results show that our approach outperforms state-of-the-art real-time rendering baselines. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 359,282 |
1903.02165 | Camera Obscurer: Generative Art for Design Inspiration | We investigate using generated decorative art as a source of inspiration for design tasks. Using a visual similarity search for image retrieval, the Camera Obscurer app enables rapid searching of tens of thousands of generated abstract images of various types. The seed for a visual similarity search is a given image, and the retrieved generated images share some visual similarity with the seed. Implemented in a hand-held device, the app empowers users to use photos of their surroundings to search through the archive of generated images and other image archives. Being abstract in nature, the retrieved images supplement the seed image rather than replace it, providing different visual stimuli including shapes, colours, textures and juxtapositions, in addition to affording their own interpretations. This approach can therefore be used to provide inspiration for a design task, with the abstract images suggesting new ideas that might give direction to a graphic design project. We describe a crowdsourcing experiment with the app to estimate user confidence in retrieved images, and we describe a pilot study where Camera Obscurer provided inspiration for a design task. These experiments have enabled us to describe future improvements, and to begin to understand sources of visual inspiration for design tasks. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | true | false | true | 123,441 |
1909.00531 | Improving Context-aware Neural Machine Translation with Target-side Context | In recent years, several studies on neural machine translation (NMT) have attempted to use document-level context by using a multi-encoder and two attention mechanisms to read the current and previous sentences to incorporate the context of the previous sentences. These studies concluded that the target-side context is less useful than the source-side context. However, we considered that the reason why the target-side context is less useful lies in the architecture used to model these contexts. Therefore, in this study, we investigate how the target-side context can improve context-aware neural machine translation. We propose a weight sharing method wherein NMT saves decoder states and calculates an attention vector using the saved states when translating a current sentence. Our experiments show that the target-side context is also useful if we plug it into NMT as the decoder state when translating a previous sentence. | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | 143,659 |
2409.11307 | GS-Net: Generalizable Plug-and-Play 3D Gaussian Splatting Module | 3D Gaussian Splatting (3DGS) integrates the strengths of primitive-based representations and volumetric rendering techniques, enabling real-time, high-quality rendering. However, 3DGS models typically overfit to single-scene training and are highly sensitive to the initialization of Gaussian ellipsoids, heuristically derived from Structure from Motion (SfM) point clouds, which limits both generalization and practicality. To address these limitations, we propose GS-Net, a generalizable, plug-and-play 3DGS module that densifies Gaussian ellipsoids from sparse SfM point clouds, enhancing geometric structure representation. To the best of our knowledge, GS-Net is the first plug-and-play 3DGS module with cross-scene generalization capabilities. Additionally, we introduce the CARLA-NVS dataset, which incorporates additional camera viewpoints to thoroughly evaluate reconstruction and rendering quality. Extensive experiments demonstrate that applying GS-Net to 3DGS yields a PSNR improvement of 2.08 dB for conventional viewpoints and 1.86 dB for novel viewpoints, confirming the method's effectiveness and robustness. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 489,100 |
1304.2347 | Process, Structure, and Modularity in Reasoning with Uncertainty | Computational mechanisms for uncertainty management must support interactive and incremental problem formulation, inference, hypothesis testing, and decision making. However, most current uncertainty inference systems concentrate primarily on inference, and provide no support for the larger issues. We present a computational approach to uncertainty management which provides direct support for the dynamic, incremental aspect of this task, while at the same time permitting direct representation of the structure of evidential relationships. At the same time, we show that this approach responds to the modularity concerns of Heckerman and Horvitz [Heck87]. This paper emphasizes examples of the capabilities of this approach. Another paper [D'Am89] details the representations and algorithms involved. | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | false | false | false | 23,655 |
1711.07240 | MegDet: A Large Mini-Batch Object Detector | The improvements in recent CNN-based object detection works, from R-CNN [11], Fast/Faster R-CNN [10, 31] to recent Mask R-CNN [14] and RetinaNet [24], mainly come from new network, new framework, or novel loss design. But mini-batch size, a key factor in the training, has not been well studied. In this paper, we propose a Large MiniBatch Object Detector (MegDet) to enable the training with much larger mini-batch size than before (e.g. from 16 to 256), so that we can effectively utilize multiple GPUs (up to 128 in our experiments) to significantly shorten the training time. Technically, we suggest a learning rate policy and Cross-GPU Batch Normalization, which together allow us to successfully train a large mini-batch detector in much less time (e.g., from 33 hours to 4 hours), and achieve even better accuracy. The MegDet is the backbone of our submission (mmAP 52.5%) to COCO 2017 Challenge, where we won the 1st place of Detection task. | false | false | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | 84,946 |
2104.14506 | Explainable AI For COVID-19 CT Classifiers: An Initial Comparison Study | Artificial Intelligence (AI) has made leapfrogs in development across all the industrial sectors especially when deep learning has been introduced. Deep learning helps to learn the behaviour of an entity through methods of recognising and interpreting patterns. Despite its limitless potential, the mystery is how deep learning algorithms make a decision in the first place. Explainable AI (XAI) is the key to unlocking AI and the black-box for deep learning. XAI is an AI model that is programmed to explain its goals, logic, and decision making so that the end users can understand. The end users can be domain experts, regulatory agencies, managers and executive board members, data scientists, users that use AI, with or without awareness, or someone who is affected by the decisions of an AI model. Chest CT has emerged as a valuable tool for the clinical diagnostic and treatment management of the lung diseases associated with COVID-19. AI can support rapid evaluation of CT scans to differentiate COVID-19 findings from other lung diseases. However, how these AI tools or deep learning algorithms reach such a decision and which are the most influential features derived from these neural networks with typically deep layers are not clear. The aim of this study is to propose and develop XAI strategies for COVID-19 classification models with an investigation of comparison. The results demonstrate promising quantification and qualitative visualisations that can further enhance the clinician's understanding and decision making with more granular information from the results given by the learned XAI models. | false | false | false | false | false | false | true | false | false | false | false | true | false | false | false | false | false | false | 232,842 |
1203.4867 | Multi-hop Analog Network Coding: An Amplify-and-Forward Approach | In this paper, we study the performance of an amplify-and-forward (AF) based analog network coding (ANC) relay scheme in a multi-hop wireless network under individual power constraints. In the first part, a unicast scenario is considered. The problem of finding the maximum achievable rate is formulated as an optimization problem. Rather than solving this non-concave maximization problem, we derive upper and lower bounds for the optimal rate. A cut-set like upper bound is obtained in a closed form for a layered relay network. A pseudo-optimal AF scheme is developed for a two-hop parallel network, which is different from the conventional scheme with all amplification gains chosen as the maximum possible values. The conditions under which either the novel scheme or the conventional one achieves a rate within half a bit of the upper bound are found. Then we provide an AF-based multi-hop ANC scheme with the two schemes for a layered relay network. It is demonstrated that the lower bound of the optimal rate can asymptotically achieve the upper bound when the network is in the generalized high-SNR regime. In the second part, the optimal rate region for a two-hop multiple access channel (MAC) via AF relays is investigated. In a similar manner, we first derive an outer bound for it and then focus on designing low complexity AF-based ANC schemes for different scenarios. Several examples are given and the numerical results indicate that the achievable rate region of the ANC schemes can perform close to the outer bound. | false | false | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | 15,063 |
1809.04317 | Action Representations in Robotics: A Taxonomy and Systematic Classification | Understanding and defining the meaning of "action" is substantial for robotics research. This becomes utterly evident when aiming at equipping autonomous robots with robust manipulation skills for action execution. Unfortunately, to this day we still lack both a clear understanding of the concept of an action and a set of established criteria that ultimately characterize an action. In this survey we thus first review existing ideas and theories on the notion and meaning of action. Subsequently we discuss the role of action in robotics and attempt to give a seminal definition of action in accordance with its use in robotics research. Given this definition we then introduce a taxonomy for categorizing action representations in robotics along various dimensions. Finally, we provide a systematic literature survey on action representations in robotics where we categorize relevant literature along our taxonomy. After discussing the current state of the art we conclude with an outlook towards promising research directions. | false | false | false | false | false | false | false | true | false | false | false | false | false | false | false | false | false | false | 107,535 |